I have an existing table (see below).
----
[FormCode] [varchar] (4) NULL ,
[FiscalYear] [char] (4) NULL
----
I want to add the column below after the [FormCode] when my SPROC runs.
----
[FiscalMonth] [char] (2) NULL
----
Any ideas would be a big help.
TIF--use this to add the column
alter table MyTable
add FiscalMonth char (2)
go
--and this to drop the column
alter table MyTable
drop column FiscalMonth
go
Cheers|||Thanks for your response, however, I'm actually after adding a column in between existing columns.
So in this case, my new column FISCALMONTH will be added between FORMCODE and FISCALYEAR.
Tnx|||Why is it important to have the ordinal position of your column correct?|||The quickest and easiest (at least in most cases) way to "insert" columns into a table is to put the columns wherever they fall and construct a view to order them the way you want them.
In relational algebra, columns have no order. In relational databases, the order of columns should be considered an anomaly, not an attribute.
A view on the other hand is a template for a result set, and columns do have an order in a result set.
-PatP|||I don't understand either... Why do you need them in a specific order?|||Did you ever get an answer to this? I know that you can create a view to order your columns but it would be nice to do this in the table. No, it doesn't matter from a DB perspective, but it is cleaner if you are dealing with many columns.|||Why don't you go into design view of a table in Enterprise Manager, make your changes, and save the script?
I would also point out that ALTER TABLE anything in SQL Server produces inefficiencies at the page level...
Read Nigel's great article on the subject
http://www.mindsdoor.net/SQLAdmin/AlterTableProblems.html
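A minimal sketch of the add-column-plus-view approach suggested above (MyTable and its columns come from the thread; the view name vMyTable is an assumption for illustration):

```sql
-- Add the new column; physically it lands at the end of the table.
ALTER TABLE MyTable
ADD FiscalMonth char(2) NULL
GO
-- Present the columns in the desired order through a view.
-- vMyTable is an assumed name, not from the thread.
CREATE VIEW vMyTable AS
SELECT FormCode, FiscalMonth, FiscalYear
FROM MyTable
GO
```

Since column order only matters in a result set, the view gives callers the ordering they expect without rebuilding the table.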
Monday, March 26, 2012
Insert With Text Based on Top Row?
I have a table with 3 columns, a varchar, a text, and an int64 (for
PK).
I need to take the text value from highest-numbered int64'd row, and
insert a new row with a new varchar value.
CREATE TABLE [TestTbl] (
[MyVarChar] [varchar] (128) COLLATE
SQL_Latin1_General_CP1_CI_AS NULL ,
[MyText] [text] COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[TestTbl_Id] [int] IDENTITY (1, 2) NOT NULL ,
)
Insert into TestTbl(MyVarChar,MyText) Values ('a','aaaa')
-- This does not work, how should I word it?
insert into TestTbl( MyVarChar,MyText)
values ( 'bb' , (select top 1 MyText from TestTbl order by TestTbl_Id
desc) )
Thanks.|||> insert into TestTbl( MyVarChar,MyText)
> values ( 'bb' , (select top 1 MyText from TestTbl order by TestTbl_Id
> desc) )
How about :
INSERT TestTbl(MyVarChar, MyText)
SELECT TOP 1 'bb', MyText
FROM TestTbl
ORDER BY TestTbl_ID DESC
Or maybe you could explain why you need to redundantly repeat the same text
from the previous "most recently inserted" row.|||Thanks for the proper syntax.
Column MyVarChar contains a version information string and MyText
contains an XML document. When the version of the front-end
application is changed, we want to copy the data to match the new
version while keeping the older version in "parallel existence". Then
we can compare program operation between different front ends (by
doing a select where MyVarChar='versionstring') and if things look
good we can just use the newer one.
Thanks again for the help.
On Mon, 12 Sep 2005 13:40:21 -0400, "Aaron Bertrand [SQL Server MVP]"
<ten.xoc@.dnartreb.noraa> wrote:
>How about :
>INSERT TestTbl(MyVarChar, MyText)
> SELECT TOP 1 'bb', MyText
> FROM TestTbl
> ORDER BY TestTbl_ID DESC
>Or maybe you could explain why you need to redundantly repeat the same text
>from the previous "most recently inserted" row.
>
Friday, March 9, 2012
Insert running slow
Are inserts really slow in 2005 or am I doing something stupid?
Here's the table:
CREATE TABLE [dbo].[Tickets](
[ticket] [varchar](20) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[data] [varchar](1000) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[added] [datetime] NOT NULL CONSTRAINT [DF_Tickets_added] DEFAULT
(getdate()),
[lastUpdated] [datetime] NOT NULL,
CONSTRAINT [PK_Tickets] PRIMARY KEY CLUSTERED
(
[ticket] ASC
)WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
I wrote this stored procedure:
ALTER PROCEDURE [dbo].[CreateTicket]
@ticket varchar(20) OUTPUT,
@data varchar(2000)
AS
DECLARE @key varchar(20)
DECLARE @len int
DECLARE @added bit
DECLARE @cypher varchar(62)
SET NOCOUNT ON;
SET @added=0
-- 62 characters: a-z, A-Z, 0-9
SET @cypher='abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
WHILE @added=0
BEGIN
SELECT @key='', @len=20
WHILE @len>0
BEGIN
-- The following line is the SLOW one!!!
SET @key = @key + SUBSTRING(@cypher, CAST(FLOOR(RAND()*62) AS int)+1,1)
SET @len = @len -1
END
IF NOT EXISTS(SELECT 1 FROM Tickets WHERE ticket=@key)
BEGIN
INSERT INTO Tickets (ticket, data, lastUpdated) VALUES(@key, @data, GETDATE())
SET @ticket = @key
SET @added = 1
END
END
And then used this to test its speed:
DECLARE @ticket varchar(20)
DECLARE @sec datetime
DECLARE @cnt int
TRUNCATE TABLE Tickets
SET @cnt = 0
SET @sec = DATEADD(second, 1, GETDATE())
WHILE GETDATE()<@sec
BEGIN
EXEC CreateTicket @ticket OUTPUT, 'this is a test'
SET @cnt=@cnt + 1
END
PRINT @cnt
When I run this on SQL 2000, I get roughly 3000 records a second. When I run it against 2005 I get roughly 160 records per second! The statement that is taking all the time in 2005 is the insert statement!
On 2005 if I comment it out I can execute 8,600ish loops per second. If it isn't commented out I run 160ish.
On 2000 if I comment it out I can execute 5,900 loops per second. If it isn't commented out I run 3,000ish.
Is inserting really that expensive or am I missing some knob I forgot to turn?|||Never mind. It appears there's something wrong with the server I was testing on. Testing on another server I got 5,600ish inserts per second. What's really weird though is the box that's performing slowly is a faster box than either of the other two with faster disks. Guess it's time to reinstall :)|||Before reinstalling, I would check perfmon and profiler and see what is
actually taking so long. It might be something easy to fix (or it might be
that reinstalling would cause the same performance problems.) A reinstall
might be in order, but unless that is really easy to do for you, it is
probably just something in how something is set up.
----
Louis Davidson - http://spaces.msn.com/members/drsql/
SQL Server MVP
"Arguments are to be avoided: they are always vulgar and often convincing."
(Oscar Wilde)
"Larry Charlton" <LarryCharlton@.discussions.microsoft.com> wrote in message
news:A378F5F2-63BC-4FE0-AD6D-DFBB2952BB06@.microsoft.com...
> Never mind. It appears there's something wrong with the server I was
> testing
> on. Testing on another server I got 5,600ish inserts per second. What's
> really weird though is the box that's performing slowing is a faster box
> than
> either of the other two with faster disks. Guess it's time to reinstall
> :)
>
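As an aside, not from the thread: a common way to sidestep the character-by-character RAND() loop flagged as slow above is to derive the key from NEWID(). This is only a sketch; the variable name mirrors the posted procedure:

```sql
-- Hypothetical alternative to the twenty-iteration SUBSTRING/RAND() loop:
-- take 20 characters from a NEWID() value instead.
DECLARE @key varchar(20)
SET @key = LEFT(REPLACE(CAST(NEWID() AS varchar(36)), '-', ''), 20)
-- @key is now 20 hex characters; collisions are still possible,
-- so the EXISTS check in the procedure would remain.
```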
Wednesday, March 7, 2012
INSERT Record through a view
Is is possible to insert a record through a view. If so, how?
USE Northwind
GO
CREATE TABLE tbForms (
FormID INT IDENTITY (1,1) NOT NULL,
Form varchar (100) NOT NULL
)
GO
ALTER TABLE tbForms
ADD CONSTRAINT tbForms_pk PRIMARY KEY (FormID)
GO
CREATE TABLE tbDoubleTeeForms (
fkFormID INT NOT NULL,
Form varchar(100) NOT NULL,
Width FLOAT,
Height FLOAT,
Flange FLOAT,
Leg FLOAT,
LegCount INT
)
GO
ALTER TABLE tbDoubleTeeForms
ADD CONSTRAINT tbDoubleTeeForms_pk PRIMARY KEY (fkFormID)
GO
ALTER TABLE tbDoubleTeeForms
ADD CONSTRAINT tbDoubleTeeForms_fk FOREIGN KEY (fkFormID)
REFERENCES tbForms (FormID)
GO
CREATE TABLE tbFlatPanelForms (
fkFormID INT NOT NULL,
Form varchar(100) NOT NULL,
Width FLOAT,
HEIGHT FLOAT
)
GO
ALTER TABLE tbFlatPanelForms
ADD CONSTRAINT tbFlatPanelForms_pk PRIMARY KEY (fkFormID)
GO
ALTER TABLE tbFlatPanelForms
ADD CONSTRAINT tbFlatPanelForms_fk FOREIGN KEY (fkFormID)
REFERENCES tbForms (FormID)
GO
CREATE VIEW MyProducts AS
SELECT fkFormID, Form FROM tbDoubleTeeForms UNION ALL
SELECT fkFormID, Form FROM tbFlatPanelForms
GO
-- How can I insert a new record, the pk of the forms table is identity.
-- Can this be done?
INSERT INTO MyProducts (Form)
VALUES ('My First Entry')
GO
SELECT * FROM MyProducts
GO
DROP VIEW MyProducts
GO
DROP TABLE tbFlatPanelForms
GO
DROP TABLE tbDoubleTeeForms
GO
DROP TABLE tbForms
GO
Mike B|||I think it's impossible to do with a view with a union
in your case you would like to insert data into 3 tables
maybe try inserting data into 2 tables and a trigger to put data into the 3rd table|||you can insert into a view but you can only affect one table.
so in the case of unions this is not possible
BUT......
you can however use an instead of trigger to check for which table the insert is going to and then instead of inserting through the view, you insert directly to the correct table.|||BOL:
Updatable Partitioned Views
If a local or distributed partitioned view is not updatable, it can serve only as a read-only copy of the original table. An updatable partitioned view can exhibit all the capabilities of the original table.
A view is considered an updatable partitioned view if:
The view is a set of SELECT statements whose individual result sets are combined into one using the UNION ALL statement. Each individual SELECT statement references one SQL Server base table. The table can be either a local table or a linked table referenced using a four-part name, the OPENROWSET function, or the OPENDATASOURCE function (you cannot use an OPENDATASOURCE or OPENROWSET function that specifies a pass-through query).
The view will not be updatable if a trigger or cascading update or delete is defined on one or more member tables.|||rdjabarov
no partition mentioned. so went with the conservative option
hey
how about some liquor this friday?|||Scott,
I am accompanying my daughter's class for the trip to NASA in Houston this evening. We're coming back on Friday night. But I hope it's gonna be shortly after noon, not at night. Will let you know.|||Originally posted by rdjabarov
Scott,
I am accompanying my daughter's class for the trip to NASA in Houston this evening. We're coming back on Friday night. But I hope it's gonna be shortly after noon, not at night. Will let you know.
Now THAT's a road-trip! Funny, driving across Texas takes a lot longer than driving across Illinois, doesn't it?
-PatP|||Hell...he'll be still backing out of his driveway by the time I get across NJ
:D
and a shameless 2500th post...
And you should look into partitioned views...the constraints have to be very specific...
but updating the base table is the best performing method...
for the view, the optimizer will still look at all tables in the view...|||Originally posted by Brett Kaiser
and a shameless 2500th post...
Uff da! That's a lot of postings! Congratulations.
I still think you should have posted #2500 into the Yak Corral!
-PatP|||Originally posted by Pat Phelan
Now THAT's a road-trip! Funny, driving across Texas takes a lot longer than driving across Illinois, doesn't it?
-PatP
Talking about long road trips .. Three years ago I drove all the way from the east coast to the west coast .. (NC to CA) .. and man! .. I thought it took me an eternity to drive across Texas!|||Yeah, I had a car like that once...
;)
-PatP|||northern florida absolutely sucks
normally when you drive you will gauge how long you have to go by the # of exits
for example when you get on a highway and your directions say to get off the highway at exit 120 and you are at exit 100.
normally that should take no time
but in northern FL the exits are 20 to 30 miles apart.
eternity ensues|||Originally posted by GDMI
Talking about long road trips .. Three years ago I drove all the way from the east coast to the west coast .. (NC to CA) .. and man! .. I thought it took me an eternity to drive across Texas!
Over the Christmas holidays I drove a Plymouth Breeze from Windsor Ontario Canada -> South Padre Island, Texas -> Orlando Florida -> Back to Windsor
Just under 7000 kilometers if I remember correctly.
Great time though!
Mike B|||Originally posted by Ruprect
northern florida absolutely sucks
normally when you drive you will gauge how long you have to go by the # of exits
for example when you get on a highway and your directions say to get off the highway at exit 120 and you are at exit 100.
normally that should take no time
but in northern FL the exits are 20 to 30 miles apart.
eternity ensues
The best I can come up with is "Well duh!"
Why put the exits closer together? Who would want to get off? If you think that the highway inhales vigorously, you should have tried getting off the highway somewhere in northern Florida!
-PatP
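For what it's worth, the INSTEAD OF trigger idea from earlier in the thread might be sketched like this against the tbForms/tbDoubleTeeForms schema above. The routing is an illustrative assumption (it sends every row to tbDoubleTeeForms and assumes single-row inserts); a real trigger would need a rule to pick the correct child table:

```sql
-- Sketch only: intercept inserts on the MyProducts view and route
-- them to the base tables. Assumes one row per INSERT statement.
CREATE TRIGGER trMyProducts_Insert ON MyProducts
INSTEAD OF INSERT
AS
BEGIN
    -- Parent row first, so the identity FormID is generated.
    INSERT INTO tbForms (Form)
    SELECT Form FROM inserted

    -- Illustrative routing: everything goes to tbDoubleTeeForms here.
    INSERT INTO tbDoubleTeeForms (fkFormID, Form)
    SELECT SCOPE_IDENTITY(), Form FROM inserted
END
GO
```

Note that the caller would likely still have to supply a value for fkFormID in the view's column list (the trigger simply ignores it), since that view column is not nullable.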
Insert Question
I have found a proc that reads as follows:
CREATE PROCEDURE InsertArea
@.AreaDescr as varchar(50)
as
insert into tblAreas
select isnull(max(AreaID)+1,1), @.AreaDescr
from tblAreas
GO
My question: tblAreas is a simple lookup table but why would I use this method of managing the AreaID instead of just inserting the record using an Identity column and being done with it? So I am confused on whether there is a hidden secret here or if the person wasn't aware of the Identity field = not knowing what they were doing? All of the tables in this database are like this as well, not just an isolated few.
Thanks for any input.
Greg|||Just use an identity column, if you can. This kind of logic is leftover from either an old database engine, or an old database designer.|||Generally speaking, I agree with blindman. The only time I would use this type of process is when the "business logic" dictates that there can not be missing numbers, which as we all know can happen with an identity column.|||Any logic that relies upon the relative values of a surrogate key is not business logic. It's just bad logic.|||I respect your opinion. How would you populate a field that needed to increment by one, and could not skip numbers? Say, a check number field or something like that.|||How would you populate a field that needed to increment by one, and could not skip numbers? Say, a check number field or something like that.
if that is indeed a requirement, then i would pre-populate the numbers manually
in other words, if you issue a cheque book with cheques numbered 001 through 150, then insert those numbers into (some) table and indicate that they have been issued but not disbursed|||if that is indeed a requirement, then i would pre-populate the numbers manually
in other words, if you issue a cheque book with cheques numbered 001 through 150, then insert those numbers into (some) table and indicate that they have been issued but not disbursed
Huh? I am not a big fan of identity either but can you explain the rationale for this for us lesser mortals. Pre-populate? How do you make this scalable? A really big table with empty records waiting to be filled in? Do you add so many empty records a day? What if you go over your set amount? It's early, am I misreading this?|||Huh? I am not a big fan of identity either but can you explain the rationale for this for us lesser mortals.
please, sean, that's inflammatory language, let's not get into another flamefest
buddy asked a question, i told him one way how, and you're questioning why
it's not a "why" question, just accept the requirement to generate a sequence of numbers with absolutely no gaps, and let's see your suggestion for this|||I did not think it was inflamatory at all. I was asking a question.
~"The IT world can be harsh" - Rudy from the other day.
Any time you poke fun at me it can be justified. When I ask a legit question, it's a flame war.
And you did not answer me.
First I would ask this guy why it is a requirement that there are no missing numbers. I suspect deep down there is some suspect logic here. Then I would ask him how this works out if one of these rows are deleted. Does he repopulate this column and re-establish his relationships each time? The whole thing is a bad idea.|||I think it was more of an inphlegmatory statement.
I like Rudy's idea of prepopulating check numbers when they are issued, and then complete them when they are disbursed. Remember, even check numbers these days are not always sequential. Sometimes I write checks from two different sets alternately, and then there are counter-checks and electronic checks too.
But I also suspect, like Sean, that if there is a need for a gapless incrementing value then there is a flaw in the application concept.|||Get the hose...
Damn.
us mere mortals...Sean, you still smarting from the pad left with zeroes thread?
The answer to this thread is "Wake up and rework your model"
Plus the sproc he "found" is a poor construction to replace identity. It shows a lack of foresight in a multiuser environment, and can get hosed easily, and rather quickly.
And the check analogy, what if I cancel a check..bad analogy too, but it is a "missing" number in a sequence, at least from the bank's perspective.
And if you want a non-identity, Identity column
Try this
http://weblogs.sqlteam.com/brettk/archive/2004/06/29/1687.aspx|||And the check analogy, what if I cancel a check..bad analogy too, but it is a "missing" number in a sequence, at least from the bank's perspective.
no, it isn't, it's an excellent example
and no, it isn't missing at all, it's just cancelled
it's the same situation joe celko refers to in several of his examples -- what did we do in the old days of pre-printed order forms, when you spilled coffee on an order form, you couldn't use that one, but you couldn't just destroy it, you have to account for that order number in the order number sequence somehow
just saying "rework the model" does not really address the business requirement adequately
rework it how, exactly?|||just saying "rework the model" does not really address the business requirement adequatelySo when the business requirement is flawed, then we fix it with code? I usually go back to the Business Analyst and send them back to the project sponsor to fix the underlying problem... I rarely fix a bad business process by adding code to it.
-PatP|||I rarely fix a bad business process by adding code to it.geez you guys are stubborn
okay, let's suppose neil diamond has personally autographed a series of 100 gold-plated cds of "margaritaville" and these are now up for sale
your job is to record who purchased each one
you need to assign a "record number" (pun partially intentional) numbered 001 through 100 to each of these discs
you're going to go back to the sponsor and tell them this is a bad business decision? tell them you can't assign a sequential number because it might have gaps and you don't know how to handle them?
sheesh!!!|||Ok, what happens when one of the records (pun partially intended again) turns out to be non-readable and the purchaser rejects the record?|||then you update the record (heh) and mark it as non-readable or other such status
the point is, you cannot delete it, you must account for that number|||What if I don't use a sequential number and I use a GUID?
I think you're mixing apples and oranges here....
And if I was worried about the sequence of things, I think I'd rather use datetime
Has the original poster's head exploded yet?
And remember, yesterday was hump day, so if you missed out, it's your own fault...|||What if I don't use a sequential number and I use a GUID?
I think you're mixing apples and oranges here....no, i think you are missing the point
your client says "i want these 100 discs numbered as 001 through 100"
and you come back with "okay, let's use a GUID for that!"
i feel like asking "have you ever actually worked with clients?" but that would be rude and demeaning
the point is not what you would rather use, the point is what would you actually use to record the numbers 001 through 100
i don't see why everybody's having such a hard time with that concept|||In this situation the number of the record would be a field in the record or a property of my object but it would not be my primary key or record identifier.
your client says "i want these 100 discs numbered as 001 through 100"
and you come back with "okay, let's use a GUID for that!"
i feel like asking "have you ever actually worked with clients?" but that would be rude and demeaning
the point is not what you would rather use, the point is what would you actually use to record the numbers 001 through 100
i don't see why everybody's having such a hard time with that concept
Well you'd have to try harder to get me pissed off.
I believe the discussion was about the relative offset of a number and its relationship to the other numbers in the sequence. Which is meaningless.
What, for example, does the fact that record 5 is before record 6 tell you, except that?
Or that record 7 was destroyed in a fire and no longer exists?
The numbers themselves hold no special meaning. And we all know this. It is about the number only that we are discussing, not the other stuff that you are alluding to, that would require other, specific information, which of course would have to be recorded in the database. Which is not what this thread is about.
It's about the poster trying to derive valuable information, using a sequence number, alone, by itself, with no other facts. Period.
Damn hangover.|||In this situation the number of the record would be field in the record or property of my object but it would not be my primary key or record identifier.
The above plus...
Deny delete permissions on the table, use IDENTITY or SCOPE_IDENTITY or some nonsense to populate.
That is my final answer. Sorry for the wait, been busy writing documentation.|||Wow, this thread exploded overnight (my overnight anyway). I'm not the OP, but I did raise the issue of some applications requiring sequential numbers with no gaps. I'll avoid the debate about whether you should use that field as PK or not. The fact is, as anyone who has worked in an accounting, gaming or other environment that gets audited knows, there are situations that have this requirement.
In the earlier example of the records numbered 1-100 and one gets destroyed, it's already been pointed out that the destroyed record must still be accounted for. An auditor is going to come in and ask where number XX is, and you'd better be able to tell him, not just say "the system skipped that number". Missed numbers equal possible theft to an auditor.|||arggghhh. accounting. the natural enemy of every dba. buy something of the shelf like Great Pains I mean Great Plains, ACCPAC, MAS90... The rules... the pain... the spreadsheets... the humanity...|||The pile. The only natural enemy of the hole.
More out of curiosity than to prolong the conversation, do banks actually care if I void a check in order to set up a direct deposit? Or do they let that check go as MIA?|||While I can't cite the exact section and paragraph, under GAAP (http://cpaclass.com/gaap/gaap-us-01a.htm) rules any documents that need to be serialized must be serialized before use. In other words, it has to be done while the forms are blank, with no data on them at all. Once any data has been recorded on the documents, any serialization is unacceptable under GAAP (it might be convenient, it is certainly permitted, but not admissable for any kind of GAAP compliance).
I work with bean counters every day. I've had this stuff drilled into my very soul. If you are adding serial numbers after the form has been filled out (or worse yet already stored in the database), the auditors had better cite that as a GAAP failure on any opinion they issue.
-PatP|||More out of curiosity than to prolong the conversation, do banks actually care if I void a check in order to set up a direct deposit? Or do they let that check go as MIA?Most banks don't even insist that check numbers be unique or even provided. I don't know of any that care if checks are AWOL, although they often report missing checks for customer convenience.
-PatP|||While I can't cite the exact section and paragraph, under GAAP (http://cpaclass.com/gaap/gaap-us-01a.htm) rules any documents that need to be serialized must be serialized before use. In other words, it has to be done while the forms are blank, with no data on them at all. Once any data has been recorded on the documents, any serialization is unacceptable under GAAP (it might be convenient, it is certainly permitted, but not admissable for any kind of GAAP compliance).
I work with bean counters every day. I've had this stuff drilled into my very soul. If you are adding serial numbers after the form has been filled out (or worse yet already stored in the database), the auditors had better cite that as a GAAP failure on any opinion they issue.
-PatP
And who, pray tell, does this serialization?
Why not serialize on inception?
You don't pre-serialize a trade do you?
I trust what you say, but I don't understand, or have ever seen it, even with all the gov work I've done, or street work.|||Hi Guys
I'm not sure I want to get involved in this thread as some of you seem a little tetchy but ......
a) Purchase Order No's - Goods Despatch Numbers (GDN's) & Goods Receipt Numbers (GRN's) are often serialized for very good business reasons.
b) I have always found it annoying that new Developers automatically use Identity fields without thinking about the fact that Primary Keys should be used for LOGICAL uniqueness not just a method of achieving system Ref Integrity and trying to improve performance by using a single Int field, especially on small databases
ie. I say use a real world Logicaly unique composite primary key (even if they're VarChars) IF POSSIBLE.
c) An exclusive lock on the previous max(id) would prevent problems if you think 2 people may run this statement at the exact same time (takes milliseconds to run).
d) please don't shout at me it's all IMHO ;-)
GW|||When Auditors worry about serial numbers, they are looking to be able to guarantee something. Specifically, that all of the serialized items can be accounted for throughout an entire process.
In order to make that guarantee about forms, those forms need to be uniquely identifiable in some way that has nothing to do with the way the form is used. The most commonly accepted way to do this is to pre-print serial numbers on those forms.
If you know that a particular branch/office/business/whatever started with forms X through Y, you can ensure that all of the forms are either present or properly accounted for... If these numbers are added after the fact (once the forms have been under control of the party(s) you want to be able to make that guarantee about, the numbers are useless for the purpose of making the guarantee.
If you add the serialization after the data is transcribed (copied from the original form to another media such as a database), that makes the serial numbers even less useful because at that point they have no relationship to the physical (controlled) forms at all.
This would be a textbook case of a serial number that was meaningless from a GAAP standpoint.
-PatP|||Exactly
This is why I would use the preprinted Hardcopy GDN/GRN number as the PrimaryKey and ask the input clerk to enter it into the system (with a bit of business logic to check format etc).
Purchase Orders are usually generated by the system so can be generated at point of creation. As far as I concerned the PO does not exist till it's commited to the system, at point of commit it's serialized, i imagine this method then complys with GAAP.
GW|||I don't think that GAAP rules apply outside of the United States, so you probably don't need to worry about them. I know there is an organization like the AICPA in the United Kingdom (I think it is the SBA, but I'm not sure) and I'm sure that they have similar rules.
Because the PO numbers are generated for internal purposes only, there is no point in auditing them. Who would care if PO #1234567 was missing? That wouldn't hurt anyone that I can imagine.
An auditor only cares about the things that they need to prove or certify. They might only be engaged to prove one specific thing, or they might be engaged for a full systemic audit (an awful thing).
-PatP|||Let me pitch in with my two cents...
If you want a running sequence number without using identity column, there are two ways this can be achieved: 1) use max as said before 2) use a separate table to hold a value and increment it as required. E.g. if you have multiple such requirements, you can have each row containing a number. I know 'Blindman' will jump in and try to get me with something here. What the heck? There is no such thing as old style and new style. Use the one that is convenient as long as that does not hurt the system. By the way, Windows has lot of VAX VMS concepts built-in anyway.|||:o :o :o :o|||I know 'Blindman' will jump in and try to get me with something here.Of course. I can't let bad advice slip by, since beginners tend to rely upon the information they get from these forums.
There is no such thing as old style and new style. Sure there is. That's a really uninformed statement. If you insist there is no difference between old style and new style, you can "Goto" heck. And there is also a difference between good programming and sucky programming, and using either of the methods you suggest is both slow from a processing standpoint, and needlessly complex from a programming standpoint.|||Go back to your own advice. Reading BOL... If you do not understand then don't think that it is uninformed statement. You stated that database cannot be shrunk... Go and check the BOL under DBCC SHRINKFILE... By the way, before you jump and attack me, you better get your side straight. You talk about programming...
Of course. I can't let bad advice slip by, since beginners tend to rely upon the information they get from these forums.
Sure there is. That's a really uninformed statement. If you insist there is no difference between old style and new style, you can "Goto" heck. And there is also a difference between good programming and sucky programming, and using either of the methods you suggest is both slow from a processing standpoint, and needlessly complex from a programming standpoint.|||I never stated that databases can't be shrunk. I said they can't be shrunk below their original size. Books Online:
You cannot shrink an entire database to be smaller than its original size.
Therefore, if a database was created with a size of 10 megabytes (MB) and
grew to 100 MB, the smallest the database could be shrunk to, assuming all
the data in the database has been deleted, is 10 MB. You are a noob.|||Here is your famous BOL for your reference...
--
DBCC SHRINKFILE
Shrinks the size of the specified data file or log file for the related database.
Syntax
DBCC SHRINKFILE
( { file_name | file_id }
{ [ , target_size ]
| [ , { EMPTYFILE | NOTRUNCATE | TRUNCATEONLY } ]
}
)
Arguments
file_name
Is the logical name of the file shrunk. File names must conform to the rules for identifiers. For more information, see Using Identifiers.
file_id
Is the identification (ID) number of the file to be shrunk. To obtain a file ID, use the FILE_ID function or search sysfiles in the current database.
target_size
Is the desired size for the file in megabytes, expressed as an integer. If not specified, DBCC SHRINKFILE reduces the size to the default file size.
If target_size is specified, DBCC SHRINKFILE attempts to shrink the file to the specified size. Used pages in the part of the file to be freed are relocated to available free space in the part of the file retained. For example, if there is a 10-MB data file, a DBCC SHRINKFILE with a target_size of 8 causes all used pages in the last 2 MB of the file to be reallocated into any available free slots in the first 8 MB of the file. DBCC SHRINKFILE does not shrink a file past the size needed to store the data in the file. For example, if 7 MB of a 10-MB data file is used, a DBCC SHRINKFILE statement with a target_size of 6 shrinks the file to only 7 MB, not 6 MB.
--
Call yourself whatever you want and it can be shrunk below 10MB if required. Don't pat yourself. Will you?
I never stated that databases can't be shrunk. I said they can't be shrunk below their original size. Books Online:
You cannot shrink an entire database to be smaller than its original size.
Therefore, if a database was created with a size of 10 megabytes (MB) and
grew to 100 MB, the smallest the database could be shrunk to, assuming all
the data in the database has been deleted, is 10 MB. You are a noob.|||In this situation the number of the record would be field in the record or property of my object but it would not be my primary key or record identifier.
wtf, so now there are 2 keys ?
the number does happen to be unique and immutable, what 's the point in adding yet another key ?|||wtf, so now there are 2 keys ?
the number does happen to be unique and immutable, what 's the point in adding yet another key ?some people just cannot sleep at night unless they have assigned the key|||some people just cannot sleep at night unless they have assigned the key
Haha, and a table must have at least one identity!
Won't start that key debate.
Redundancy is meaningless.|||some people just cannot sleep at night unless they have assigned the key Guilty as charged... :(
CREATE PROCEDURE InsertArea
@AreaDescr as varchar(50)
as
insert into tblAreas
select isnull(max(AreaID)+1,1), @AreaDescr
from tblAreas
GO
My question: tblAreas is a simple lookup table, so why would I use this method of managing the AreaID instead of just inserting the record using an Identity column and being done with it? I am confused about whether there is a hidden secret here, or whether the person simply wasn't aware of the Identity field and didn't know what they were doing. All of the tables in this database are like this as well, not just an isolated few.
Thanks for any input.
Greg|||Just use an identity column, if you can. This kind of logic is leftover from either an old database engine, or an old database designer.|||Generally speaking, I agree with blindman. The only time I would use this type of process is when the "business logic" dictates that there can not be missing numbers, which as we all know can happen with an identity column.|||Any logic that relies upon the relative values of a surrogate key is not business logic. It's just bad logic.|||I respect your opinion. How would you populate a field that needed to increment by one, and could not skip numbers? Say, a check number field or something like that.|||How would you populate a field that needed to increment by one, and could not skip numbers? Say, a check number field or something like that.if that is indeed a requirement, then i would pre-populate the numbers manually
in other words, if you issue a cheque book with cheques numbered 001 through 150, then insert those numbers into (some) table and indicate that they have been issued but not disbursed|||if that is indeed a requirement, then i would pre-populate the numbers manually
in other words, if you issue a cheque book with cheques numbered 001 through 150, then insert those numbers into (some) table and indicate that they have been issued but not disbursed
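A minimal sketch of the cheque-book scheme quoted above (every table and column name here is made up for illustration):

```sql
-- Pre-populate numbers 001 through 150 when the book is issued;
-- rows are then updated, never deleted, so every number stays accounted for.
CREATE TABLE ChequeRegister (
    ChequeNo  char(3)     NOT NULL PRIMARY KEY,
    Status    varchar(10) NOT NULL DEFAULT 'issued',  -- issued / disbursed / voided
    PayeeName varchar(50) NULL
)
GO

-- Claim the lowest unused number at disbursement time
-- (@PayeeName would be a procedure parameter)
UPDATE ChequeRegister
SET Status = 'disbursed', PayeeName = @PayeeName
WHERE ChequeNo = (SELECT MIN(ChequeNo)
                  FROM ChequeRegister
                  WHERE Status = 'issued')
GO
```

A voided cheque then becomes an UPDATE to Status = 'voided', so the number is still accounted for rather than missing.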
Huh? I am not a big fan of identity either but can you explain the rationale for this for us lesser mortals. Pre-populate? How do you make this scalable? A really big table with empty records waiting to be filled in? Do you add so many empty records a day? What if you go over your set amount. It's early, am I misreading this?|||Huh? I am not a big fan of identity either but can you explain the rationale for this for us lesser mortals.please, sean, that's inflammatory language, let's not get into another flamefest
buddy asked a question, i told him one way how, and you're questioning why
it's not a "why" question, just accept the requirement to generate a sequence of numbers with absolutely no gaps, and let's see your suggestion for this|||I did not think it was inflammatory at all. I was asking a question.
~"The IT world can be harsh" - Rudy from the other day.
Any time you poke fun at me it can be justified. When I ask a legit question, it's a flame war.
And you did not answer me.
First I would ask this guy why it is a requirement that there are no missing numbers. I suspect deep down there is some suspect logic here. Then I would ask him how this works out if one of these rows is deleted. Does he repopulate this column and re-establish his relationships each time? The whole thing is a bad idea.|||I think it was more of an inphlegmatory statement.
I like Rudy's idea of prepopulating check numbers when they are issued, and then complete them when they are disbursed. Remember, even check numbers these days are not always sequential. Sometimes I write checks from two different sets alternately, and then there are counter-checks and electronic checks too.
But I also suspect, like Sean, that if there is a need for a gapless incrementing value then there is a flaw in the application concept.|||Get the hose...
Damn.
us mere mortals...Sean, you still smarting from the pad left with zeroes thread?
The answer to this thread is "Wake up and rework your model"
Plus the sproc he "found" is a poor construction to replace identity. It shows a lack of foresight in a multiuser environment, and can get hosed easily, and rather quickly.
And the check analogy, what if I cancel a check? Bad analogy too, but it is a "missing" number in a sequence, at least from the bank's perspective.
And if you want a non-identity, Identity column
Try this
http://weblogs.sqlteam.com/brettk/archive/2004/06/29/1687.aspx|||And the check analogy, what if I cancel a check..bad analogy too, but is a "missing" number in a sequence, at least from the banks perspective.no, it isn't, it's an excellent example
and no, it isn't missing at all, it's just cancelled
it's the same situation joe celko refers to in several of his examples -- what did we do in the old days of pre-printed order forms, when you spilled coffee on an order form, you couldn't use that one, but you couldn't just destroy it, you have to account for that order number in the order number sequence somehow
just saying "rework the model" does not really address the business requirement adequately
rework it how, exactly?|||just saying "rework the model" does not really address the business requirement adequatelySo when the business requirement is flawed, then we fix it with code? I usually go back to the Business Analyst and send them back to the project sponsor to fix the underlying problem... I rarely fix a bad business process by adding code to it.
-PatP|||I rarely fix a bad business process by adding code to it.geez you guys are stubborn
okay, let's suppose neil diamond has personally autographed a series of 100 gold-plated cds of "margaritaville" and these are now up for sale
your job is to record who purchased each one
you need to assign a "record number" (pun partially intentional) numbered 001 through 100 to each of these discs
you're going to go back to the sponsor and tell them this is a bad business decision? tell them you can't assign a sequential number because it might have gaps and you don't know how to handle them?
sheesh!!!|||Ok what happens when one of the records (pun partially intended again) turns out to be non readable and the purchaser rejects the record.|||then you update the record (heh) and mark it as non-readable or other such status
the point is, you cannot delete it, you must account for that number|||What if I don't use a sequential number and I use a GUID?
I think you're mixing apples and oranges here....
And if I was worried about the sequence of things, I think I'd rather use datetime
Has the original posters head exploded yet?
And remember, yesterday was hump day, so if you missed out, it's your own fault...|||What if I don't use a sequential number and I use a GUID?
I think you're mixing apples and oranges here....no, i think you are missing the point
your client says "i want these 100 discs numbered as 001 through 100"
and you come back with "okay, let's use a GUID for that!"
i feel like asking "have you ever actually worked with clients?" but that would be rude and demeaning
the point is not what you would rather use, the point is what would you actually use to record the numbers 001 through 100
i don't see why everybody's having such a hard time with that concept|||In this situation the number of the record would be field in the record or property of my object but it would not be my primary key or record identifier.|||In this situation the number of the record would be field in the record or property of my object but it would not be my primary key or record identifier.excellent answer, and now we are getting somewhere|||no, i think you are missing the point
your client says "i want these 100 discs numbered as 001 through 100"
and you come back with "okay, let's use a GUID for that!"
i feel like asking "have you ever actually worked with clients?" but that would be rude and demeaning
the point is not what you would rather use, the point is what would you actually use to record the numbers 001 through 100
i don't see why everybody's having such a hard time with that concept
Well you'd have to try harder to get me pissed off.
I believe the discussion was about the relative offset of a number and its relationship to the other numbers in the sequence. Which is meaningless.
What, for example, does the fact that record 5 is before record 6 tell you, except just that?
Or that record 7 was destroyed in a fire and no longer exists?
The numbers themselves hold no special meaning. And we all know this. It is about the number only that we are discussing, not the other stuff that you are alluding to, that would require other, specific information, which of course would have to be recorded in the database. Which is not what this thread is about.
It's about the poster trying to derive valuable information using a sequence number, alone, by itself, with no other facts. Period.
Damn hangover.|||In this situation the number of the record would be field in the record or property of my object but it would not be my primary key or record identifier.
The above plus...
Deny delete permissions on the table, use IDENTITY or SCOPE_IDENTITY or some nonsense to populate.
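A rough T-SQL sketch of that answer (object names are invented; this is one way to wire it up, not the only one):

```sql
-- IDENTITY hands out the numbers; denying DELETE means a number can be
-- voided (a status change) but never silently removed.
CREATE TABLE SerializedForm (
    FormNo int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Status varchar(12) NOT NULL DEFAULT 'active',  -- active / voided
    Detail varchar(50) NULL
)
GO

DENY DELETE ON SerializedForm TO public
GO

INSERT INTO SerializedForm (Detail) VALUES ('first form')
SELECT SCOPE_IDENTITY() AS FormNo  -- the number just assigned
GO
```

One caveat: an insert that rolls back still burns the IDENTITY value, so this alone does not guarantee a gap-free sequence.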
That is my final answer. Sorry for the wait, been busy writing documentation.|||Wow, this thread exploded overnight (my overnight anyway). I'm not the OP, but I did raise the issue of some applications requiring sequential numbers with no gaps. I'll avoid the debate about whether you should use that field as PK or not. The fact is, as anyone who has worked in an accounting, gaming or other environment that gets audited knows, there are situations that have this requirement.
In the earlier example of the records numbered 1-100 and one gets destroyed, it's already been pointed out that the destroyed record must still be accounted for. An auditor is going to come in and ask where number XX is, and you'd better be able to tell him, not just say "the system skipped that number". Missed numbers equal possible theft to an auditor.|||arggghhh. accounting. the natural enemy of every dba. buy something off the shelf like Great Pains I mean Great Plains, ACCPAC, MAS90... The rules... the pain... the spreadsheets... the humanity...|||The pile. The only natural enemy of the hole.
More out of curiosity than to prolong the conversation, do banks actually care if I void a check in order to set up a direct deposit? Or do they let that check go as MIA?|||While I can't cite the exact section and paragraph, under GAAP (http://cpaclass.com/gaap/gaap-us-01a.htm) rules any documents that need to be serialized must be serialized before use. In other words, it has to be done while the forms are blank, with no data on them at all. Once any data has been recorded on the documents, any serialization is unacceptable under GAAP (it might be convenient, it is certainly permitted, but not admissible for any kind of GAAP compliance).
I work with bean counters every day. I've had this stuff drilled into my very soul. If you are adding serial numbers after the form has been filled out (or worse yet already stored in the database), the auditors had better cite that as a GAAP failure on any opinion they issue.
-PatP|||More out of curiosity than to prolong the conversation, do banks actually care if I void a check in order to set up a direct deposit? Or do they let that check go as MIA?Most banks don't even insist that check numbers be unique or even provided. I don't know of any that care if checks are AWOL, although they often report missing checks for customer convenience.
-PatP|||While I can't cite the exact section and paragraph, under GAAP (http://cpaclass.com/gaap/gaap-us-01a.htm) rules any documents that need to be serialized must be serialized before use. In other words, it has to be done while the forms are blank, with no data on them at all. Once any data has been recorded on the documents, any serialization is unacceptable under GAAP (it might be convenient, it is certainly permitted, but not admissible for any kind of GAAP compliance).
I work with bean counters every day. I've had this stuff drilled into my very soul. If you are adding serial numbers after the form has been filled out (or worse yet already stored in the database), the auditors had better cite that as a GAAP failure on any opinion they issue.
-PatP
And who, pray tell, does this serialization?
Why not serialize on inception?
You don't pre-serialize a trade do you?
I trust what you say, but I don't understand it, nor have I ever seen it, even with all the gov work I've done, or street work.|||Hi Guys
I'm not sure I want to get involved in this thread as some of you seem a little tetchy but ......
a) Purchase Order No's - Goods Despatch Numbers (GDN's) & Goods Receipt Numbers (GRN's) are often serialized for very good business reasons.
b) I have always found it annoying that new Developers automatically use Identity fields without thinking about the fact that Primary Keys should be used for LOGICAL uniqueness not just a method of achieving system Ref Integrity and trying to improve performance by using a single Int field, especially on small databases
ie. I say use a real world Logically unique composite primary key (even if they're VarChars) IF POSSIBLE.
c) An exclusive lock on the previous max(id) would prevent problems if you think 2 people may run this statement at the exact same time (takes milliseconds to run).
d) please don't shout at me it's all IMHO ;-)
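A sketch of point (c) applied to the OP's procedure (the UPDLOCK/HOLDLOCK hints are real T-SQL; wrapping the insert this way is one interpretation of the suggestion):

```sql
-- Holding the lock from the MAX() read through the insert stops two
-- concurrent callers from computing the same max(AreaID).
BEGIN TRANSACTION
insert into tblAreas
select isnull(max(AreaID)+1,1), @AreaDescr
from tblAreas with (updlock, holdlock)
COMMIT TRANSACTION
```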
GW|||When Auditors worry about serial numbers, they are looking to be able to guarantee something. Specifically, that all of the serialized items can be accounted for throughout an entire process.
In order to make that guarantee about forms, those forms need to be uniquely identifiable in some way that has nothing to do with the way the form is used. The most commonly accepted way to do this is to pre-print serial numbers on those forms.
If you know that a particular branch/office/business/whatever started with forms X through Y, you can ensure that all of the forms are either present or properly accounted for... If these numbers are added after the fact (once the forms have been under control of the party(s) you want to be able to make that guarantee about), the numbers are useless for the purpose of making the guarantee.
If you add the serialization after the data is transcribed (copied from the original form to another media such as a database), that makes the serial numbers even less useful because at that point they have no relationship to the physical (controlled) forms at all.
This would be a textbook case of a serial number that was meaningless from a GAAP standpoint.
-PatP|||Exactly
This is why I would use the preprinted Hardcopy GDN/GRN number as the PrimaryKey and ask the input clerk to enter it into the system (with a bit of business logic to check format etc).
Purchase Orders are usually generated by the system so can be generated at point of creation. As far as I'm concerned the PO does not exist till it's committed to the system; at point of commit it's serialized, and I imagine this method then complies with GAAP.
GW|||I don't think that GAAP rules apply outside of the United States, so you probably don't need to worry about them. I know there is an organization like the AICPA in the United Kingdom (I think it is the SBA, but I'm not sure) and I'm sure that they have similar rules.
Because the PO numbers are generated for internal purposes only, there is no point in auditing them. Who would care if PO #1234567 was missing? That wouldn't hurt anyone that I can imagine.
An auditor only cares about the things that they need to prove or certify. They might only be engaged to prove one specific thing, or they might be engaged for a full systemic audit (an awful thing).
-PatP|||Let me pitch in with my two cents...
If you want a running sequence number without using an identity column, there are two ways this can be achieved: 1) use max as said before 2) use a separate table to hold a value and increment it as required. E.g. if you have multiple such requirements, you can have each row containing a number. I know 'Blindman' will jump in and try to get me with something here. What the heck? There is no such thing as old style and new style. Use the one that is convenient as long as that does not hurt the system. By the way, Windows has a lot of VAX VMS concepts built-in anyway.|||:o :o :o :o|||I know 'Blindman' will jump in and try to get me with something here.Of course. I can't let bad advice slip by, since beginners tend to rely upon the information they get from these forums.
There is no such thing as old style and new style. Sure there is. That's a really uninformed statement. If you insist there is no difference between old style and new style, you can "Goto" heck. And there is also a difference between good programming and sucky programming, and using either of the methods you suggest is both slow from a processing standpoint, and needlessly complex from a programming standpoint.|||Go back to your own advice. Reading BOL... If you do not understand then don't think that it is uninformed statement. You stated that database cannot be shrunk... Go and check the BOL under DBCC SHRINKFILE... By the way, before you jump and attack me, you better get your side straight. You talk about programming...
Of course. I can't let bad advice slip by, since beginners tend to rely upon the information they get from these forums.
Sure there is. That's a really uninformed statement. If you insist there is no difference between old style and new style, you can "Goto" heck. And there is also a difference between good programming and sucky programming, and using either of the methods you suggest is both slow from a processing standpoint, and needlessly complex from a programming standpoint.|||I never stated that databases can't be shrunk. I said they can't be shrunk below their original size. Books Online:
You cannot shrink an entire database to be smaller than its original size.
Therefore, if a database was created with a size of 10 megabytes (MB) and
grew to 100 MB, the smallest the database could be shrunk to, assuming all
the data in the database has been deleted, is 10 MB. You are a noob.|||Here is your famous BOL for your reference...
--
DBCC SHRINKFILE
Shrinks the size of the specified data file or log file for the related database.
Syntax
DBCC SHRINKFILE
( { file_name | file_id }
{ [ , target_size ]
| [ , { EMPTYFILE | NOTRUNCATE | TRUNCATEONLY } ]
}
)
Arguments
file_name
Is the logical name of the file shrunk. File names must conform to the rules for identifiers. For more information, see Using Identifiers.
file_id
Is the identification (ID) number of the file to be shrunk. To obtain a file ID, use the FILE_ID function or search sysfiles in the current database.
target_size
Is the desired size for the file in megabytes, expressed as an integer. If not specified, DBCC SHRINKFILE reduces the size to the default file size.
If target_size is specified, DBCC SHRINKFILE attempts to shrink the file to the specified size. Used pages in the part of the file to be freed are relocated to available free space in the part of the file retained. For example, if there is a 10-MB data file, a DBCC SHRINKFILE with a target_size of 8 causes all used pages in the last 2 MB of the file to be reallocated into any available free slots in the first 8 MB of the file. DBCC SHRINKFILE does not shrink a file past the size needed to store the data in the file. For example, if 7 MB of a 10-MB data file is used, a DBCC SHRINKFILE statement with a target_size of 6 shrinks the file to only 7 MB, not 6 MB.
--
Call yourself whatever you want and it can be shrunk below 10MB if required. Don't pat yourself. Will you?
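For reference, the file-level shrink the quoted BOL page describes looks like this (the database and logical file names are hypothetical):

```sql
-- DBCC SHRINKFILE works on one file at a time and, unlike DBCC SHRINKDATABASE,
-- can take a data file below the size it was originally created with.
USE MyDatabase
GO
DBCC SHRINKFILE (MyDatabase_Data, 5)  -- target size in MB
GO
```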
I never stated that databases can't be shrunk. I said they can't be shrunk below their original size. Books Online:
You cannot shrink an entire database to be smaller than its original size.
Therefore, if a database was created with a size of 10 megabytes (MB) and
grew to 100 MB, the smallest the database could be shrunk to, assuming all
the data in the database has been deleted, is 10 MB. You are a noob.|||In this situation the number of the record would be field in the record or property of my object but it would not be my primary key or record identifier.
wtf, so now there are 2 keys ?
the number does happen to be unique and immutable, what's the point in adding yet another key ?|||wtf, so now there are 2 keys ?
the number does happen to be unique and immutable, what's the point in adding yet another key ?some people just cannot sleep at night unless they have assigned the key|||some people just cannot sleep at night unless they have assigned the key
Haha, and a table must have at least one identity!
Won't start that key debate.
Redundancy is meaningless.|||some people just cannot sleep at night unless they have assigned the key Guilty as charged... :(
Sunday, February 19, 2012
INSERT performance
Hi guys,
Could someone please help me identify the bottleneck in my case.
I have a table that includes one varchar(100) field and about 10 integer,
date, and boolean fields. The varchar column and two date columns are
indexed. The table includes about 1M of rows. No triggers, one foreign key
to a table with about 500 records.
When I insert about 50 records calling a stored procedure in a loop it
takes 30-60 seconds. During this time SQL takes 100% of CPU (AMD Athlon
2800) and makes lots of HDD reading requests. The database itself is quite
small and I guess it is completely located in RAM; however, I am not sure what
it reads from HDD in this case.
Is this performance normal? I guess I can improve it by writing those 50
records to a temporary table and then copying the data to the main table. Am I
right? Are there any other ways to improve the insert performance?
Thanks,
Mike
Message posted via http://www.sqlmonster.com
Mike via SQLMonster.com wrote:
> Hi guys,
> Could someone please help me identify the bottleneck in my case.
> I have a table that includes one varchar(100) field and about 10
> integer, date, and boolean fields. The varchar column and two date
> columns are indexed. The table includes about 1M of rows. No
> triggers, one foreign key to a table with about 500 records.
> When I insert about 50 records calling a stored procedure in a loop it
> takes 30-60 second. During this time SQL takes 100% of CPU (AMD
> Athlone 2800) and makes lots of HDD reading requests. The database
> itself is quite small and I guess is completely located in RAM,
> however I am not sure what it reads from HDD in this case.
> Is this performance normal? I guess I can improve it writing to those
> 50 records to a temporary table and then copying data to the main
> table. Am I right? Are there any other ways to improve the insert
> performance?
> Thanks,
> Mike
Take a look at the execution plan for the insert and see what's going
on. You can see a graphical plan in Query Analyzer, or use SET
STATISTICS PROFILE ON/OFF before and after the SP call.
David Gugick
Imceda Software
www.imceda.com
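The profiling step David describes looks like this in practice. The stored procedure name and parameters here are hypothetical placeholders, not from the original post:

```sql
-- In Query Analyzer, wrap the stored-procedure call:
SET STATISTICS PROFILE ON;

EXEC dbo.MyInsertProc @Name = 'test', @Amount = 1;  -- hypothetical proc and parameters

SET STATISTICS PROFILE OFF;
-- The result set now includes the executed plan for each statement,
-- showing whether the insert spends its time on scans, foreign-key
-- lookups, or nonclustered index maintenance.
```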
|||It seems like DBCC DBREINDEX solved the problem.
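Since DBCC DBREINDEX resolved it, the likely culprit was index fragmentation. A sketch of how to check and fix it, using the SQL Server 2000-era commands that match this thread (the table name is a placeholder):

```sql
-- Check fragmentation on the table's indexes:
DBCC SHOWCONTIG ('MyTable') WITH ALL_INDEXES;
-- Low scan density / high logical fragmentation suggests a rebuild is due.

-- Rebuild all indexes on the table ('' = all indexes), which also
-- refreshes their statistics; 90 is the fill factor:
DBCC DBREINDEX ('MyTable', '', 90);
```

On later SQL Server versions, ALTER INDEX ... REBUILD and sys.dm_db_index_physical_stats replace these commands.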