Hi All,
I am upgrading from one server to another and somehow find that all performance has improved (as expected) EXCEPT for a simple repeated insert contained in a while loop, to insert 5 million records, one at a time, into an empty table. This was one of many queries I wrote to benchmark performance.
In general, the servers are the same, except the new server has slightly better hardware specs. The primary difference is that the new server is clustered. The new server also has fibre-channel drives vs. SCSI drives. The new server has a SAN configured as RAID 0+1, vs. RAID 5. The new server has 4 GB of RAM vs. 2 GB. So it should be faster in every respect, and in most cases, it is.
Both servers have dual Xeon 2.8 GHz hyper-threading processors, so they appear as 4 processors, and SQL Server is configured to use them all.
And one other difference: the new server is running Windows 2003 vs. Windows 2000.
Both servers are updated with service packs and all that. They both perform fine, and I get a 4:1 performance gain with the new clustered server on most sample queries, except for this one query, which is below.
Anyone got any ideas why this would be? I want to go into production with the new clustered server, but this one performance degradation is a little troubling.
Also, one last thing: the Current Disk Queue Length and Processor Queue Length are both around 1 (Total), and Pages/sec on the new server is 0, so there don't seem to be I/O bottlenecks (on either machine). But when I measure SQL Transactions/sec on each server, the new server runs at 165 SQL trans/sec and takes 8 hours to finish the job, and the old server runs at 444 SQL trans/sec and takes about 3 hours to finish the job. Very strange.
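For reference, those counter readings are consistent with the observed runtimes:
5,000,000 inserts / 165 trans/sec ≈ 30,300 sec ≈ 8.4 hours (new server)
5,000,000 inserts / 444 trans/sec ≈ 11,300 sec ≈ 3.1 hours (old server)
So the per-insert cost itself really is higher on the new box; it is not a counter anomaly.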
Regards,
Andre Chan
set nocount on
go
use test
go
drop table table1
go
create table table1 (
tableID int IDENTITY(1,1) PRIMARY KEY CLUSTERED,
counter int,
datestamp datetime,
dateString varchar(30)
) on test_data_001
go
declare @now datetime
select @now = getdate()
declare @counter int
select @counter = 1
while @counter <= 5000000
begin
    select @now = getdate()
    insert table1 (counter, datestamp, dateString)
    select @counter, @now, convert(varchar(30), @now, 121)
    -- print a timestamped progress line at selected milestone row counts
    if
        10000000.0 / @counter = 1.000 or
        10000000.0 / @counter = 1.250 or
        10000000.0 / @counter = 2.000 or
        10000000.0 / @counter = 2.500 or
        10000000.0 / @counter = 4.000 or
        10000000.0 / @counter = 5.000 or
        10000000.0 / @counter = 8.000 or
        10000000.0 / @counter = 10.000 or
        10000000.0 / @counter = 12.500 or
        10000000.0 / @counter = 20.000 or
        10000000.0 / @counter = 25.000 or
        10000000.0 / @counter = 40.000 or
        10000000.0 / @counter = 50.000 or
        10000000.0 / @counter = 80.000 or
        10000000.0 / @counter = 100.000 or
        10000000.0 / @counter = 125.000 or
        10000000.0 / @counter = 200.000 or
        10000000.0 / @counter = 250.000 or
        10000000.0 / @counter = 400.000 or
        10000000.0 / @counter = 500.000 or
        10000000.0 / @counter = 800.000 or
        10000000.0 / @counter = 1000.000 or
        10000000.0 / @counter = 1250.000 or
        10000000.0 / @counter = 2000.000 or
        10000000.0 / @counter = 2500.000 or
        10000000.0 / @counter = 5000.000 or
        10000000.0 / @counter = 8000.000 or
        10000000.0 / @counter = 10000.000 or
        10000000.0 / @counter = 12500.000 or
        10000000.0 / @counter = 20000.000 or
        10000000.0 / @counter = 25000.000 or
        10000000.0 / @counter = 50000.000 or
        10000000.0 / @counter = 100000.000 or
        (@counter in (600000, 700000, 900000)) or
        (@counter in (650000, 750000, 950000)) or
        (@counter in (6000000, 7000000, 9000000)) or
        (@counter in (6500000, 7500000, 9500000))
    begin
        print 'Current record is '
            + cast(@counter as varchar(10))
            + ' inserted at '
            + convert(varchar(30), @now, 121)
    end
    select @counter = @counter + 1
end
go
set nocount off
go|||"Andre Chan" <anonymous@discussions.microsoft.com> wrote in message
news:DC138DC0-561D-42E3-B653-97C763777D99@microsoft.com...
> Hi All,
> I am upgrading from one server to another and somehow find that all
> performance has improved (as expected) EXCEPT for a simple repeated insert
> contained in a while loop, to insert 5 million records, one at a time, into
> an empty table.
. . .
> But when I measure SQL Transactions/sec on each server, the new server runs
> at 165 SQL trans/sec and takes 8 hours to finish the job, and the old
> server runs at 444 SQL trans/sec and takes about 3 hours to finish the job.
> Very strange.
>
In your loop there is no explicit transaction, so each insert will update the
data page (probably just in cache), update the log, and flush the log to
disk. Of these, the log flush is by far the most expensive, so that's
probably where the difference is. Each of your log flushes is a small I/O
request, and I have heard that SCSI can outperform Fibre Channel at handling
lots of small I/O requests.
With more users on the system there would be more bytes in the log to flush
each time, and your Fibre Channel solution would probably regain its lead
over SCSI because of its higher transfer speeds.
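To illustrate, here is a minimal sketch of the same loop committing in explicit batches, so the log is flushed roughly once per batch rather than once per row. The 1,000-row batch size is an arbitrary assumption, and note this deliberately changes the test's one-transaction-per-insert behavior:

```sql
-- Sketch only: commit every 1,000 inserts in one explicit transaction,
-- amortizing the log flush across the batch instead of paying it per row.
declare @counter int, @now datetime
select @counter = 1
while @counter <= 5000000
begin
    if @counter % 1000 = 1
        begin transaction
    select @now = getdate()
    insert table1 (counter, datestamp, dateString)
    select @counter, @now, convert(varchar(30), @now, 121)
    if @counter % 1000 = 0
        commit transaction
    select @counter = @counter + 1
end
-- commit any partial final batch
if @@trancount > 0
    commit transaction
```

If batching makes the gap between the two servers shrink or reverse, that would support the small-write log-flush explanation.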
David|||Hi David,
Thanks for the response. The lack of a transaction block is intentional, because I wanted each insert write to be discrete. I was quite surprised to find this particular difference. Your point about the log flush is interesting, but wouldn't I see some spike in Disk Queue Length? The RAID 0+1 stripe is across 7 disks, mirrored, so 14 total. All data, log, and tempdb are on this single large volume. Here is one more difference I did not mention: the new cluster is using Veritas Volume Manager as its (software) RAID controller, while the old server had a hardware RAID controller. This was not my idea; the guys at the data center insist there is only a marginal performance difference. I disagree, and I am trying to prove it (or disprove it). Neither box has a dedicated CPU or memory cache for the disk array; it is just a RAID enclosure.
Does anyone have an opinion on software RAID? It will take us a few months to get away from it, but our web traffic at the moment is low enough that it is OK for now.
So far, single large transactions, such as creating a new 100 GB database, updating 5 million rows, or creating a clustered index, seem to work faster on the new server, by a factor ranging from 2:1 to 4:1.
I agree with your point that more users will change the statistics greatly. I will rewrite some SQL tests to simulate multiple client connections executing concurrent reads and writes.
Thanks,
Andre Chan|||"Andre Chan" <anonymous@discussions.microsoft.com> wrote in message
news:A1721F27-13F0-4E58-A980-A4E54CEF65F9@microsoft.com...
> Hi David,
> Thanks for the response. The lack of a transaction block is intentional,
> because I wanted each insert write to be discrete. I was quite surprised to
> find this particular difference. Your point about the log flush is
> interesting, but wouldn't I see some spike in Disk Queue Length?
No. In your test there is only one thread doing all the log flushes, so
there aren't a lot of different threads waiting for the disk.
. . .
> I agree with your point that more users will change the statistics greatly.
> I will rewrite some SQL tests to simulate multiple client connections
> executing concurrent reads and writes.
That's a good idea. Just run your same test in several Query Analyzer
sessions at once. That would be a better test anyway.
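If it helps, here is one sketch of how the concurrent runs could be compared afterwards. It assumes a hypothetical variant of the insert script that also records @@SPID in a spid column, which the original table does not have:

```sql
-- Sketch: per-session throughput summary for the multi-session test.
-- Assumes each Query Analyzer session inserted @@SPID into a spid column.
select spid,
       count(*) as rows_inserted,
       datediff(second, min(datestamp), max(datestamp)) as elapsed_sec,
       count(*) * 1.0
           / nullif(datediff(second, min(datestamp), max(datestamp)), 0)
           as rows_per_sec
from table1
group by spid
order by spid
```

Comparing rows_per_sec at 1, 2, 4, and 8 sessions on each box would show whether the fibre-channel SAN catches up as the log writes get larger.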
David

Monday, March 26, 2012