Slow Datafile Creation on Windows Sort-of Solved
While struggling with some issues surrounding I/O performance on my non-representative foreign script-kiddy development hardware (WinXP Pro, 1.5GB RAM, erm ... some disks and other stuff) I found myself creating and dropping tablespaces on various combinations of RAID0, single disk, cooked and raw disks. Tablespace creation performance was terrible -- bursts of 4MB/sec write activity followed by stretches of nothing, averaging out to about 1MB/sec.
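(A minimal sketch of that kind of timing test, in SQL*Plus -- tablespace name, path and size here are illustrative, not the actual ones used:)

SET TIMING ON

-- Time how long a 1GB datafile takes to materialise; 1024MB divided
-- by the elapsed seconds gives the effective write rate.
CREATE TABLESPACE io_test
  DATAFILE 'D:\oracle\oradata\dev\io_test01.dbf' SIZE 1024M;

DROP TABLESPACE io_test INCLUDING CONTENTS AND DATAFILES;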
Now, while investigating the impact of db_file_multiblock_read_count on read performance, I noticed that performance went to hell-in-a-handbasket when the multiblock read size reached 1MB -- which is curious, because that's the maximum I/O size that Windows will permit (based on level 8 10046 tracing).
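(A sketch of that sort of read test, assuming an 8KB block size -- so 128 blocks per 1MB read -- and an illustrative table name:)

-- 128 blocks x 8KB = 1MB multiblock reads.
ALTER SESSION SET db_file_multiblock_read_count = 128;

-- Level 8 10046 = SQL trace plus wait events, which is where the
-- 'db file scattered read' request sizes show up.
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';

SELECT /*+ FULL(t) */ COUNT(*) FROM some_big_table t;

ALTER SESSION SET EVENTS '10046 trace name context off';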
So, thinking that sauce for the goose may be sauce for the gander, I found a hidden parameter _db_file_direct_io_count which was coincidentally set to 1MB. Dropping it down to 512KB had no effect, so I dropped it further to 128KB.
Lo! and the disks did perform. Well, sort of. They're writing at 20MB/sec -- nothing to write home about, but not as bad as before, I guess. Although for two WD740GD 10,000rpm SATA disks, which WD's own utility will read at a combined 100MB/sec, it's a bit off the pace.
Now, if only I could work out why my two-disk RAID0 array is no faster than a single disk ... hmmm.
11 Comments:
Hmm, lots of variables involved. I don't do Windoz, so I may be totally off base, but some things to think about:
1. Is async I/O on? If so, is your RAID buffer big enough to handle the writes you are sending to it? I would guess from your initial test that your buffer is being filled and then flushed before you are done writing the file.
2. Can your backplane support continuous 100MB/sec transfer rates? Where are they getting this rate anyway?
But, but, but... you are not supposed to use undocumented stuff. Bad, bad, bad...
Jeff,
Is async I/O on?
Not a flippin' clue. I expect so, but I'll investigate.
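(Checking it is a one-liner, for what it's worth -- a sketch, using the 9i/10g parameter names:)

-- disk_asynch_io = TRUE means async I/O is enabled where the
-- platform supports it; filesystemio_options shows whether the
-- file system path is using async and/or direct I/O.
SELECT name, value
FROM   v$parameter
WHERE  name IN ('disk_asynch_io', 'filesystemio_options');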
is your RAID buffer big enough to handle the writes you are sending to it
Erm ... I don't think it has one. http://www.promise.com/marketing/datasheet/file/FTS150_DS_10-17-03.pdf
Can your backplane support continuous 100MB/sec transfer rates?
With two instances of WD's test utility running I can get 100MB/sec, so I guess so.
anon,
On a development instance, to investigate a performance problem? Don't see a problem, but maybe you weren't serious?
How did you modify the hidden parm?
Was it the KSPPITY field from x$ksppi?
Thank you
I forget which system table I queried to see the value, but that sounds like it ... don't go updating it though, will you?
ALTER SYSTEM SET "_db_file_direct_io_count" = 131072 scope=spfile;
... then shutdown/startup. It can't be dynamically modified on a running system, apparently.
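(For reference, the usual recipe for peeking at a hidden parameter's value is a join of x$ksppi to x$ksppcv, run as SYS -- a sketch, from memory:)

-- Must be run as SYS; the x$ tables aren't visible to ordinary users.
SELECT i.ksppinm   AS parameter,
       cv.ksppstvl AS current_value
FROM   x$ksppi i, x$ksppcv cv
WHERE  i.indx = cv.indx
AND    i.ksppinm = '_db_file_direct_io_count';

-- KSPPITY, for what it's worth, is the parameter *type* column in
-- x$ksppi; the value itself lives in x$ksppcv.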
Hmm, a 32-bit bus at 66MHz has a peak burst rate of about 266MB/sec (4 bytes x 66MHz) and a sustained rate of about 50MB/sec. Too bad you're not running Linux -- you could tell how busy each disk was.
Async: Windows doesn't do it, sorry.
I found this RAID-on-PCI article which mentions a 133MB/sec bandwidth -- it seems to sustain 100MB/sec ok. I don't know much about this PCI business, though.
http://www.tweakers.net/reviews/432
I'm not sure about the async thing -- apparently NT and 2000 do it, not sure about XP.
Async and direct I/O will both be in use on NTFS under Windows.
I haven't actually done this test (datafile creation speed never bothered me, and I don't set autoextend), but you should experiment with varying the OS block size (allocation unit size, I think it's called, when formatting) -- you don't *have* to use 512 bytes if you don't want to (sketch below).
In addition, make sure that the disk you are using for the datafiles is properly defragmented first (or has never been used).
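(Something like this, presumably -- a sketch in Windows cmd; the drive letter and size are illustrative, and format will of course wipe the volume:)

REM Reformat with an 8KB allocation unit instead of the default.
format E: /FS:NTFS /A:8192 /Q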
Niall,
Actually, I should have mentioned that after testing this on NTFS I retested on a raw device, so that all the messy file system stuff could be bypassed. Didn't see any convincing performance improvement, though.
Axr,
I used perfmon, so I got nice little charts plotted.
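(For anyone allergic to the GUI, the same counters can be logged from the command line -- a sketch using typeperf, with the counter path as on XP; adjust the disk instance to taste:)

typeperf "\PhysicalDisk(_Total)\Disk Write Bytes/sec" -si 1 -sc 60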