
Oracle Sponge -- Now Moved To Wordpress

Please use http://oraclesponge.wordpress.com

Tuesday, May 24, 2005

Slow Datafile Creation on Windows Sort-of Solved

While struggling with some issues surrounding i/o performance on my non-representative foreign script kiddy development hardware (WinXP Pro, 1.5GB RAM, erm ... some disks and other stuff) I found myself creating and dropping tablespaces on various combinations of RAID0, single disk, cooked and raw disks. Tablespace creation performance was terrible -- bursts of 4MB/sec write activity followed by stretches of nothing, averaging out to about 1MB/sec.
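For reference, the test itself was nothing fancier than timed tablespace creates and drops, along these lines (tablespace name, file path and size are just placeholders):

SET TIMING ON

-- placeholder name, path and size
CREATE TABLESPACE ts_scratch
   DATAFILE 'e:\oradata\dev\ts_scratch01.dbf' SIZE 1000M;

DROP TABLESPACE ts_scratch INCLUDING CONTENTS AND DATAFILES;

... with perfmon showing the disk write rate while the CREATE runs.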

Now, while investigating the impact of db_file_multiblock_read_count on read performance, I noticed that performance went to hell-in-a-hand-basket when the multiblock read size reached 1MB. Which is curious, because that's the maximum i/o size that Windows will permit (based on level 8 10046 tracing).
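For anyone following along: the multiblock read size is db_file_multiblock_read_count multiplied by the block size, so with 8KB blocks (an assumption here) a count of 128 gives 1MB reads. A sketch of that kind of test, with a placeholder table name:

-- assuming an 8KB block size: 128 x 8KB = 1MB multiblock reads
ALTER SESSION SET db_file_multiblock_read_count = 128;

-- level 8 includes wait events in the trace
ALTER SESSION SET events '10046 trace name context forever, level 8';

-- placeholder table; the FULL hint forces the multiblock read path
SELECT /*+ FULL(t) */ COUNT(*) FROM big_table t;

ALTER SESSION SET events '10046 trace name context off';

The p3 value on the 'db file scattered read' waits in the trace file is the number of blocks per read.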

So, thinking that sauce for the goose may be sauce for the gander, I found a hidden parameter, _db_file_direct_io_count, which was coincidentally set to 1MB. Dropping it down to 512KB had no effect, so I dropped it further to 128KB.

Lo! and the disks did perform. Well, sort of. They're writing at 20MB/sec -- nothing to write home about, but not as bad as before, I guess. Although for two WD740GD 10,000rpm SATA disks, which WD's own utility will read at 100MB/sec combined, it's a bit off the pace.

Now, if only I could work out why my two disk RAID0 array is no faster than a single disk ... hmmm.

11 Comments:

At 7:17 AM, Blogger Jeff Hunter said...

Hmm, lots of variables involved. I don't do Windoz, so I may be totally off base, but some things to think about:
1. Is async I/O on? If so, is your RAID buffer big enough to handle the writes you are sending to it? I would guess from your initial test that your buffer is being filled and then flushed before you are done writing the file.
2. Can your backplane support continuous 100MB/sec transfer rates? Where are they getting this rate anyway?

 
At 8:28 AM, Anonymous Anonymous said...

But, but, but... you are not supposed to use undocumented stuff. Bad, bad, bad...

 
At 8:37 AM, Blogger David Aldridge said...

Jeff,

Is async i/o on?
Not a flippin' clue. I expect so, but I'll investigate.

is your RAID buffer big enough to handle the writes you are sending to it?
Erm ... I don't think it has one. http://www.promise.com/marketing/datasheet/file/FTS150_DS_10-17-03.pdf

Can your backplane support continuous 100MB/sec transfer rates?
With two instances of WD's test utility running I can get 100MB/sec, so I guess so.

 
At 8:38 AM, Blogger David Aldridge said...

anon,

On a development instance, to investigate a performance problem? Don't see a problem, but maybe you weren't serious?

 
At 3:39 PM, Anonymous Anonymous said...

How did you modify the hidden parm? Was it the KSPPITY field from x$ksppi?

Thank you

 
At 4:12 PM, Blogger David Aldridge said...

I forget which system table I queried to see the value, but that sounds like it ... don't go updating it though, will you?

ALTER SYSTEM SET "_db_file_direct_io_count" = 131072 scope=spfile;

... then shutdown/startup. It can't be dynamically modified on a running system, apparently.
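If it was x$ksppi, then the query to see the value would have been something along these lines, run as SYS (undocumented x$ views, so the usual caveats apply):

SELECT i.ksppinm  AS parameter_name,
       v.ksppstvl AS current_value
FROM   x$ksppi  i,
       x$ksppcv v
WHERE  i.indx = v.indx
AND    i.ksppinm = '_db_file_direct_io_count';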

 
At 6:36 PM, Blogger Jeff Hunter said...

Hmm, a 32 bit bus at 66MHz has a peak burst rate of 266MB/sec (4 bytes x 66MHz) and a sustained rate of about 50MB/sec. Too bad you're not running Linux; you could tell how busy each disk was.

Async: Windows don't do it, sorry.

 
At 11:05 PM, Blogger David Aldridge said...

I found this RAID-on-PCI article which mentions a 133MB/sec bandwidth -- it seems to sustain 100MB/sec ok. I don't know much about this PCI business, though.

http://www.tweakers.net/reviews/432

I'm not sure about the async thing -- apparently NT and 2000 do it, not sure about XP.

 
At 11:06 PM, Blogger Niall said...

Async and directIO will both be in use on NTFS under Windows.

I haven't actually done this test (datafile creation speed never bothered me and I don't set autoextend), but you should experiment with varying the OS block size (sector size, I think it's called, when formatting) -- you don't *have* to use 512 bytes if you don't want to.

In addition, make sure that the disk you are using for the datafiles is properly defragmented first (or has never been used).

 
At 8:40 AM, Blogger David Aldridge said...

Niall,

Actually, I should have mentioned that after testing this on NTFS I retested on a raw device, so that all the messy file system stuff could be bypassed. Didn't see any convincing performance improvement, though.
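As for the async side of it, I believe disk_asynch_io (and filesystemio_options on more recent versions) should show what's in play -- something like:

SELECT name, value
FROM   v$parameter
WHERE  name IN ('disk_asynch_io', 'filesystemio_options');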

 
At 5:01 PM, Blogger David Aldridge said...

Axr,

I used perfmon, so I got nice little charts plotted.

 
