I have a database used for data warehousing. I set my
db_block_size to 16K. I have heard it said that
an OS block size of 512K will greatly improve read performance for DSS on Windows NT. Is that so? What is the correlation between OS block size and db_block_size? Has anyone tried to optimize I/O by changing the OS block size from the default 64K to 512K? The database is Oracle 9i running on Windows 2000.
Do you use Volume Manager for your disks?
Volume Manager overrides the OS block size. For example, Veritas volume manager's default I/O block size is 8K.
I am running Windows 2000, not Unix. I am not familiar
with a Unix volume manager.
It has been my experience that if you use a 16K block size, you can/should adjust the data read amount to match the OS read buffer size (the amount of data the I/O subsystem returns in one read). If that is 512K, then a multi-block read count of 32 would be appropriate.
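The arithmetic behind that suggestion can be sketched as follows. This is an illustrative calculation, not an Oracle utility; the 512K read-buffer figure is the assumption from the post above, and only db_block_size and db_file_multiblock_read_count are real Oracle parameter names.

```python
# Size one Oracle multi-block read to match one OS-level read:
# db_block_size * db_file_multiblock_read_count == OS read buffer size.

DB_BLOCK_SIZE = 16 * 1024      # db_block_size from the init.ora: 16K
OS_READ_BUFFER = 512 * 1024    # assumed I/O subsystem read size: 512K

# Value of db_file_multiblock_read_count that lines the two up
multiblock_read_count = OS_READ_BUFFER // DB_BLOCK_SIZE
print(multiblock_read_count)   # prints 32
```

With those two sizes, a full multi-block read maps onto exactly one read at the I/O subsystem level, which is the point of matching them.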
I am not sure what you mean by OS block. Is it the read buffer or the cluster size (1000 512-byte sectors)?
A 512K disk block feels a little large. I don't know that I have seen a read amount over 32K at the I/O level.
Joseph R.P. Maloney, CSP,CDP,CCP
'The answer is 42'
We have run two tests so far and the results are somewhat puzzling.
#1) Low-level format of our disk data array to 1024K blocks.
Our RAID controller was set to a 1024K stripe (RAID 0).
Oracle DB_BLOCK_SIZE=16384, DB_BLOCK_BUFFERS=500M
#2) Low-level format of our disk data array to 64K blocks.
Same Oracle config.
When building an identical tablespace on drive set #2, it ran 15x faster than on #1. I thought that the larger the formatted block size on the disk itself, the quicker the throughput and the better the Oracle performance... not true, but why??
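Why the smaller stripe won here cannot be settled from the numbers given, but one general RAID 0 effect can be sketched: with a very large stripe, a sequential burst smaller than one stripe lands entirely on a single disk, so only one spindle is working; a smaller stripe spreads the same burst across the array. The burst size and disk count below are assumptions for illustration only, not measurements from the test.

```python
# Hedged sketch, not a diagnosis of the test above: count how many
# RAID 0 member disks service one contiguous sequential burst, for
# the two stripe sizes tried.

def disks_touched(burst_bytes, stripe_bytes, num_disks, start=0):
    """Number of distinct RAID 0 members a single contiguous I/O hits.

    RAID 0 places stripe unit k on disk (k % num_disks), so a burst
    spanning stripe units first..last touches min(last - first + 1,
    num_disks) distinct disks.
    """
    first = start // stripe_bytes
    last = (start + burst_bytes - 1) // stripe_bytes
    return min(last - first + 1, num_disks)

BURST = 512 * 1024   # assumed size of one sequential write burst
DISKS = 4            # assumed array width

print(disks_touched(BURST, 64 * 1024, DISKS))    # prints 4: all spindles share the work
print(disks_touched(BURST, 1024 * 1024, DISKS))  # prints 1: one spindle does it all
```

If the tablespace build issues I/Os much smaller than 1024K, the large-stripe array behaves like a single disk for each I/O, which would at least point in the direction of the observed slowdown.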