-
In addition to Jonathan's book, I would also recommend reading Rama Velpuri's 24x7 book. Both are really good.
Tamil
-
Hi Guys, I sent an email to Eyal Aronoff, author of the article discussed in this thread, inviting him to comment on our discussion. Eyal replied with the following and gave me authorisation to post his email to this thread, for which I'm incredibly grateful:
--
Hi Greg,
Wow! I did not realize that people are still using this outdated article. It is true, though; the HP box we used was limited to 64k reads. I had that confirmed by
the file system guy at HP. That limitation was removed long ago; now I believe it is
more like 256k. I am not sure that the tests I did were even on Oracle8i, let
alone Oracle9.
I will try to answer as many of the points in that thread as I can.
1) Most Oracle database IO time these days is influenced more by the nature of
the IO subsystem than by block size. Disk caching technology, advanced
threading in the operating system, and advanced file systems all contribute to IO
efficiency (or inefficiency). Heck, even the cheap disk drives that sit inside
an expensive disk array have onboard cache, in addition to the big cache of the
array.
2) The original question was about a transactional database. In the case of
a transactional database, full table scans are not the most important issue; rather,
it is the ability of the disk cache to securely buffer writes. I still feel that most
new OSs and storage arrays try to optimize performance for an 8k block size, so I
think a unified 8k for both the file system and the database would yield the best
overall results. Also, there is a difference between a batch-intensive system and
an on-line transaction system. In batch systems there is a good chance that
during the run the disk cache will be overrun by data, so more
arrays with more cache will do a better job than one large array. Here the IO
speed will be determined mostly by the database's ability to free space in the
buffer cache for new blocks. Once again, with an 8k block the buffer cache
management system will need half as many buffer scans for the same amount of
data. On-line transaction systems are less sensitive to this because the write IOs
are consistent and do not come in big batches.
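A rough sketch of the arithmetic behind the "half as many buffer scans" point: the same working set occupies half as many cache buffers when the block size doubles, so any cache-management pass has half as many entries to walk. The 512 MB working set below is an illustrative number, not one of Eyal's measurements.

```python
def buffers_needed(data_mb: int, block_kb: int) -> int:
    """Cache buffers needed to hold data_mb of data at block_kb per block."""
    return data_mb * 1024 // block_kb

# Illustrative: a hypothetical 512 MB batch working set.
for block_kb in (4, 8, 16):
    print(f"{block_kb}k blocks -> {buffers_needed(512, block_kb)} buffers to manage")
```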
3) I think that for write-intensive applications the size of the log file block
may be even more important than the database block size. That size is determined
solely by the underlying OS block size (I think it is 4k on raw devices, but
I am not absolutely sure). Here too, the type of processing the application is
doing will determine the optimal strategy. If the application has a lot of large
loads, entire database blocks will be written into the log in each log entry, so
a large log block size will be good. If, however, the majority of writes are
small single-row transactions, Oracle will still pad the entry to a complete
block before dispatching it to the disk. This can be very wasteful if the block
size is too big. Once again, the issue here is not so much the IO time but its
effect on overwhelming the write cache on the disk.
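To see how wasteful the padding gets, here is a small sketch of the fraction of each written block that is padding when a small redo entry is padded out to a full log block. The 200-byte entry size and the block sizes are assumptions for illustration, not measured Oracle figures.

```python
def padded_waste(entry_bytes: int, log_block_bytes: int) -> float:
    """Fraction of the final written block that is padding."""
    used = entry_bytes % log_block_bytes or log_block_bytes
    return (log_block_bytes - used) / log_block_bytes

# Illustrative: a 200-byte single-row redo entry at various log block sizes.
for block in (512, 4096, 16384):
    print(f"{block}B log block, 200B entry -> {padded_waste(200, block):.0%} padding")
```

The bigger the log block, the larger the padded fraction for small single-row transactions, which is exactly the waste described above.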
4) Last, very large IO requests are threaded by most operating systems and are
interruptible. If you have a large IO request on a system that is not busy, a
single request will go faster than many small requests. However, if the system
is busy, the IO can be interrupted and put into a wait queue while small IOs
finish first. This, I think, explains the result you are showing for the 256k read
chunks.
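For context on the trade-off in point 4, here is the simple arithmetic of how many IO requests a given transfer breaks into at different chunk sizes; the 1 MB transfer size is an assumption for illustration. The latency effect of interruption on a busy system is the OS-level behaviour Eyal describes and is not modelled here.

```python
import math

def requests_needed(total_kb: int, chunk_kb: int) -> int:
    """Number of IO requests to move total_kb in chunk_kb pieces."""
    return math.ceil(total_kb / chunk_kb)

# Illustrative: a hypothetical 1 MB sequential read.
for chunk_kb in (8, 64, 256):
    print(f"{chunk_kb}k chunks -> {requests_needed(1024, chunk_kb)} IO requests")
```

Fewer, larger requests win on an idle system; on a busy system each large request is a bigger target for interruption and queueing.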
I hope this helps.
Eyal
PS. You are free to post this on the thread
---
Last edited by grjohnson; 08-14-2003 at 03:52 AM.
OCP 8i, 9i DBA
Brisbane Australia