DB BLOCK SIZE in Banking Applications - Page 2
DBAsupport.com Forums - Powered by vBulletin
Results 11 to 20 of 24

Thread: DB BLOCK SIZE in Banking Applications

  1. #11
    Join Date
    Jun 2001
    Location
    Helsinki, Finland
    Posts
    3,938
    Originally posted by jmodic
    You can find pros and cons for everything. You mentioned wasting space in the buffer cache with a 32K block size. But at the same time we are talking about a banking application where speed is the primary concern. Well, I don't think any bank will have a problem investing a few hundred or thousand $ in buying enough RAM if it really wants fast responses from its database. And 32K blocks really can improve performance compared to 8K because of much less I/O. Data is much more tightly packed in table blocks, and what is even more important, indexes will be much "flatter" with 32K, resulting in saving I/O with each and every query in an OLTP system.

    Jurij, if I recreate my DBs (in an investment bank) with a block size of 32K, I will get so much false pinging that performance will degrade dramatically.

    My opinion: 32K is for DSSs, not for banking systems.

    Oracle Certified Master
    Oracle Certified Professional 6i,8i,9i,10g,11g
    email: ocp_9i@yahoo.com

  2. #12
    Join Date
    Dec 2000
    Location
    Ljubljana, Slovenia
    Posts
    4,439
    Sure, Julian, but this is only true in OPS/RAC environment.

    I think we are talking about a single-instance environment here. However, it is good you brought this up, because in OPS this definitely is one of the most important factors when deciding the database block size (of course, if applications are not/cannot be partitioned to a great extent). But for single-instance databases I still don't see any really good reason why OLTP should not have a big block size.
    Jurij Modic
    ASCII a stupid question, get a stupid ANSI
    24 hours in a day .... 24 beer in a case .... coincidence?

  3. #13
    Join Date
    Jun 2001
    Location
    Helsinki, Finland
    Posts
    3,938
    Originally posted by jmodic
    Sure, Julian, but this is only true in OPS/RAC environment.

    I think we are talking about a single-instance environment here. However, it is good you brought this up, because in OPS this definitely is one of the most important factors when deciding the database block size (of course, if applications are not/cannot be partitioned to a great extent). But for single-instance databases I still don't see any really good reason why OLTP should not have a big block size.
    You are right. One more thing: there are banks and banks, and some banks have much more data than others. If the bank's DB is really a megabase, then I would also go for 32K. And as you (or someone else) mentioned: you should have enough RAM for a 32K block size.

    Oracle Certified Master
    Oracle Certified Professional 6i,8i,9i,10g,11g
    email: ocp_9i@yahoo.com

  4. #14
    Join Date
    Dec 2001
    Posts
    221
    Originally posted by Sweetie
    not agreed

    in OLTP a small number of rows is fetched; if, let's say, a query has to return 10 rows from 10 blocks, then instead of 80K Oracle has to read 320K into memory

    Has anyone benchmarked this? Or is this just theory you peeps are talking about?
    Yes, does anybody have an answer to this question?
    Santosh Jadhav
    8i OCP DBA

  5. #15


    It is not so simple. You could lose some performance if the block is too big. There is no universal solution like just using a 32K block simply because it is "a regular banking database/application"... What is a "regular" anything in this domain?
    So, let's get real! Nobody but you can take the correct decision! But we can recommend a way. So, start with 8K and test. You could be surprised to find you get the best results with, let's say, 16K, and nobody here recommended you such a thing!
    ovidius over!

  6. #16
    Join Date
    May 2000
    Location
    ATLANTA, GA, USA
    Posts
    3,135


    Posted by Sweetie

    in OLTP a small number of rows is fetched; if, let's say, a query has to return 10 rows from 10 blocks, then instead of 80K Oracle has to read 320K into memory

    Has anyone benchmarked this? Or is this just theory you peeps are talking about?

    This is not correct.
    Take an example

    8K BLOCK SIZE:
    A table has 200 rows and the block size is 8K. Assume that 20 rows fit into one block, so 10 blocks are needed in total.
    The total memory needed to bring in the data is 10*8 = 80K.

    32K BLOCK SIZE:
    With a 32K block size, the same table needs only 3 blocks, because each block stores 80 rows.
    The total memory needed to bring in the data is 3*32 = 96K.

    I agree there is a small amount of memory wastage if the block size is set to 32K, but this wastage can be offset by the real benefits reaped in disk space management. Also, the Oracle SGA is designed to keep blocks in the buffer cache as long as possible, until they age out based on the LRU algorithm.

    One can argue: what if the 10 rows come from 10 different blocks of 32K size? Then there is more memory wastage than with 8K blocks.

    It is very difficult to answer that type of question.

    In my experience, I have not seen any true OLTP system in the world. Even if you call a system OLTP, many real-time reports run against it as well, and OLTP performance takes a dive when those reports are running. So it is always a hybrid system.

    That is why I recommend 32K block size.
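    The arithmetic in the example above can be sketched as a tiny calculation (the row densities, 20 rows per 8K block and 80 per 32K block, are the assumed figures from the example, not measurements):

    ```python
    # Buffer memory needed to hold a 200-row table at different block sizes,
    # assuming row density scales linearly with block size.
    import math

    def blocks_and_memory(total_rows, block_size_kb, rows_per_block):
        """Return (blocks needed, memory in KB) to hold the whole table."""
        blocks = math.ceil(total_rows / rows_per_block)
        return blocks, blocks * block_size_kb

    # 8K blocks, 20 rows each -> 10 blocks, 80K of buffer cache
    print(blocks_and_memory(200, 8, 20))    # (10, 80)

    # 32K blocks, 80 rows each -> 3 blocks, 96K of buffer cache
    print(blocks_and_memory(200, 32, 80))   # (3, 96)
    ```
    
    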


  7. #17
    Join Date
    Oct 2000
    Posts
    139
    8K BLOCK SIZE:
    A table has 200 rows and the block size is 8K. Assume that 20 rows fit into one block, so 10 blocks are needed in total.
    The total memory needed to bring in the data is 10*8 = 80K.


    That would be a perfect world, if you assume your query is going to fetch rows from a single block, which is almost never true; so this is not an argument, it is just an idealized example. Plus, in OLTP, since almost all reads are indexed, we are reading even more blocks into memory!

    It is correct that many OLTP systems are hybrid, but that cannot justify the claim that a big block size is good. Big enough is enough.

    Plus, even if we make the block big and say it minimizes I/O, we are only telling ourselves that I/O is minimized at the Oracle level. Is this true at the OS level? Are we minimizing I/O at the OS level? If not, what is the use of minimizing it at the Oracle level? At the end of the day Oracle runs on an OS and is limited by the OS's limitations.

    Assuming most UNIX/NT I/Os are 64K, I do agree that at some point I/O is minimized, because we make Oracle do less work by reading just 2 blocks with a 32K block size instead of 8 blocks with an 8K block size. But can anyone tell me what the difference is between the system calls for 2 Oracle data block reads and for 8 Oracle data block reads? I would try it myself, but I don't have any 32K-block-size databases. Since the moderators are recommending this size, they must have vast experience with 32K-block-size databases; perhaps they can give it a shot, try it, and post some benchmark results?

    I think we should run strace at the OS level to benchmark the system calls with different block sizes, and sar for the system I/O. Don't you think so?
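    A hypothetical illustration of the syscall-counting idea: read the same 64K from a scratch file in 8K chunks versus 32K chunks and count the read calls issued. (strace/sar against a real database, as suggested, is the proper tool; this only shows what we would expect it to report.)

    ```python
    # Count read syscalls needed to pull the same 64K at two "block sizes".
    import os
    import tempfile

    def count_reads(path, block_size, total_bytes):
        """Read total_bytes from path in block_size chunks; return (calls, bytes read)."""
        calls, got = 0, 0
        fd = os.open(path, os.O_RDONLY)
        try:
            while got < total_bytes:
                chunk = os.pread(fd, block_size, got)  # one read syscall per block
                if not chunk:                          # unexpected EOF
                    break
                calls += 1
                got += len(chunk)
        finally:
            os.close(fd)
        return calls, got

    # 64K scratch "datafile"
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(64 * 1024))
        path = f.name

    print(count_reads(path, 8 * 1024, 64 * 1024))   # (8, 65536)
    print(count_reads(path, 32 * 1024, 64 * 1024))  # (2, 65536)
    os.unlink(path)
    ```
    
    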

  8. #18
    Join Date
    May 2000
    Location
    ATLANTA, GA, USA
    Posts
    3,135
    I agree with most of your points. I will post my benchmark results very soon. Trust me.

  9. #19
    Join Date
    Dec 2000
    Location
    Ljubljana, Slovenia
    Posts
    4,439
    Originally posted by Sweetie
    That would be a perfect world, if you assume your query is going to fetch rows from a single block, which is almost never true; so this is not an argument, it is just an idealized example. Plus, in OLTP, since almost all reads are indexed, we are reading even more blocks into memory!
    Wrong, we are reading fewer blocks into memory, hence directly less I/O! With index reads you typically spend 3-4 I/O operations just to traverse the index from root to leaf node with a 4K or 8K block size (on a typically large OLTP table). The same index with a 32K block size will typically be one level flatter, thus requiring one I/O operation less. With each typical query you save one index block read per index access, so if there are two indexes involved you'll typically save two I/Os per typical OLTP query. And this is a direct I/O saving, whether you look at it from Oracle's perspective or from the OS's. If it's a logical read you are saving a lot; if it is a physical read you are saving even more.
    Assuming most UNIX/NT I/Os are 64K, I do agree that at some point I/O is minimized, because we make Oracle do less work by reading just 2 blocks with a 32K block size instead of 8 blocks with an 8K block size .... SNIP....
    This is a totally wrong assumption for OLTP systems. Oracle can take advantage of the 64K OS I/O "bandwidth" only when performing multiblock reads (DB_FILE_MULTIBLOCK_READ_COUNT), which you'll very rarely see in an OLTP system (I guess you don't have many full table scans or fast full index scans in OLTP). A typical OLTP query is executed by an INDEX RANGE SCAN followed by a TABLE ACCESS BY ROWID. Both of those operations are always performed one Oracle block per I/O.
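    The "one level flatter" claim can be sketched with a back-of-the-envelope height estimate. The entries-per-block figures (~400 for 8K, ~1600 for 32K) are purely illustrative assumptions; real index heights depend on key size, PCTFREE, and so on.

    ```python
    # Estimate B-tree index height from row count and branching factor.
    import math

    def index_height(num_keys, entries_per_block):
        """Estimated B-tree levels (root to leaf) with uniform fanout."""
        return max(1, math.ceil(math.log(num_keys, entries_per_block)))

    # 100M-row table: one level saved by the larger block
    print(index_height(100_000_000, 400))    # 8K block, ~400 entries  -> 4
    print(index_height(100_000_000, 1600))   # 32K block, ~1600 entries -> 3
    ```
    
    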
    Jurij Modic
    ASCII a stupid question, get a stupid ANSI
    24 hours in a day .... 24 beer in a case .... coincidence?

  10. #20
    Join Date
    Apr 2001
    Location
    Louisville KY
    Posts
    295
    TJI:

    Consider the multiblock read count and your physical i/o buffers.

    Chances are you are going to read 64/128/256K (or more) chunks into the physical buffers before they get passed to Oracle. The block size and multiblock read count are going to be the deciding factors. You can do 8 blocks of 32K, or 32 blocks of 8K, or 16 blocks of 16K, or whatever. The difference will be extremely small (I won't say non-existent) if your drive controller does 256K at a time.
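    The chunk arithmetic above, as a minimal sketch (a 256K transfer per physical I/O is the assumed controller behavior from the post, not a universal figure):

    ```python
    # How many Oracle blocks fit in one assumed 256K physical transfer.
    IO_CHUNK_KB = 256  # assumed controller transfer size

    def blocks_per_physical_io(block_kb, chunk_kb=IO_CHUNK_KB):
        """Oracle blocks covered by a single physical transfer."""
        return chunk_kb // block_kb

    for kb in (8, 16, 32):
        print(f"{kb}K blocks: {blocks_per_physical_io(kb)} per {IO_CHUNK_KB}K physical I/O")
    ```
    
    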

    Then consider the SGA buffer pool. If it is large enough (key RI tables pinned), physical I/O is less of an issue.

    In the old days, the block size helped determine the number of data files and the database size. It also made a difference with the old, 'small' controllers and drives, when RAM was expensive.

    RAM is cheap. Blow out your SGA.
    Joseph R.P. Maloney, CSP,CDP,CCP
    'The answer is 42'
