buffer cache too small what happens?
DBAsupport.com Forums - Powered by vBulletin

Thread: buffer cache too small what happens?

  1. #1
    Join Date
    Jun 2000
    Location
    Madrid, Spain
    Posts
    7,448
    Hi

    I was wondering how Oracle handles these 2 situations:

    1. A select is fetching 1 million rows, but the buffer cache can only hold 500,000 rows' worth of blocks. Will Oracle fetch 500,000 rows first and return them to the user, then age out those 500,000 rows, fetch the remaining 500,000 rows, and return those to the user?

    2. The same select is fetching 1 million rows again, but in this case the buffer cache is large enough to hold 2 million rows' worth of blocks, yet it is half full; in other words it has to age out the old buffers. The question is: are the old blocks aged out before the fetch, or does Oracle fetch first and then age out?

    cheers

  2. #2
    Join Date
    Mar 2000
    Location
    Chennai.Tamilnadu.India.
    Posts
    658

    solution

    Hi, 1st June 2001 14:08 hrs chennai

    Question1:
    ------------
    A select is fetching 1 million rows, but the buffer cache can only hold 500,000 rows' worth of blocks. Will Oracle fetch 500,000 rows first and return them to the user, then age out those 500,000 rows, fetch the remaining 500,000 rows, and return those to the user?

    Solution1:
    ------------
    Once I faced a problem called BUMMIMA BUFFER when executing a 350 KB SQL package file. There was a limit that nothing over 50 KB could be executed.

    OK, why don't you try it with a simple infinite loop? I am sure a buffer overflow error will be raised.

    I am more curious on this topic.


    Question2:
    ------------
    2. The same select is fetching 1 million rows again, but in this case the buffer cache is large enough to hold 2 million rows' worth of blocks, yet it is half full; in other words it has to age out the old buffers. The question is: are the old blocks aged out before the fetch, or does Oracle fetch first and then age out?

    Solution2:
    ------------
    When enough space is available in the DB buffer cache, there is no need for aging out in the first case, i.e. before the fetch. If the required space is not there for the fetch, then DBWR will flush dirty buffers first to make room for the select statement.

    When a server needs a block it will do the following steps.
    -------------------------------------------------------------------
    I) First the server process looks for the block in the buffer cache using a hash function. If the block is there, it is moved to the MRU end of the LRU list. This is a logical read, because no actual I/O is involved.

    If the block is not found it undergoes the following steps
    -----------------------------------------------------------------
    I) The server process will read the blocks from the data file.

    First the server process looks for free blocks on the LRU list.

    II) While searching the LRU list, the server moves dirty blocks to the dirty list.

    III) If the dirty list exceeds its size threshold, the server signals DBWR to flush dirty blocks from the DB buffer cache.

    IV) If the server cannot find a free block within the search threshold, it signals DBWR to flush.

    V) After a free block is found, the server reads the block from the data file into the free buffer in the DB buffer cache, and moves the block from the LRU end to the MRU end of the LRU list.
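    The lookup-and-aging steps above can be sketched as a toy simulation in Python. Everything here (the class name, the thresholds, the dict-as-hash-table) is an illustrative invention, not actual Oracle internals:

    ```python
    from collections import OrderedDict

    class ToyBufferCache:
        """Toy model of the lookup/aging steps above. Illustrative only."""

        def __init__(self, capacity, dirty_threshold=2):
            self.capacity = capacity
            self.lru = OrderedDict()    # block_id -> data; last entry = MRU end
            self.dirty = {}             # dirty blocks awaiting DBWR flush
            self.dirty_threshold = dirty_threshold
            self.physical_reads = 0
            self.flushes = 0

        def _flush_dirty(self):
            # "DBWR" writes the dirty blocks out and frees them
            self.flushes += len(self.dirty)
            self.dirty.clear()

        def get(self, block_id, read_from_disk):
            if block_id in self.lru:              # step I: hash lookup hit
                self.lru.move_to_end(block_id)    # move to the MRU end
                return self.lru[block_id]         # logical read, no I/O

            # Miss: age buffers out from the LRU end until there is room.
            while len(self.lru) >= self.capacity:
                victim, data = self.lru.popitem(last=False)
                if data.get("dirty"):             # step II: dirty -> dirty list
                    self.dirty[victim] = data
                if len(self.dirty) >= self.dirty_threshold:
                    self._flush_dirty()           # steps III/IV: signal DBWR

            data = read_from_disk(block_id)       # step V: physical read
            self.physical_reads += 1
            self.lru[block_id] = data             # placed at the MRU end
            return data
    ```

    A re-read of a cached block costs no physical read; a miss evicts from the LRU end first, which is the behaviour the steps above describe.
    
    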

    Cheers

    Padmam
    Attitude: Attack every problem with enthusiasm ... as if your survival depends upon it

  3. #3
    Join Date
    Dec 2000
    Location
    Ljubljana, Slovenia
    Posts
    4,439
    Pando,

    This same question was already answered before (http://www.dbasupport.com/forums/showthread.php?threadid=6345), but obviously it wasn't persuasive enough. I'll try again....

    But first, a remark to padmam. What you mention as the "BUMMIMA BUFFER problem" has nothing to do with the buffer cache. Obviously you had problems with the shared pool, not with the buffer cache. In the buffer cache, no such thing as a "buffer overflow" can possibly happen.

    Now to pando's question. There is no difference between your situation 1 and situation 2; in both, Oracle will process the blocks and rows in exactly the same way. But I believe the rows will not be processed in the way you think Oracle processes them. From your question, I believe your understanding of how Oracle processes rows from blocks is the following:

    1. Oracle reads as many needed blocks in a cache as possible
    2. Oracle identifies result set rows from those blocks in a cache
    3. It returns the result set rows to the user process
    4. It repeats steps 1 - 3 if necessary

    But what Oracle really does is the following:

    1. Oracle identifies a needed block on disk and reads it into the buffer cache. In this step it reads either a single block (when accessing by ROWID) or DB_FILE_MULTIBLOCK_READ_COUNT blocks (usually when performing a full scan). If there are not enough free buffers in the cache, it ages some of them out before the new block(s) can be read in.
    2. Oracle identifies the matching rows from the block(s) it has just read into the cache, then either:
    a) returns the rows immediately to the user process that requested them, or
    b) stores the rows in the user process's sort area if the rows need further processing (sorting the result set).
    3. It repeats steps 1 - 2 if necessary.

    So again in short, answers to both of your questions:
    1) Oracle will never read 500,000 rows (in blocks) without immediately processing the rows as the blocks are read into the cache. If it has 500,000 rows' worth of free blocks in the cache, it will keep reading those blocks into free buffers and immediately process their rows. But when it runs out of free buffers and still has another 500,000 rows' worth of blocks to process, it will not try to make room in the cache for all those blocks by aging out the 500,000 rows' worth of old buffers all at once. It will only age out a couple of blocks, just enough for the next read or two. It will then repeat this pattern until all the blocks are processed.

    Using this concept, it is perfectly capable of processing millions of blocks in a single query, even though your buffer cache might (theoretically) consist of only 1 block, for example.
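    The read-and-process-immediately loop described above can be sketched like this (a toy model; the function, its parameters, and the row counts are made up for illustration, not Oracle code):

    ```python
    def full_scan(num_blocks, cache_size, multiblock_read=8, rows_per_block=100):
        """Toy sketch of the loop above: read a batch of blocks, return their
        rows immediately, age out just enough old buffers for the next read."""
        cache = []                       # buffers currently held in the cache
        rows_returned = 0
        next_block = 0
        while next_block < num_blocks:
            # one multiblock read, never larger than the cache itself
            batch_size = min(multiblock_read, cache_size, num_blocks - next_block)
            batch = list(range(next_block, next_block + batch_size))
            next_block += batch_size
            # age out only as many old buffers as this read needs
            overflow = len(cache) + batch_size - cache_size
            if overflow > 0:
                del cache[:overflow]     # a couple of blocks, not the whole cache
            cache.extend(batch)
            # matching rows go straight back to the user process
            rows_returned += batch_size * rows_per_block
        return rows_returned
    ```

    Even with `cache_size=1` the loop still returns every row, which is the point: the cache size limits how many blocks are resident at once, not how many blocks a query can process.
    
    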
    Jurij Modic
    ASCII a stupid question, get a stupid ANSI
    24 hours in a day .... 24 beer in a case .... coincidence?

  4. #4
    Join Date
    Oct 2000
    Location
    Saskatoon, SK, Canada
    Posts
    3,925
    So would it make sense to flush the buffer cache before you do this? Would it give any performance improvement on reading those million records?

    The reasoning behind that thought is that, instead of letting the LRU scan happen and then flushing the blocks, flushing the buffer at the beginning would save some scan time.

    Wouldn't that be the case? Just curious.

    Sam
    Thanx
    Sam



    Life is a journey, not a destination!


  5. #5
    Join Date
    Dec 2000
    Location
    Ljubljana, Slovenia
    Posts
    4,439
    Yes, I guess that would make sense in some *very exceptional* cases, but the problem is:

    There is no way to flush the buffer cache!!!! You can flush the shared pool, but not the buffer cache (short of bouncing the database, of course).
    Jurij Modic
    ASCII a stupid question, get a stupid ANSI
    24 hours in a day .... 24 beer in a case .... coincidence?

  6. #6
    Join Date
    Jun 2000
    Location
    Madrid, Spain
    Posts
    7,448
    Well, what I don't understand is how Oracle returns the result set to the user. From my example of 1 million rows and a buffer cache worth 500,000 rows (in blocks), and from your explanation, I believe that if these 1 million rows come from a full table scan:

    1. Oracle reads DB_FILE_MULTIBLOCK_READ_COUNT blocks into the cache; let's say these blocks hold 100 rows
    2. Oracle returns 100 rows to user

    Repeat steps 1 and 2; let's assume no sorting is required

    So basically Oracle is sending sets of 100 rows back to the user?

  7. #7
    Join Date
    Dec 2000
    Location
    Ljubljana, Slovenia
    Posts
    4,439
    Yes, generally speaking this is the way the rows are returned to user.
    Jurij Modic
    ASCII a stupid question, get a stupid ANSI
    24 hours in a day .... 24 beer in a case .... coincidence?

  8. #8
    Join Date
    Mar 2000
    Location
    Chennai.Tamilnadu.India.
    Posts
    658

    thanks Jmodic

    Hi, 2nd June 2001 21:16 hrs chennai

    Thanks Jmodic, yes that was a shared pool problem which I forgot about long ago. Anyhow, thanks for correcting me.

    OK, anybody interested in playing with this problem practically?

    But you should try it on an R&D database if you have one.

    Step1:
    -------

    Make changes to the following parameters in init.ora as follows.

    1)DB_BLOCK_BUFFERS=2

    (You will never in your life get a better chance to play with just 2 buffers.)

    2)DB_BLOCK_SIZE leave it to default.

    Step2:
    --------
    Default:
    SVRMGR> startup
    ORACLE instance started.
    Total System Global Area 231834892 bytes
    Fixed Size 70924 bytes
    Variable Size 78225408 bytes
    Database Buffers 153460736 bytes
    Redo Buffers 77824 bytes
    Database mounted.

    SVRMGR> startup
    ORACLE instance started.
    Total System Global Area 75818252 bytes
    Fixed Size 70924 bytes
    Variable Size 75259904 bytes
    Database Buffers 409600 bytes
    Redo Buffers 77824 bytes
    Database mounted.
    Database opened.

    Confirm from the two instance startups that the value for Database Buffers has been reduced.

    Also calculate the size of the DB buffer cache as db_block_size * db_block_buffers.
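    For example, assuming the common 8 KB default block size (an assumption; check DB_BLOCK_SIZE in your own init.ora), the arithmetic is simply:

    ```python
    db_block_size = 8192       # bytes; assumed default, verify in your init.ora
    db_block_buffers = 2       # as set in Step 1 above
    buffer_cache_bytes = db_block_size * db_block_buffers
    print(buffer_cache_bytes)  # 16384, i.e. a 16 KB cache
    ```

    Note that the second startup transcript above shows Database Buffers of 409600 bytes, which at 8 KB per block would be 50 buffers, so the instance may be enforcing a minimum rather than honoring DB_BLOCK_BUFFERS=2 literally.
    
    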

    Step3:
    --------

    Create a table and insert as many records as you like:
    SVRMGR> create table padmam(a number) tablespace users
    2> storage (initial 20k
    3> next 20k
    4> minextents 20
    5> maxextents 100);
    Statement processed.

    SVRMGR> begin
    2> for i in 1..500000 loop
    3> insert into padmam values(i);
    4> end loop;
    5> end;
    6> /

    Calculate the size of the table; it will surely be larger than the DB buffer cache. Then run a select and you will find it runs perfectly.

    Good lead from Jmodic; logically the thinking is perfect, but we have to dig up some more info somewhere.

    Cheers

    Padmam



    Attitude: Attack every problem with enthusiasm ... as if your survival depends upon it

  9. #9
    Join Date
    Jun 2000
    Location
    Madrid, Spain
    Posts
    7,448
    Hmm, it must run perfectly, otherwise no one would be using Oracle.

  10. #10
    Join Date
    Sep 2000
    Location
    Sao Paulo,SP,Brazil, Earth, Milky Way
    Posts
    350
    In the case of big table scans, to calculate the use of the pool and the number of retained blocks, AFAIK we need to consider the _small_table_threshold parameter (it defaults to 2% of db_block_buffers), which decides whether a table is a small table or not. In the proposed case, if the pool is small and you read a table on the order of millions of records, if space permits you will end up with at least the number of blocks that fit in those 2% of the buffer cache.
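    A back-of-the-envelope sketch of that 2% rule (assuming the default really is 2% of db_block_buffers, as stated above; the helper name is made up for illustration):

    ```python
    def small_table_threshold_blocks(db_block_buffers, pct=0.02):
        # toy calculation of the 2%-of-buffers default described above
        return int(db_block_buffers * pct)

    # e.g. a cache of 10,000 buffers would retain at least roughly
    # this many blocks after a large full scan
    print(small_table_threshold_blocks(10000))  # 200
    ```

    So even the million-row scan in this thread would leave only a small, bounded tail of its blocks resident in a small pool.
    
    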

    Regards,

    Chiappa
