LRU LIST
DBAsupport.com Forums - Powered by vBulletin

Thread: LRU LIST

  1. #1
    Join Date
    Feb 2002
    Posts
    267
    hi folks,
    I wanted to know how the LRU and write lists work.
    Does the LRU list also contain dirty buffers? If so,
    when are these dirty buffers moved to the write list?
    Once a buffer is moved to the write list, does that mean it is
    no longer present in the LRU list? And what if somebody accesses
    that same buffer while it is on the write list: will it be brought
    back to the LRU list?

    Also, where does the server process search for an empty buffer, and how does it proceed?
    Please explain in detail.

    Regards
    Sonia

  2. #2
    Join Date
    Oct 2001
    Location
    Madrid, Spain
    Posts
    763
    Hi again Sonia,

    I think if you read the link I posted in your other thread, your question here will be answered too

    Cheers

    Angel

  3. #3
    Join Date
    Apr 2001
    Location
    London
    Posts
    725
    There are LRU latches in both the shared pool and the buffer cache.
    I assume you want info on the buffer cache LRU.

    Here is a quick overview.

    The LRU list keeps the most recently accessed blocks in memory.
    The dirty list points to blocks in the buffer cache that have been modified but not yet written to disk.

    The server process looks for the block it's after in the buffer cache using a hash function.
    If it is there, it is moved to the MRU end of the LRU list; this stops it from being aged out.

    If the block is not found, it is read from the data file.

    The server process searches the LRU list for a free block.
    While searching the LRU, dirty buffers are sent to the dirty list.
    If the dirty list reaches its threshold, DBWn is signalled to write them out to disk.
    If a free block cannot be found, DBWn is signalled to flush.
    If the block is not read-consistent, a consistent image is built from an earlier version of the block and the rollback segments.

    Yes: if a block has a pointer on the dirty list and is reused, it will be placed at the MRU end of the LRU list. It will be placed back onto the dirty list when DBWn checks the LRU list and notices it there. Eventually it will be written to the datafiles via a DBWn flush/checkpoint.
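    The mechanics above (hash lookup, move-to-MRU on a hit, eviction from the LRU end, dirty buffers tracked separately for DBWn) can be sketched in a few lines. This is a toy model, not Oracle's actual structures; the class and method names are made up for illustration.

```python
from collections import OrderedDict

class BufferCache:
    """Toy model of the buffer cache LRU and dirty lists
    (hypothetical names; the real structures are kernel-internal)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.lru = OrderedDict()   # block id -> data; last item = MRU end
        self.dirty = []            # blocks modified but not yet written

    def get(self, block_id, read_from_disk):
        # Hash lookup: on a hit, move the block to the MRU end
        # so it is not aged out.
        if block_id in self.lru:
            self.lru.move_to_end(block_id)
            return self.lru[block_id]
        # Miss: make room by evicting from the LRU end, then read the
        # block from the data file.  A dirty victim must be written
        # first (in the real server, DBWn is signalled to flush).
        if len(self.lru) >= self.capacity:
            victim, data = self.lru.popitem(last=False)  # LRU end
            if victim in self.dirty:
                self.write_out(victim)
        data = read_from_disk(block_id)
        self.lru[block_id] = data
        return data

    def modify(self, block_id, data):
        # A changed buffer stays on the LRU list (at the MRU end) and
        # is also tracked on the dirty list until DBWn writes it.
        self.lru[block_id] = data
        self.lru.move_to_end(block_id)
        if block_id not in self.dirty:
            self.dirty.append(block_id)

    def write_out(self, block_id):
        # Stand-in for DBWn writing the block to the datafile.
        self.dirty.remove(block_id)
```

    For example, with a two-buffer cache, touching a modified block keeps it at the MRU end, so an unmodified colder block is the one evicted on the next miss.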








    Once you have eliminated all of the impossible,
    whatever remains however improbable,
    must be true.

  4. #4
    Join Date
    Feb 2002
    Posts
    267
    Thanks Suresh,
    I still need some clarification.

    Does the server process go to the LRU list when it does
    not find any empty buffers in the database buffer
    cache? And if it doesn't find empty buffers, does it
    move the dirty buffers from the LRU list to the dirty
    list to make room for the newly read buffers? Is that true?
    Why doesn't it move unmodified buffers off the LRU
    list instead, and what if there are no dirty buffers on the
    LRU list?
    Could you please explain this.
    I hope I am not asking the wrong question.

    Regards
    Sonia

  5. #5
    Join Date
    Mar 2002
    Posts
    171
    This is the best explanation I've come across so far (precise and to the point):

    This note discusses the basic functioning of DBWR, which gathers dirty buffers from the buffer cache and
    writes them to the database, thus freeing them to be reused by other queries. Buffer blocks in the buffer
    cache of the SGA are first tracked by a buffer queue. The head of the list contains the hottest buffers,
    often called the MRU, or most recently used end. The tail of the list is where foregrounds start looking for
    an available buffer to use. This is the LRU or least recently used end of the queue. As blocks are
    touched, they move to the MRU end of the queue. The assumption is made that blocks on the LRU end
    are least likely to still be needed, and will therefore not be missed if they are reused for other data.
    Dirty buffers, those changed by user transactions, are tracked by the dirty queue. The dirty queue is a
    linked list of buffers that need to be written to disk. A buffer cannot be on the buffer queue and dirty queue
    at the same time. This is where DBWR locates the buffers to write to the database. Once written, DBWR
    places the buffers back onto the LRU end of the buffer queue for reuse.
    When a user process needs an empty buffer, it starts at the LRU end of the buffer queue looking for a
    buffer it can reinitialize. If it cannot find enough available buffers, and decides to ask DBWR to make free
    buffers, a flag is set so that a request is sent to DBWR. The following steps are followed by the user
    process to find a buffer.
    1. Search the LRU end of the queue up to a set point, called the “scan depth”. If enough free
    buffers have not been identified, set an SGA flag indicating that we are waiting for buffers and tell
    DBWR to make free buffers, and sleep on the “free buffer event”.
    2. If the buffer we are looking at has either “users” using it or “waiters” waiting to use it, it cannot be
    used.
    3. If the buffer we are looking at is dirty it cannot be used. This should not happen if DBWR is
    keeping up and the cache is big enough. If the dirty queue has reached the max dirty queue
    length, we give up as in step 1. Otherwise, we move the buffer to the tail of the dirty queue and
    increment the count of buffers moved to the dirty queue.
    4. If we moved any buffers to the dirty queue, or if the count of known clean buffers is less than half
    of DBWR “scan depth”, then set local flag to tell DBWR to make free buffers.
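    The four steps above can be sketched as a single function. This is a simplified illustration of the quoted description, not Oracle code; the constants, flag names, and the placement of the step-4 check are assumptions.

```python
# Hypothetical values; the real ones are internal Oracle parameters.
SCAN_DEPTH = 4
MAX_DIRTY_QUEUE = 8

def find_free_buffer(lru_tail, dirty_queue, signal_dbwr):
    """Walk the LRU end of the buffer queue looking for a reusable
    buffer.  Each buffer is a dict with 'pinned' and 'dirty' flags
    ('pinned' stands in for buffers with users or waiters)."""
    moved = 0        # buffers we moved to the dirty queue
    clean_seen = 0   # known clean buffers encountered
    for buf in lru_tail[:SCAN_DEPTH]:             # step 1: bounded scan
        if buf["pinned"]:                         # step 2: in use, skip
            continue
        if buf["dirty"]:                          # step 3: dirty buffer
            if len(dirty_queue) >= MAX_DIRTY_QUEUE:
                signal_dbwr("make free buffers")  # give up and wait
                return None
            dirty_queue.append(buf)
            moved += 1
            continue
        clean_seen += 1
        if moved or clean_seen < SCAN_DEPTH // 2: # step 4
            signal_dbwr("make free buffers")
        return buf                                # found a clean buffer
    signal_dbwr("make free buffers")              # scan depth exhausted
    return None
```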
    Once DBWR receives the request to write dirty buffers, it performs the following actions:
    1. If any buffers were moved from the LRU queue to the dirty queue by users, then the scan depth
    increment is added to the DBWR scan depth (up to max scan depth). This should only happen if
    DBWR is getting behind. The count will always be zeroed if the dirty queue is empty. This can
    happen when other messages, such as checkpoint, have emptied the dirty queue.
    2. In order to maximize efficiency of physical writes, DBWR will try to write full batches of buffers. If
    there is less than a full write batch of buffers on the dirty queue, then scan the LRU end of the
    buffer queue. Dirty current buffers are moved to the tail of the dirty queue. Buffers that are
    scanned but not moved are counted as clean buffers scanned. The scan terminates when either
    there is a full write batch on the dirty queue, or the max scan depth is reached. To avoid creating
    an excessive clean buffer count, buffers already moved by foregrounds from the LRU end of the
    buffer queue to the dirty queue are considered already scanned for purposes of deciding when
    the scan depth is reached.
    3. The known clean buffer count is set to the number of clean buffers encountered while scanning
    the LRU end plus the number of buffers that will be written. It is assumed that all buffers on the
    dirty queue up to a full write batch will be written. The buffers to be written are included because
    the main use of the known clean buffer count is to decide when to message DBWR.
    4. Write all the buffers on the dirty queue up to a full write batch. When the write is done the buffers
    are moved from the dirty queue to the LRU end of the buffer queue.
    5. If the known clean buffer count is less than 1/2 the DBWR scan depth, then the scan depth
    increment is added to the DBWR scan depth (up to max scan depth). This indicates DBWR is
    getting behind, because it needs to be messaged again as soon as it is done.
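    The core of the DBWR pass described above (top up the write batch by scanning the LRU end, write the batch, return the cleaned buffers to the buffer queue) can be sketched as follows. This is a simplification of the quoted steps: the scan-depth adjustment and the known-clean count are omitted, and the names and constants are made up.

```python
# Hypothetical values; the real ones are internal Oracle parameters.
WRITE_BATCH = 4
MAX_SCAN_DEPTH = 8

def dbwr_make_free(dirty_queue, lru_tail, write_block):
    """One simplified DBWR pass.  lru_tail[0] is the coldest (LRU-end)
    buffer; each buffer is a dict with a 'dirty' flag."""
    # Step 2: if there is less than a full write batch, scan the LRU
    # end and move dirty buffers to the tail of the dirty queue.
    scanned, i = 0, 0
    while (len(dirty_queue) < WRITE_BATCH
           and i < len(lru_tail)
           and scanned < MAX_SCAN_DEPTH):
        if lru_tail[i]["dirty"]:
            dirty_queue.append(lru_tail.pop(i))
        else:
            i += 1        # clean buffer, leave it in place
        scanned += 1
    # Step 4: write up to a full batch, then move the buffers back to
    # the LRU end of the buffer queue for reuse.
    batch, rest = dirty_queue[:WRITE_BATCH], dirty_queue[WRITE_BATCH:]
    for buf in batch:
        write_block(buf)
        buf["dirty"] = False
        lru_tail.insert(0, buf)   # back onto the LRU end
    dirty_queue[:] = rest
    return len(batch)
```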
    Since Oracle version 7, several methods have been employed to prevent DBWR from falling behind. On
    older servers, with slow disks, DBWR was often slowed by I/O operations. Asynchronous I/O was only
    available on some platforms. Typically, on other platforms, multiple DBWR processes were used to try to
    keep up. These additional processes were slave processes, unable to make asynchronous I/O calls.
    Oracle tried to emulate asynchronous I/O by way of this master-slave arrangement. In Oracle 8, the
    concept of multiple master DBWRs was introduced. Each DBWR master managed its own latch set and
    own set of buffers. The drawback to the master-slave method, or the multiple writer method is the extra
    overhead involved. Enabling these features, requires that extra shared memory be allocated for IO
    buffers and request queues and extra CPU cycles. In Oracle8i, the database kernel supports inherently
    asynchronous I/O regardless of the platform. The DBWR is able to continuously write, without waiting for
    other calls to complete.
    Another event that tells DBWR to write buffers is the checkpoint. A checkpoint accumulates a partial
    batch of buffers, and fills out the write batch with buffers from the dirty queue.
    When a process moves a buffer to the tail of the LRU and the buffer does not need to be written to disk,
    the count of known clean buffers is increased. This happens mostly because of full table scans.
    This keeps the repeated reuse of the same buffers from forcing DBWR to make free buffers needlessly.
    In addition, every 3 seconds DBWR wakes up and does a timeout action. If there have been no writes
    since the last timeout, then up to 2 times the max write batch of buffers will be scanned for dirty buffers.
    This timed event happens in order to flush dirty buffers out of the cache when activity is low. If it is known
    that all buffers in the cache are clean, and no more have been dirtied, then a slow checkpoint will be
    started. The checkpoint will not be repeated unless more changes are made and the instance is idle
    again.
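    The 3-second timeout behaviour described above is simple enough to sketch directly. Again, this only illustrates the quoted description; the function and parameter names are invented, and the batch size is a placeholder.

```python
MAX_WRITE_BATCH = 4  # hypothetical value

def dbwr_timeout_action(writes_since_last_timeout, cache_all_clean,
                        scan_for_dirty, start_slow_checkpoint):
    """Fires every 3 seconds so dirty buffers get flushed even when
    activity is low."""
    if writes_since_last_timeout == 0:
        # No writes since the last timeout: scan up to twice the max
        # write batch looking for dirty buffers to flush.
        scan_for_dirty(2 * MAX_WRITE_BATCH)
    if cache_all_clean:
        # Every buffer is known clean and nothing has been dirtied:
        # begin a slow checkpoint (not repeated until more changes are
        # made and the instance is idle again).
        start_slow_checkpoint()
```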



    Courtesy: ManagedVentures.Com
