DBAsupport.com Forums - Powered by vBulletin

Thread: when log switch occurs....


  1. #1
    Join Date
    Feb 2007
    Posts
    20

    when log switch occurs....

    When a log switch occurs in Oracle 9i, DBWn writes the contents of the database buffer cache to the data files.

    Is both uncommitted and committed data written to the data files at this time, or is only committed data written to the data files at checkpoints?

  2. #2
    Join Date
    Jun 2006
    Posts
    259
    All dirty blocks are written to disk.

    This includes blocks that have changes for uncommitted data.

  3. #3
    Join Date
    Jan 2003
    Location
    Bahrain
    Posts
    109
    Your dirty buffers contain both committed and uncommitted data, so DBWR writes both types of data (committed and uncommitted) to the data files.

    Regards,
    Seelan

  4. #4
    Join Date
    May 2000
    Location
    ATLANTA, GA, USA
    Posts
    3,135
    When a log switch occurs, NOT ALL dirty buffers (committed and uncommitted) are written to data files by the DBWn processes.

    When a log switch occurs, a SLOW checkpoint is triggered.
    There is a vast difference between a SLOW checkpoint and a FAST checkpoint, which is triggered by ALTER SYSTEM CHECKPOINT, ALTER TABLESPACE ... BEGIN BACKUP, or ALTER TABLESPACE ... OFFLINE.

    SLOW CHECKPOINT:
    If Oracle is doing a SLOW checkpoint, the DBWR process stops processing the checkpoint when one of the following two conditions occurs:
    1. The threshold set by DB_CHECKPOINT_BATCH_SIZE (a number of buffers) is reached.
    2. More than 1000 buffers have been scanned without finding a dirty buffer to write to disk.

    The idea is to put less stress on the CPU and the I/O subsystem.

    FAST CHECKPOINT:
    When Oracle is doing a fast checkpoint, DBWR keeps scanning all the buffers in the cache and writes the dirty buffers to disk. DBWR will not stop until it has scanned all the buffers, which puts much more overhead on the CPU and the I/O subsystem.

    Before you start any backup, or abort the instance under extreme circumstances, you should always do a FAST checkpoint.
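    As a minimal sketch, the commands that force this "fast" (full) checkpoint look like this in SQL*Plus; the tablespace name USERS is only a placeholder:

        -- Force a full checkpoint: DBWR writes all dirty buffers to the datafiles
        ALTER SYSTEM CHECKPOINT;

        -- BEGIN BACKUP also checkpoints the datafiles of the affected tablespace
        ALTER TABLESPACE users BEGIN BACKUP;
        -- ... copy the tablespace's datafiles at the OS level ...
        ALTER TABLESPACE users END BACKUP;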

  5. #5
    Join Date
    Sep 2001
    Location
    Makati, Philippines
    Posts
    857
    A brief idea about checkpoints; see Note 147468.1 on Metalink for a detailed explanation.
    A checkpoint is a database event which synchronizes the data blocks in memory
    with the datafiles on disk. A checkpoint has two purposes:
    (1) to establish data consistency, and
    (2) to enable faster database recovery.
    When a checkpoint fails, messages are written to the alert.log file and should be verified.
    Here are some tips to tune the checkpoint process:
    · The CKPT process can improve performance significantly and decrease the
    amount of time users have to wait for a checkpoint operation to complete.
    · If the value of LOG_CHECKPOINT_INTERVAL is larger than the size of the redo
    log, then the checkpoint will only occur when Oracle performs a log switch
    from one group to another, which is preferred. There has been a change in
    this behaviour in Oracle 8i.
    · When LOG_CHECKPOINTS_TO_ALERT is set to TRUE, checkpoint start and stop times
    are logged in the alert log. This is very helpful in determining whether
    checkpoints are occurring at the optimal frequency (see the sketch after this list).
    · Ideally, checkpoints should occur only at log switches.
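    As a rough sketch of the two parameters mentioned above (the values shown are examples only, not recommendations):

        -- Record checkpoint start/stop times in the alert.log
        ALTER SYSTEM SET log_checkpoints_to_alert = TRUE;

        -- Checkpoint at most every N redo blocks; a very large value effectively
        -- defers checkpoints until the log switch
        ALTER SYSTEM SET log_checkpoint_interval = 100000;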

  6. #6
    Join Date
    Sep 2001
    Location
    Makati, Philippines
    Posts
    857
    Hi Tamil,

    This is the first time I have heard about SLOW CHECKPOINTING.
    Can you give me links to documents explaining the slow checkpoint?
    Thanks.

  7. #7
    Join Date
    May 2000
    Location
    ATLANTA, GA, USA
    Posts
    3,135
    Who can provide better info than Rama Velpuri, the Backup & Recovery guru?

  8. #8
    Join Date
    Nov 2006
    Location
    Sofia
    Posts
    630
    Tamil,
    That was a very interesting explanation, but it seems I missed something.
    1) On a log switch, a checkpoint is done in order to write to disk all the dirty blocks covered by the just-filled redo log, so that the redo log is no longer needed for instance recovery and can be reused. Am I correct here?
    2) You say:
    "When a log switch occurs, a SLOW checkpoint is triggered.
    .........
    If Oracle is doing a SLOW checkpoint, the DBWR process stops processing the checkpoint when one of the following two conditions occurs:
    1. The threshold set by DB_CHECKPOINT_BATCH_SIZE (a number of buffers) is reached.
    2. More than 1000 buffers have been scanned without finding a dirty buffer to write to disk."

    Under these circumstances, how do we guarantee that the last redo log is no longer needed and can be reused (which is the purpose of the checkpoint at log switch)?

    So yes, I would agree that not all the dirty blocks are written. I do not claim to know exactly how that happens, but I believe that some process (LGWR or CKPT) makes a list of all the dirty buffers covered by the redo vectors of the last log file and triggers DBWR to write those blocks.
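    As a rough way to observe this from the outside (not a claim about the internals), the standard V$LOG view shows whether a just-switched log is still needed for instance recovery:

        SELECT group#, sequence#, archived, status
          FROM v$log
         ORDER BY sequence#;

        -- STATUS = 'CURRENT'  : the log LGWR is currently writing to
        -- STATUS = 'ACTIVE'   : checkpoint not yet complete, still needed for instance recovery
        -- STATUS = 'INACTIVE' : no longer needed for instance recovery, can be reused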

    Please, let's discuss this.
    Thanks
    Boris

  9. #9
    Join Date
    May 2000
    Location
    ATLANTA, GA, USA
    Posts
    3,135
    That was a very interesting explanation, but it seems I missed something.
    1) On a log switch, a checkpoint is done in order to write to disk all the dirty blocks covered by the just-filled redo log, so that the redo log is no longer needed for instance recovery and can be reused. Am I correct here?
    Yes, you are partially right. In general, after a checkpoint, the redo in the redo log files is no longer needed for crash/instance recovery. However, the redo logs may still be needed for instance recovery if a single transaction's redo size is greater than the redo log file size.
    For example, you update millions of rows in a table and then, before the commit/rollback, the system crashes, even though several log switches occurred during the update.
    Another example is when you clone the database: the recovery process may ask you to supply redo log files that are older than the current redo log.

    2) You say:
    "When a log switch occurs, a SLOW checkpoint is triggered.
    .........
    If Oracle is doing a SLOW checkpoint, the DBWR process stops processing the checkpoint when one of the following two conditions occurs:
    1. The threshold set by DB_CHECKPOINT_BATCH_SIZE (a number of buffers) is reached.
    2. More than 1000 buffers have been scanned without finding a dirty buffer to write to disk."

    Under these circumstances, how do we guarantee that the last redo log is no longer needed and can be reused (which is the purpose of the checkpoint at log switch)?
    The only way I can see is to do a manual checkpoint and then switch log files.

    So yes, I would agree that not all the dirty blocks are written. I do not claim to know exactly how that happens, but I believe that some process (LGWR or CKPT) makes a list of all the dirty buffers covered by the redo vectors of the last log file and triggers DBWR to write those blocks.
    You can experiment with a test case.
    Update millions of rows.
    Do a log switch.
    Shutdown abort.
    Start up the instance. Note down the recovery time.

    Do the same exercise with "alter system checkpoint" before the shutdown abort, and you will see a vast difference in recovery time (see the sketch below).
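    A minimal SQL*Plus sketch of that test; big_tab and col1 are placeholder names, and the shutdown/startup steps assume a connection AS SYSDBA:

        -- Run 1: heavy update, log switch, crash
        UPDATE big_tab SET col1 = col1;
        ALTER SYSTEM SWITCH LOGFILE;
        SHUTDOWN ABORT
        STARTUP          -- note the instance recovery time reported in the alert.log

        -- Run 2: same update, but force a full checkpoint before the abort
        UPDATE big_tab SET col1 = col1;
        ALTER SYSTEM CHECKPOINT;
        SHUTDOWN ABORT
        STARTUP          -- recovery should be noticeably shorter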

  10. #10
    Join Date
    Sep 2001
    Location
    Makati, Philippines
    Posts
    857
    Doing a checkpoint manually through "alter system checkpoint" and then issuing a shutdown abort is basically the same as issuing a shutdown immediate.
    So there is no need to perform crash/instance recovery, because all the datafiles are in sync, unless there were pending/ongoing processes running while those commands were executed that generated new redo blocks.
