DBAsupport.com Forums - Powered by vBulletin

Thread: Spreading redo log files

  1. #1
    Join Date
    Dec 2001
    Location
    Tel-Aviv,Israel
    Posts
    233

    Spreading redo log files

    Hi Guys,

    In a Statspack report of a DWH database (HP-UX 11, Oracle 8.1.7.3), I've noticed a lot of waits related to the redo log files:
    log file sync - 63,700 waits
    log file parallel write - 308,734
    log buffer space - 23,398
    LGWR wait for redo copy - 21,486
    log file switch completion - 469
    DBWR checkpoints - 218

    We have 6 groups, with one 80M member in each group.
    Most of the activity in this db occurs between 19:00 and 07:00 (lots of loading and transactions).
    In that 12-hour period, 218 checkpoints occurred (an average of roughly one checkpoint every 3 minutes).
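    As a quick sanity check on that figure, the switch rate per hour can be pulled from v$log_history; this is just a sketch (narrow the window to the 19:00-07:00 load period as needed):

        SQL> select to_char(first_time,'DD-MON-YYYY HH24') hour,
          2         count(*) switches
          3  from   v$log_history
          4  group  by to_char(first_time,'DD-MON-YYYY HH24')
          5  order  by 1;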

    The members are spread as follows:

    SQL> select member from v$logfile;

    MEMBER
    --------------------------------------------------------------------------------
    /oracle1/app/oracle/oralogs/dwprod1/redo_01.log
    /oradb1/oralogs/dwprod1/redo_02.log
    /oradb2/oralogs/dwprod1/redo_03.log
    /oracle1/app/oracle/oralogs/dwprod1/redo_04.log
    /oradb1/oralogs/dwprod1/redo_05.log
    /oradb2/oralogs/dwprod1/redo_06.log

    6 rows selected.

    SQL> select group#,bytes/1024/1024
    2 from v$log;

    GROUP# BYTES/1024/1024
    ---------- ---------------
    4 80
    5 80
    6 80
    7 80
    8 80
    9 80

    6 rows selected.

    All the datafiles sit in the filesystems /oradb1, /oradb2 and /oradb3;
    the application sits on /oracle1.

    I know that I should increase the log buffer (currently LOG_BUFFER = 409600), but I have 2 more open questions:

    1. Should I increase the size of the redo log files? By what percentage?
    2. Should I change the spreading of the log files?
    Maybe put them all in one filesystem?
    Maybe separate them from the datafiles?

    Thanks in advance,
    Nir

  2. #2
    Join Date
    Nov 2002
    Location
    Geneva Switzerland
    Posts
    3,142

    Re: Spreading redo log files

    Originally posted by nir_s
    Maybe separate them from the datafiles?
    That's where I would start (if I had to). Ideally on physical drive(s) that have no other access during peak times.

    On the other hand, if the night-time processing finishes easily in time and the users aren't waiting during the day . . . do you really have a problem?

  3. #3
    Join Date
    Nov 2000
    Posts
    440
    For the log_buffer, put it at 1048576 (1 MB).

    The log buffer is flushed:

    o every 3 seconds
    o on every commit
    o when it is 1/3 full
    o when 1 MB of redo is buffered

    By setting it to 1 MB, you at least don't have to think about tuning that area afterwards.
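    If you want to confirm the buffer really is the bottleneck before resizing, these two statistics are worth watching (a sketch; non-zero and growing 'redo buffer allocation retries' means sessions had to wait for space in the log buffer):

        SQL> select name, value
          2  from   v$sysstat
          3  where  name in ('redo buffer allocation retries',
          4                  'redo log space requests');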


    This link explains what kind of disk to put them on, and with what:

    http://asktom.oracle.com/pls/ask/f?p...D:359617936136

  4. #4
    Join Date
    Nov 2000
    Location
    greenwich.ct.us
    Posts
    9,092
    Since Oracle writes to the redo logs in a sequential manner, there is no point in "spreading them out" across multiple devices. I would pick the least busy device during your load and put all of them there. Hopefully you don't choose a RAID 5 device, as redo logs are 100% write activity.
    Jeff Hunter

  5. #5
    Join Date
    May 2000
    Location
    ATLANTA, GA, USA
    Posts
    3,135
    Do you archive the redo logs?
    If so, place the redo logs on different disks/a different controller.
    Ideally, redo logs should be placed on raw devices.
    I think your system does not have raw devices.

    80 MB is small for this load. Increase the size to 200 MB.

    Also, ensure that a checkpoint occurs only after a redo log is completely filled, by adjusting the "LOG_CHECKPOINT_*" parameters.
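    A sketch of the init.ora settings that achieve that (the interval value just has to be larger than the redo log size, so neither trigger fires before a log switch):

        log_checkpoint_interval = 999999999
        log_checkpoint_timeout  = 0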

    If your release is 8i/9i, increase log_buffer to 4 MB.

    Tamil

  6. #6
    Join Date
    Sep 2001
    Location
    Ohio
    Posts
    334
    If your release is 8i/9i, increase log_buffer to 4 MB.
    Tamil -
    My understanding is that the log buffer flushes at one-third full or at 1 MB, whichever comes first. In that case, there's no need to have your log_buffer > 3 MB.

    Please correct me if I'm wrong.

    Or, has it changed in 9i?

    Jodie

  7. #7
    Join Date
    Dec 2001
    Location
    Tel-Aviv,Israel
    Posts
    233
    Hey guys,

    Thank you all!

    Meanwhile, I've made the following changes:

    1. Increased LOG_BUFFER from 400K to 800K
    2. Set log_checkpoint_interval = 999999999 and log_checkpoint_timeout = 0
    3. Re-created the redo log files much bigger - from 6 groups with one 80M member each, to 4 groups with one 150M member each. In addition, I've placed all the members in the same filesystem, but on a different controller than the datafiles. (The db is in NOARCHIVELOG mode.)
    4. Increased DB_BLOCK_BUFFERS from 20480 to 60000

    A lot of activity is supposed to run against this database tonight,
    so I will examine the influence of the above changes.
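    For reference, the re-creation of each group looked roughly like this (the path is illustrative, and a group can only be dropped once it is INACTIVE - switch and checkpoint first if needed):

        SQL> alter system switch logfile;
        SQL> alter database drop logfile group 4;
        SQL> alter database add logfile group 4
          2  ('/oradb1/oralogs/dwprod1/redo_04.log') size 150M;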

    Thanks again,
    Nir

  8. #8
    Join Date
    Jan 2002
    Location
    Up s**t creek
    Posts
    1,525

    Re: Spreading redo log files

    Originally posted by nir_s
    We have 6 groups with one member in each group of 80M.
    Just an aside I know but.... Any reason why you only have one member per group?
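    If it's just an oversight, adding a second member is a one-liner per group, e.g. (path illustrative):

        SQL> alter database add logfile member
          2  '/oradb2/oralogs/dwprod1/redo_04b.log' to group 4;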
    Jim
    Oracle Certified Professional
    "Build your reputation by helping other people build theirs."

    "Sarcasm may be the lowest form of wit but its still funny"


  9. #9
    Join Date
    Feb 2001
    Posts
    295
    3. Re-created the redo log files much bigger - from 6 groups with one 80M member each, to 4 groups with one 150M member each. In addition, I've placed all the members in the same filesystem, but on a different controller than the datafiles. (The db is in NOARCHIVELOG mode.)
    If your database is running in NOARCHIVELOG, why worry about the redo logs? Is your backup solution efficient enough to cover the risk of losing transaction data? If not (or if the data isn't important at all), maybe you should just use direct-path loading with NOLOGGING and forget about redo logs and checkpointing.
    An ounce of action is worth a ton of theory.
    —Friedrich Engels

  10. #10
    Join Date
    Dec 2001
    Location
    Tel-Aviv,Israel
    Posts
    233
    Hey adrianomp,

    The data in this DWH database is important, but not critical.
    The problem is that after the loading, there are a lot of data manipulations.
    The backup solution we have for this database is:
    1. An export with ROWS=N every night.
    2. A cold backup once a week, on the weekend.

    Most of the applications load their data via SQL*Loader in direct-path mode.

    The changes I've made improved performance, but we still have a huge value for 'log file parallel write':

    Top 5 Wait Events
    ~~~~~~~~~~~~~~~~~
                                                                     Wait  % Total
    Event                                         Waits         Time (cs)  Wt Time
    -------------------------------------------- ------------ ------------ -------
    SQL*Net message from dblink                    11,090,384      255,414   49.72
    log file parallel write                            21,962       74,411   14.49
    free buffer waits                                     719       70,335   13.69
    db file sequential read                           125,377       38,410    7.48
    control file parallel write                         5,249       19,869    3.87
    -------------------------------------------------------------

    Any suggestions?

    Nir
