Thread: Too much Redo generation

  1. #1
    Join Date
    Feb 2001
    Posts
    75


    Hi,

    I am working on a data warehousing project and am currently developing the load processes. We use straight SQL to move data into staging tables, then a PL/SQL procedure to insert/update the data in the main tables. The tables and their indexes are created with the LOGGING option, and the database runs in ARCHIVELOG mode.
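
    For illustration, the pattern is roughly this (table and column names here are just made up):

        -- Straight SQL load into the staging table
        INSERT INTO stg_sales
        SELECT * FROM sales_feed;
        COMMIT;

        -- PL/SQL upsert from staging into the main table
        BEGIN
          FOR r IN (SELECT sale_id, amount FROM stg_sales) LOOP
            UPDATE sales SET amount = r.amount WHERE sale_id = r.sale_id;
            IF SQL%ROWCOUNT = 0 THEN
              INSERT INTO sales (sale_id, amount) VALUES (r.sale_id, r.amount);
            END IF;
          END LOOP;
          COMMIT;
        END;
        /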

    There is far too much redo generation. In one test the redo was about five times the data size: 72 MB of data (num_rows * avg_row_len) produced 370 MB of redo. In another test, 52 MB of data produced 1.5 GB of redo!
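
    (One way to measure this: read the session's cumulative redo from v$mystat before and after the load; the difference is what the load generated.)

        SELECT n.name, s.value
          FROM v$mystat s, v$statname n
         WHERE s.statistic# = n.statistic#
           AND n.name = 'redo size';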

    Even allowing for redo from the DML on the tables and indexes, plus the entries generated for the rollback segments by that DML, don't you think such ratios (especially 30 times) are too high?

    How can I reduce redo without resorting to NOLOGGING?

    TIA.

  2. #2
    Join Date
    Oct 2000
    Location
    Cambridge, MA (Boston)
    Posts
    144
    There's a pretty simple formula for redo control in Oracle:

        work == redo
        more work == more redo

    The only control you have is to do your data scrubbing outside Oracle and use NOLOGGING. The only reasons I can think of for creating indexes with LOGGING are standby databases and wanting database or instance recovery after a failure to be as hands-off as possible. Remember that redo covers not only your work on the data but all changes to the database. That includes rollback segments, and temp space that sits in a datafile (as opposed to a "real" tempfile -- the ones that show up in dba_temp_files rather than dba_data_files).
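
    You can check which kind of temp space you have (the LIKE filter below just assumes your temp tablespace name contains TEMP):

        -- "Real" tempfiles: minimal redo for sort space
        SELECT tablespace_name, file_name FROM dba_temp_files;

        -- Temp space living in ordinary datafiles
        SELECT tablespace_name, file_name
          FROM dba_data_files
         WHERE tablespace_name LIKE '%TEMP%';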

    So use NOLOGGING, direct-path operations where possible (that's the point of a data warehouse), and true tempfiles.
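
    For example (object names are placeholders; note that a NOLOGGING load is not recoverable from the archived logs, so take a backup afterwards):

        ALTER TABLE stg_sales NOLOGGING;

        -- Direct-path insert: combined with NOLOGGING, it skips
        -- most redo for the table data (indexes still generate it)
        INSERT /*+ APPEND */ INTO stg_sales
        SELECT * FROM sales_feed;
        COMMIT;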

    d.


  3. #3
    Join Date
    Nov 2000
    Posts
    344
    You can do your data scrubbing in global temporary tables if you are on 8i. That will help a little.
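
    Something along these lines (names are illustrative). Rows in a global temporary table generate no redo of their own, only the redo protecting their undo:

        CREATE GLOBAL TEMPORARY TABLE stg_scrub (
          sale_id  NUMBER,
          amount   NUMBER
        ) ON COMMIT PRESERVE ROWS;

        -- Scrub inside the GTT, then move the clean rows on
        INSERT INTO stg_scrub SELECT sale_id, amount FROM stg_sales;
        DELETE FROM stg_scrub WHERE amount IS NULL;  -- example scrub rule
        INSERT INTO sales SELECT sale_id, amount FROM stg_scrub;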

    -John

  4. #4
    If you are doing these updates to the main tables in one daily run, why don't you drop the indexes, run your update statements, and then recreate them? I'm not sure how the rebuild time compares with leaving the indexes in place, though.
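
    Roughly like this (index and column names are placeholders); building the index NOLOGGING afterwards skips the redo for the rebuild too:

        DROP INDEX sales_amount_idx;

        -- ... run the big update statements here ...

        CREATE INDEX sales_amount_idx ON sales (amount) NOLOGGING;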

    I'm working on the same kind of project, so I'd like to see how your updates go.

    Are you doing mass updates? Perhaps you can check whether the data has actually changed before making the update, and skip the rows that haven't?
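
    For example, restricting the update to rows whose values actually differ (made-up names again; add NVL around the columns if they can be NULL):

        UPDATE sales m
           SET amount = (SELECT s.amount FROM stg_sales s WHERE s.sale_id = m.sale_id)
         WHERE EXISTS (SELECT 1 FROM stg_sales s
                        WHERE s.sale_id = m.sale_id
                          AND s.amount <> m.amount);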
