
Thread: Export tables taking too long.

  1. #1
    Join Date
    Mar 2002
    Posts
    1


    Dear All,

    New guy here. I would appreciate it if someone could advise me a bit on backup/recovery matters.

    I am running Oracle 8.0.5 on AIX 4.3.3 on a 24/7 basis. It is NOT feasible for me to shut down the database to do a cold backup.

    I tried exporting 24 tables averaging 1 million records per table. It took more than 3 days to complete. I have more than 180 tables in total to be exported.

    Any idea how I can speed up the export/import processes?

    By the way, is there any good site to learn more about backup/recovery?


    Many thanks in advance.

  2. #2
    Join Date
    Aug 2001
    Location
    Chennai, Bangalore
    Posts
    840
    Could you please give the export command you used?

    cheers
    anandkl

  3. #3
    Join Date
    Jan 2002
    Location
    Up s**t creek
    Posts
    1,525
    Hi,

    Exports are fine for reasonably small backups or for transporting data, but if your data is so crucial that it has to be available 24/7, I would suggest this is not the best option for large volumes, simply because of your time to recover.

    Even if you were to reduce your export time significantly, the import (if you had to recover) would take a while longer. You do not say how large your data set is, but if it runs into gigabytes I would seriously suggest you look at other methods. There are some good papers on the Metalink site.

    Regards
    Jim
    Oracle Certified Professional
    "Build your reputation by helping other people build theirs."

    "Sarcasm may be the lowest form of wit but its still funny"

    Click HERE to visit my website!

  4. #4
    Join Date
    Jun 2001
    Location
    Helsinki, Finland
    Posts
    3,938
    Originally posted by jovery

    Even if you were to reduce your export time significantly, the import (if you had to recover) would take a while longer
    True. As a rule of thumb, remember that an import typically takes 2 to 2.5 times as long as the export.

    Oracle Certified Master
    Oracle Certified Professional 6i,8i,9i,10g,11g,12c
    email: ocp_9i@yahoo.com

  5. #5
    Join Date
    Apr 2001
    Location
    London
    Posts
    725
    There are a couple of parameters that can help export performance:

    DIRECT=Y (bypasses the SQL evaluation buffer, making the export quicker)

    INCTYPE= (incremental exports, so you won't need to export the entire DB)

    type 'exp help=y' for all options.
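
    For example, a direct-path export of a single schema might look something like this (the username/password, schema, and file names are placeholders):

    exp system/manager OWNER=appuser DIRECT=Y RECORDLENGTH=65535 \
        FILE=appuser.dmp LOG=appuser.log

    Note that BUFFER applies only to conventional-path exports; with DIRECT=Y it is RECORDLENGTH that sizes the I/O buffer.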

    As Jovery has pointed out, this is not the best option as your only recovery solution.

    I personally export my database every 2 days, just in case, as well as taking a full online backup every night.



    Once you have eliminated all of the impossible,
    whatever remains, however improbable,
    must be true.

  6. #6
    Join Date
    Apr 2002
    Posts
    7


    Hi, there.

    If you want to stick with the export/import utility, try using the option direct=y; it can improve the export significantly. Also, below is some information on import improvements.

    Regards, Antoni

    # First document

    Improving Performance of Imports

    The Oracle Import process can sometimes take many hours or even days to
    complete successfully. Unfortunately, many imports are needed to
    perform crash recovery on databases that, for one reason or another, are
    not functional. This makes the time needed for the import even more
    critical. There are not many ways to speed up a large import, but here
    are a few basic changes that can reduce the overall time of import.

    System Level Changes
    --------------------

    - Increase DB_BLOCK_SIZE when recreating the database, if possible. The
    larger the block size, the smaller the number of I/O cycles needed.
    This change is permanent, so consider all effects it will have before
    changing it.

    - Create and use one large rollback segment, take all other rollback
    segments offline. One rollback segment approximately 50% of the size
    of the largest table being imported should be large enough. Import
    basically does an 'insert into tabX values (...)' for every row in
    your database, so the rollback generated for insert statements is only
    the rowid for each row inserted. Also create the rollback with the
    minimum 2 extents of equal size.
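
    As a sketch, that might look like this (the segment and tablespace names and the storage sizes are only examples; size the extents against your own largest table):

    SQL> create rollback segment rbs_big
      2  tablespace rbs
      3  storage (initial 250M next 250M minextents 2);
    SQL> alter rollback segment rbs_big online;
    SQL> alter rollback segment r01 offline;
    SQL> alter rollback segment r02 offline;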

    - Put the database in NOARCHIVELOG mode until the import is complete.
    This will reduce the overhead of creating and managing archive logs.
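
    Roughly, from Server Manager (on 8.0.5 this is done as internal via svrmgrl; remember to re-enable ARCHIVELOG and take a fresh backup once the import is done):

    SVRMGR> connect internal
    SVRMGR> shutdown immediate
    SVRMGR> startup mount
    SVRMGR> alter database noarchivelog;
    SVRMGR> alter database open;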

    - As with the rollback segment, create several large redo log files, the
    larger the better. Drop any small redo logs. The larger the log files,
    the fewer log switches that are needed. Check the alert log for messages
    like 'Thread 1 cannot allocate new log, sequence 17, Checkpoint not complete'.
    This indicates the log files need to be bigger or you need more of them.
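
    For example (the group numbers, paths, and sizes are placeholders; a log group can only be dropped once it is inactive, so switch logs first if necessary):

    SVRMGR> alter database add logfile group 4 ('/u03/oradata/redo04.log') size 200M;
    SVRMGR> alter database add logfile group 5 ('/u04/oradata/redo05.log') size 200M;
    SVRMGR> alter system switch logfile;
    SVRMGR> alter database drop logfile group 1;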

    - If possible, make sure that rollback, table data, and redo log files
    are all on separate disks. This increases throughput by reducing
    I/O contention.

    Init.ora Parameter Changes
    --------------------------

    - Set LOG_CHECKPOINT_INTERVAL to a number that is larger than the size
    of the redo log files. This number is in OS blocks (512 bytes on most
    Unix systems). This reduces checkpoints to a minimum (only at log
    switch time).

    - Increase SORT_AREA_SIZE. Indexes are not being built yet, but any
    unique or primary key constraints will be. The increase depends on what
    other activity is on the machine and how much free memory is available.
    Try 5-10 times the normal setting (see the example init.ora lines below). If the machine starts
    swapping and paging, you have set it too high.
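
    As an illustration, the corresponding init.ora lines might look like this (the values are placeholders; 500000 blocks of 512 bytes comfortably exceeds a 200 MB log file, and 10 MB would be 10x a 1 MB sort area):

    log_checkpoint_interval = 500000   # larger than the redo log size in OS blocks
    sort_area_size = 10485760          # 10 MB; back it off if the box starts paging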

    Import Options Changes
    ----------------------

    - Use COMMIT=N. This will cause import to commit after each object (table),
    not after each buffer. This is why one large rollback segment is needed.

    - Use a large BUFFER size. This value also depends on system activity,
    database size, etc. Several megabytes is usually enough, but if you
    have the memory it can go higher. Again, check for paging and swapping
    at the OS level to see if it is too high. This reduces the number of
    times the import program has to go to the export file for data. Each
    time it fetches one buffer's worth of data.

    - Always import with INDEXES=N. It is always faster to import into an
    unindexed table. Use the INDEXFILE option and build the indexes after
    all the data is loaded. Make sure SORT_AREA_SIZE is large for the
    index creation.
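
    Putting those together, the import might look something like this (the file names and schema are placeholders). First pull the index DDL into a script, then load the data without indexes, and run the script afterwards:

    imp system/manager FILE=appuser.dmp INDEXFILE=appuser_ind.sql FULL=Y
    imp system/manager FILE=appuser.dmp FROMUSER=appuser TOUSER=appuser \
        COMMIT=N BUFFER=10485760 INDEXES=N LOG=imp_appuser.log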

    # Second document

    PURPOSE
    There is very little consolidated information on how to speed up
    import when it is unbearably slow. Obviously import will take
    whatever time it needs to complete, but there are some things
    that can be done to shorten the time it will take.


    RELATED DOCUMENTS
    PR:1012699.102 -- Check SQL*Net Tracing
    PR:1023899.6 -- Import Performance Tuning Tips
    PR:1011191.6 -- LONG data Imports 1 Record per Buffer (same with LOBs???)

    TUNING CONSIDERATIONS WHEN THE IMPORT IS SLOW
    =============================================

    The Oracle Import process can sometimes take many hours or even days to
    complete successfully. Unfortunately, many imports are needed to
    perform crash recovery on databases that, for one reason or another, are
    not functional. This makes the time needed for the import even more
    critical. There are not many ways to speed up a large import, but here
    are a few basic changes that can reduce the overall time of import.


    System Level Changes
    --------------------

    - Create and use one large rollback segment, take all other rollback
    segments offline. One rollback segment approximately 50% of the size
    of the largest table being imported should be large enough. Import
    basically does an 'insert into tabX values (...)' for every row in
    your database, so the rollback generated for insert statements is only
    the rowid for each row inserted. Also create the rollback with the
    minimum 2 extents of equal size.

    - Put the database in NOARCHIVELOG mode until the import is complete.
    This will reduce the overhead of creating and managing archive logs.

    - As with the rollback segment, create several large redo log files, the
    larger the better. Drop any small redo logs. The larger the
    log files, the fewer log switches that are needed. Check the alert
    log for messages like 'Thread 1 cannot allocate new log, sequence 17,
    Checkpoint not complete'. This indicates the log files need to be bigger
    or you need more of them.

    - If possible, make sure that rollback, table data, and redo log files
    are all on separate disks. This increases throughput by reducing
    I/O contention.

    - Make sure there is no I/O contention occurring. If possible, don't run
    other jobs which may compete with the import for system resources.

    - Make sure there are no statistics on data dictionary tables.

    - Check the sqlnet.ora in the TNS_ADMIN location. Make sure that
    TRACE_LEVEL_CLIENT = OFF

    - Increase DB_BLOCK_SIZE when recreating the database, if possible. The
    larger the block size, the smaller the number of I/O cycles needed.
    This change is permanent, so consider all effects it will have before
    changing it.



    Init.ora Parameter Changes
    --------------------------

    - Set LOG_CHECKPOINT_INTERVAL to a number that is larger than the size
    of the redo log files. This number is in OS blocks (512 bytes on most
    Unix systems). This reduces checkpoints to a minimum (only at log
    switch time).

    - Increase SORT_AREA_SIZE. Indexes are not being built yet, but any
    unique or primary key constraints will be. The increase depends on what
    other activity is on the machine and how much free memory is available.
    Try 5-10 times the normal setting. If the machine starts swapping and
    paging, you have set it too high.

    - Try increasing db_block_buffers and shared_pool_size.
    The shared pool holds cached dictionary info and things like cursors,
    procedures, triggers, etc. Dictionary info or cursors created on
    the import's behalf (there may be many, since it is always working on a
    new table) can sometimes clog the pipes. This stale data then sits
    around until the aging/flush mechanisms kick in on a per-request
    basis, when a request cannot be satisfied from the lookaside lists. The
    ALTER SYSTEM FLUSH SHARED_POOL command throws out *all* currently unused
    objects in one operation and hence defragments the pool.

    If you can restart the instance with a bigger SHARED_POOL_SIZE prior
    to importing, that would definitely help. When it starts to slow down,
    at least you can see what's going on by doing the following:

    SQL> set serveroutput on size 2000
    SQL> begin
      2    dbms_shared_pool.sizes(2000);
      3  end;
      4  /

    The dbms_shared_pool is in $ORACLE_HOME/rdbms/admin/dbmspool.sql
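
    If the pool does fragment mid-import, the flush mentioned above is a one-liner:

    SQL> alter system flush shared_pool;

    and the init.ora changes might look like this (the values are placeholders, to be sized against your own memory):

    db_block_buffers = 20000        # buffer cache, in DB blocks
    shared_pool_size = 52428800     # 50 MB, for example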


    Import Options Changes
    ----------------------

    - Use COMMIT=N. This will cause import to commit after each object (table),
    not after each buffer. This is why one large rollback segment is needed.

    - Use a large BUFFER size. This value also depends on system activity,
    database size, etc. Several megabytes is usually enough, but if you
    have the memory it can go higher. Again, check for paging and swapping
    at the OS level to see if it is too high. This reduces the number of
    times the import program has to go to the export file for data. Each
    time it fetches one buffer's worth of data.

    - Always import with INDEXES=N. It is always faster to import into an
    unindexed table. Use the INDEXFILE option and build the indexes after
    all the data is loaded. Make sure SORT_AREA_SIZE is large for the
    index creation.
    Precreate the table, but do not create the index(es).



    REMEMBER THE RULE OF THUMB: an import typically takes at least 2 to
    2.5 times as long as the export.



    Large Imports of LOB Data:
    --------------------------

    Generally speaking, a good formula for determining a target elapsed
    time for a table import versus the elapsed time for the table export is:

    import elapsed time = export elapsed time X 4

    - Eliminate indexes. This affects total import time significantly.
    The existence of LOB data requires special consideration.
    The LOB locator has a primary key that cannot be explicitly dropped
    or ignored during the import process.

    - Make certain that sufficient space in sufficiently large contiguous
    chunks is available to complete the data load. The following should
    provide an accurate image of the space available in the target
    tablespace:

    alter tablespace mailindx coalesce;

    select bytes
    from dba_free_space
    where tablespace_name = 'MAILINDX'
    order by bytes desc;

  7. #7
    Join Date
    Mar 2002
    Posts
    301
    Hi,

    Sorry for the interruption...

    Mr. Clayment said he's running his database 24/7, so the availability of the
    instance, as well as the data, is very crucial for his application.
    Using an export dump, you cannot recover your database up to the last second
    before the crash occurred.

    I would strongly recommend you develop physical backup strategies. This is the
    only way you can get your data back up to the last second.
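
    In practice that means running in ARCHIVELOG mode and taking hot (online) backups, roughly along these lines for each tablespace (the tablespace name and copy destination are placeholders):

    SVRMGR> alter tablespace users begin backup;
    (copy that tablespace's datafiles at the OS level, e.g. with cp)
    SVRMGR> alter tablespace users end backup;
    SVRMGR> alter system archive log current;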

    Vijay.
    Say No To Plastics

  8. #8
    Join Date
    Aug 2000
    Posts
    236
    Yup! I agree fully. This looks like a prime candidate for an RMAN implementation; then look for less busy time slots and schedule backups during those times.
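
    Just as a sketch, a minimal RMAN disk backup might look like this (the connect strings, channel name, and format string are placeholders; 8.0-era RMAN is normally run against a recovery catalog):

    rman target system/manager@prod catalog rman/rman@rcat

    RMAN> run {
    2> allocate channel d1 type disk;
    3> backup format '/backup/df_%d_%s_%p.bkp' (database);
    4> release channel d1;
    5> }

    The database must be in ARCHIVELOG mode for an open (online) backup.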

    Nizar

