DBAsupport.com Forums

Thread: faster way to run Imports

  1. #1
    Join Date
    Oct 2000
    Posts
    449
    Hello: What are the most efficient ways of running an import, and what makes it faster? Which parameters should definitely be included, and what's a decent average? Say a database whose exported dump comes out at 1.7 GB, with a few tables of 100 MB+; what timeframe should I expect? For a 150 MB table it took me almost 70 minutes. Is that good, bad, or expected?
    Do I need a lot of RBS, temp, SGA, and so on? Any pointers will help.
    Thanks, ST2000

  2. #2
    Join Date
    Jan 2000
    Location
    *
    Posts
    37
    I would say your example is very slow. There are several things you can do to speed up an import. If possible, take your database out of archivelog mode; that means bouncing the database, but it is well worth it if you can do it. Just remember to put it back and take a backup afterwards. Dropping constraints and indexes on the tables you are importing speeds things up even more. Play with your buffer size as well.
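    A minimal sketch of the archivelog toggle, assuming SYSDBA access and a bounce you can afford (re-enable afterwards with ALTER DATABASE ARCHIVELOG and take that backup):

        sqlplus /nolog <<'EOF'
        CONNECT / AS SYSDBA
        -- logging mode can only be changed while mounted
        SHUTDOWN IMMEDIATE
        STARTUP MOUNT
        ALTER DATABASE NOARCHIVELOG;
        ALTER DATABASE OPEN;
        EOF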

  3. #3
    Join Date
    Jun 2000
    Location
    Madrid, Spain
    Posts
    7,447
    http://www.oracledba.co.uk/tips/import_speed.htm

    also set a large BUFFER in your import parameters, and commit per buffer (COMMIT=Y)
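    For example (scott/tiger and full.dmp are placeholders; BUFFER is in bytes, so this uses a 10 MB array-insert buffer and commits after each buffer):

        imp scott/tiger FILE=full.dmp FULL=Y BUFFER=10485760 COMMIT=Y LOG=imp.log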

  4. #4
    Join Date
    Jan 2002
    Posts
    8

    Import Speed

    Also, you may try to use the direct path method.

    Other parameters to set:

    COMMIT=N
    ANALYZE=N
    INDEXES=N

    Try not to import over the network;
    move your dump file onto the server itself.
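    A sketch of these settings as a parameter file, say import.par (user and file names are invented, and note the next reply disputes the COMMIT=N and direct-path points):

        FILE=exp.dmp
        LOG=imp.log
        FROMUSER=appuser
        TOUSER=appuser
        BUFFER=10485760
        COMMIT=N
        ANALYZE=N
        INDEXES=N

    Then run it locally on the server:

        imp system/manager PARFILE=import.par

    With INDEXES=N you can extract the index DDL into a script via imp's INDEXFILE parameter and run it after the data load.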

  5. #5
    Join Date
    Jun 2000
    Location
    Madrid, Spain
    Posts
    7,447
    there is no such DIRECT import; DIRECT=Y is an export option only

    also you mean COMMIT=Y, right?

    COMMIT=N will slow it down


  6. #6
    Join Date
    Jan 2001
    Posts
    3,134

    Try setting ROWS=N ;)
    that will make it a lot faster (nothing gets imported)

    MH
    I remember when this place was cool.

  7. #7
    Join Date
    Jan 2002
    Posts
    8

    Import Speed

    COMMIT=N

    This will cause import to commit after each object (table),
    not after each buffer. This is why one large rollback segment is needed.

    Create and use one large rollback segment, and take all other rollback
    segments offline. One rollback segment of approximately 50% of the size
    of the largest table being imported should be large enough. Import
    basically does an INSERT INTO tabX VALUES (...) for every row in
    your database, so the rollback generated for the insert statements is
    only the rowid of each inserted row. Also create the rollback segment
    with a minimum of 2 extents of equal size.
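    A sketch of that setup (the segment, tablespace, and existing rbs01/rbs02 names are invented; size the extents so the segment can grow to roughly half your largest table):

        sqlplus /nolog <<'EOF'
        CONNECT / AS SYSDBA
        -- one big rollback segment dedicated to the import
        CREATE ROLLBACK SEGMENT rbs_imp
          TABLESPACE rbs_big
          STORAGE (INITIAL 250M NEXT 250M MINEXTENTS 2);
        ALTER ROLLBACK SEGMENT rbs_imp ONLINE;
        -- take the regular segments offline so imp cannot use them
        ALTER ROLLBACK SEGMENT rbs01 OFFLINE;
        ALTER ROLLBACK SEGMENT rbs02 OFFLINE;
        EOF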

    As with the rollback segment, create several large redo log files; the
    larger the better. Check the alert log for messages like 'Thread 1
    cannot allocate new log, sequence 17, Checkpoint not complete'. That
    message indicates the log files need to be bigger or you need more
    of them.
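    For example (group numbers, paths, and the 200M size are placeholders):

        sqlplus /nolog <<'EOF'
        CONNECT / AS SYSDBA
        -- add bigger groups first
        ALTER DATABASE ADD LOGFILE GROUP 4 ('/u03/oradata/PROD/redo04.log') SIZE 200M;
        ALTER DATABASE ADD LOGFILE GROUP 5 ('/u03/oradata/PROD/redo05.log') SIZE 200M;
        -- then retire each small group once v$log shows it INACTIVE
        ALTER DATABASE DROP LOGFILE GROUP 1;
        EOF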

    If possible, make sure that rollback, table data, and redo log files
    are all on separate disks. This increases throughput by reducing
    I/O contention.

    Increase SORT_AREA_SIZE if you use INDEXES=Y, since the index builds have to sort.
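    One way to raise it for sessions started after the change, imp's included (the 100 MB value is illustrative; on older releases you may have to set it in init.ora and bounce instead):

        sqlplus /nolog <<'EOF'
        CONNECT / AS SYSDBA
        -- DEFERRED applies the new value to future sessions only
        ALTER SYSTEM SET SORT_AREA_SIZE = 104857600 DEFERRED;
        EOF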
