Hello: What are some efficient ways of running an import? What makes it faster? What parameters should definitely be included, and what's a decent average? Say a database whose exported dump is 1.7 GB, with a few tables of 100 MB+ — what timeframe should I be expecting? For a 150 MB table, it took me almost 70 minutes. Is that good, bad, or expected?
Do I need a lot of RBS, temp, SGA, and so on? Any pointers will help.
I would say your example is very slow. There are several things you can do to speed up your import. If possible, take your database out of archivelog mode. That means bouncing the database, but it is well worth it if you can do it; just remember to put it back and take a backup afterwards. Additionally, dropping constraints and indexes on the tables you are importing (and recreating them afterwards) speeds up imports even more. Play with your buffer size as well.
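As a sketch, the parameters above can go in an `imp` parameter file. The keywords (BUFFER, COMMIT, INDEXES, CONSTRAINTS) are standard `imp` parameters; the dump file name and buffer size here are illustrative:

```
# imp.par -- illustrative parameter file for a faster import
FILE=mydb.dmp          # your export dump file (example name)
BUFFER=10485760        # 10 MB array-insert buffer; larger means fewer round trips
COMMIT=N               # commit per table, not per buffer (needs a big rollback segment)
INDEXES=N              # skip index creation during the load; rebuild indexes afterwards
CONSTRAINTS=N          # skip constraints; re-enable them after the load
LOG=imp.log
```

Run it with something like `imp userid=system/manager parfile=imp.par`. Archivelog mode itself is toggled at the database level, not in the parameter file: shut down, `STARTUP MOUNT`, `ALTER DATABASE NOARCHIVELOG`, then `ALTER DATABASE OPEN`.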
Use COMMIT=N (the default). This causes import to commit after each object (table),
not after each buffer, which is why one large rollback segment is needed.
Create and use one large rollback segment, and take all other rollback
segments offline. One rollback segment approximately 50% of the size
of the largest table being imported should be large enough. Import
basically does an `insert into tabX values (...)` for every row in
your dump file, and the rollback generated for an insert is only
the rowid of each row inserted. Also create the rollback segment with
the minimum of 2 extents, of equal size.
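The rollback-segment setup above could look like this (tablespace, segment names, and extent sizes are made up for illustration; size the extents so the segment can grow to about half your largest table):

```sql
-- Create one large rollback segment for the import (names/sizes illustrative)
CREATE ROLLBACK SEGMENT rbs_import
  TABLESPACE rbs_big
  STORAGE (INITIAL 100M NEXT 100M MINEXTENTS 2);

ALTER ROLLBACK SEGMENT rbs_import ONLINE;

-- Take the regular rollback segments offline for the duration of the import
ALTER ROLLBACK SEGMENT rbs01 OFFLINE;
ALTER ROLLBACK SEGMENT rbs02 OFFLINE;
```

Remember to reverse this (regular segments online, `rbs_import` offline or dropped) once the import completes.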
As with the rollback segment, create several large redo log files, the
larger the better.
Check the alert
log for messages like 'Thread 1 cannot allocate new log, sequence 17,
Checkpoint not complete'. This indicates the log files need to be bigger
or you need more of them.
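Adding larger redo log groups is straightforward; the group numbers, file paths, and 200M size below are examples only:

```sql
-- Add larger redo log groups (paths and sizes illustrative)
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u02/oradata/redo04.log') SIZE 200M;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u03/oradata/redo05.log') SIZE 200M;
```

If the 'Checkpoint not complete' messages persist, add more groups rather than just bigger ones, so DBWR has time to finish checkpoints before a log is reused.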
If possible, make sure that rollback, table data, and redo log files
are all on separate disks. This increases throughput by reducing
I/O contention.
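A quick way to see where the files currently live (assumes a DBA-privileged session) is to query the control-file views:

```sql
-- List datafile and redo log locations to check disk placement
SELECT name   FROM v$datafile;
SELECT member FROM v$logfile;
```

If the paths share a mount point, moving redo and rollback to their own disks is usually the cheapest win.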