I am trying to import into a table in the database, but the import contains 42 million rows (about 4 GB) and it is taking 60 hours to complete! What can I do to rectify this or speed things up?
Here is the script I ran to do the import.
Could be any number of things:
1. How many CPUs do you have? An uncompress running in the background could be eating CPU.
2. Is your I/O distributed across disks?
3. Are you in ARCHIVELOG mode? (The queries below show a quick way to check points 1 and 3.)
4. Was the export taken with DIRECT=Y?
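If you want to verify points 1 and 3, two standard dictionary queries will tell you. This is only a minimal sketch; run it as a suitably privileged user in SQL*Plus:

-- Is the database generating archived redo for every insert?
SELECT log_mode FROM v$database;

-- How many CPUs does the instance think it has?
SELECT value FROM v$parameter WHERE name = 'cpu_count';

If you are in ARCHIVELOG mode, a 4 GB conventional-path import will also generate a lot of archived redo, so make sure the archive destination is not filling up or slowing you down.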
Since you use the IGNORE=Y option, I presume the table already exists in your destination database before you start the import. If there are indexes defined on it, this will considerably slow down the import, and the option INDEXES=N will not be of much help. If this is the case, drop the indexes prior to the import and recreate them afterwards (probably with PARALLEL and NOLOGGING).
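For example, a minimal sketch of that approach, assuming a hypothetical table BIG_TABLE with a single index BIG_TABLE_IDX1 on SOME_COLUMN (substitute your own names):

-- Before the import: get the index out of the way.
DROP INDEX big_table_idx1;

-- After the import: rebuild it in parallel and without redo logging.
CREATE INDEX big_table_idx1 ON big_table (some_column)
  PARALLEL 4 NOLOGGING;

-- Optionally put the index back to its normal settings afterwards.
ALTER INDEX big_table_idx1 NOPARALLEL LOGGING;

Rebuilding one index over 42 million rows in parallel is usually far cheaper than maintaining it row by row during the insert.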
1. The BUFFER size is too small. Increase it to 100 MB (see the sketch at the end of this post).
2. Take the export with DIRECT=Y (direct path); classic imp has no DIRECT option of its own, but importing a direct-path dump file is much faster than a conventional one.
3. Set the table to NOLOGGING mode.
4. If you have multiple CPUs, use the PARALLEL option when creating the table as well as when importing the data into it.
5. Copy the dump file onto the server where Oracle is running. If the file is on some other server, you will hit a network bottleneck.
I saw an 80% reduction in import time when the dump file was exported with DIRECT=Y.
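To put points 1 to 4 together, here is a rough sketch, not a drop-in script. It assumes a made-up table BIG_TABLE owned by SCOTT, a dump file big_table.dmp, and a degree of parallelism of 4; substitute your own names and credentials:

# Prepare the target table before the import.
sqlplus -s scott/tiger <<'EOF'
ALTER TABLE big_table NOLOGGING;
ALTER TABLE big_table PARALLEL 4;
EOF

# Run the import with a 100 MB buffer, skipping index maintenance,
# reading the dump file from a local disk on the database server.
imp scott/tiger FILE=big_table.dmp TABLES=big_table IGNORE=Y INDEXES=N \
    BUFFER=104857600 LOG=imp_big_table.log

One caveat: NOLOGGING only helps direct-path operations such as the index rebuilds mentioned above; a conventional-path imp will still generate redo for the row inserts.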