-
Trying to import into a table in the database, but the import contains 42 million rows and is taking 60 hours to complete! The dump file is 4 GB.
What can I do to rectify this or speed things up?
Here is the script I ran to do the import:
mknod /tmp/imp_pipe_ccd p
uncompress < expabc.dmp.Z > /tmp/imp_pipe_ccd &
imp file=/tmp/imp_pipe_ccd userid= fromuser=abc touser=abc log=imp_abc.log commit=y ignore=y \
    indexes=N buffer=64000
-
Could be any number of things:
1. How many CPUs do you have? The uncompress running in the background could be eating CPU.
2. Is your I/O distributed?
3. Are you in ARCHIVELOG mode?
4. Was the export done with DIRECT=Y?
Jeff Hunter
-
Re:
Did all that and am still having a problem with the slow load.
-
Was the table analyzed before export?
Thanx
Sam
Life is a journey, not a destination!
-
Re:
Yes, the tables were analyzed before the export.
-
Since you use the IGNORE=Y option, I presume the table exists in your destination database before you start the import. If there are indexes defined on it, this will slow the import considerably, and the INDEXES=N option will not be of much help. If this is the case, drop the indexes prior to the import and recreate them afterwards (probably with PARALLEL and NOLOGGING).
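A minimal sketch of the drop-and-rebuild approach described above. The index, table, and column names here are placeholders, and the parallel degree should match your CPU count:

```sql
-- Before the import: drop the index so rows load without index maintenance
DROP INDEX abc.big_table_ix;

-- (run the import here)

-- Rebuild in parallel, skipping redo logging for the build
CREATE INDEX abc.big_table_ix ON abc.big_table (some_col)
  PARALLEL 4 NOLOGGING;

-- Optional: return the index to its normal settings afterwards
ALTER INDEX abc.big_table_ix NOPARALLEL LOGGING;
```

The NOLOGGING build is not recoverable from the redo stream, so take a backup (or be prepared to rebuild) after the load.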
Jurij Modic
ASCII a stupid question, get a stupid ANSI
24 hours in a day .... 24 beer in a case .... coincidence?
-
Re:
Dropped the index; still not much help, the table is taking too long to load. By the way, does anyone know how you would extract data from a table to flat files? Can Toad do an export if the data is over 2 GB?
Also, please help: how would I recreate the index in parallel and with no logging? What is the command? Thanks a lot.
-
1. The BUFFER size is too small. Increase it to 100 MB.
2. Use the DIRECT=Y option on the export; direct-path dump files import noticeably faster. (Note that imp itself has no DIRECT parameter; direct path is an export-time option.)
3. Set the table into NOLOGGING mode.
4. If you have multiple CPUs, use the PARALLEL option when creating the table as well as when loading data into it.
5. Copy the dump file onto the server where Oracle is running. If the file sits on another server, you will hit a network bottleneck.
I saw an 80% reduction in import time when I used a direct-path export.
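Putting the list above together, a hedged sketch of a rerun of the original pipe-and-import script with a larger buffer (paths, names, and the buffer value are illustrative; the userid is left as a placeholder):

```shell
# Named pipe feeds the decompressed dump straight into imp
mknod /tmp/imp_pipe_ccd p
uncompress < expabc.dmp.Z > /tmp/imp_pipe_ccd &

# BUFFER is in bytes (here ~100 MB); COMMIT=Y commits per array insert.
# Replace <user>/<password> with real credentials.
imp userid=<user>/<password> file=/tmp/imp_pipe_ccd \
    fromuser=abc touser=abc log=imp_abc.log \
    commit=y ignore=y indexes=n buffer=104857600
```

Setting the target table to NOLOGGING (`ALTER TABLE ... NOLOGGING;`) before the run, and rebuilding indexes afterwards, covers points 3 and 4.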
[Edited by tamilselvan on 07-25-2001 at 12:23 PM]
-
Re: re
Originally posted by imose
does anyone know how you would extract data from a table to flat files? Can Toad do an export if the data is over 2 GB?
You can enter SQL*Plus and spool the information out to a file.
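For the flat-file question, a sketch of the SQL*Plus spool approach (table and column names are placeholders; for very large tables, split the extract by key range to stay under client-side file limits such as Toad's 2 GB):

```sql
-- Run in SQL*Plus; these settings keep the output clean for a flat file
SET HEADING OFF FEEDBACK OFF PAGESIZE 0 LINESIZE 1000 TRIMSPOOL ON
SPOOL /tmp/big_table.dat

-- Pipe-delimited output; columns are placeholders
SELECT col1 || '|' || col2 || '|' || col3
FROM   abc.big_table;

SPOOL OFF
```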
-
Re:
Hey guys,
The import sped up and completed way ahead of schedule. Apparently two imports were running concurrently; when one ended, the other sped up.
Thanks for your advice.
Now, DIRECT=Y: that would be added to the script for future reference?
And how do you tell what buffer size to use next time? Is there some formula to use?
Thanks, everyone!
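On the buffer-size question: the rule of thumb in Oracle's utilities documentation is roughly buffer bytes = rows per array insert x maximum row size. Since the tables were analyzed, the dictionary can supply a row-length estimate (the 1000 rows-per-array figure below is an assumption, and the table name is a placeholder):

```sql
-- Estimate a BUFFER value from analyzed statistics:
-- rows fetched per array insert * average row length in bytes
SELECT 1000 * avg_row_len AS suggested_buffer_bytes
FROM   user_tables
WHERE  table_name = 'BIG_TABLE';
```

Rounding up generously is harmless; the buffer only bounds how many rows are inserted per array operation.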