-
What is the fastest way to do an import? Two of my tables together have 30 million rows.
What should TEMP be set to, and is a larger SGA with more db_block_buffers the only choice? Please throw in your best tips for a fast import.
-
Use commit=Y,
otherwise you will have trouble with your rollback segments.
Gert
-
But why does it fail and complain about indexes and constraints, and that the TEMP tablespace is full? After a reboot, TEMP is free again.
-
Import is just another process that stuffs data into the database. An improperly tuned database will always be the bottleneck in an import.
That being said, there are a couple of ways you can help imp do its job:
1. Use direct=y in export. A larger db_block_buffers will not help you if you used direct=y, because the operation bypasses the buffer cache and drops the data into the files directly.
2. Don't pre-create your indexes; let imp create them when it is ready to. Pre-creating your indexes will put an unnecessary burden on your index disks.
3. Use the commit=y flag along with an adequate buffer= setting. The default buffer size is inadequate. We have seen performance increase by up to 4x with commit=y and buffer=4000000.
4. Make sure your data files are separated from your TEMP and RBS tablespaces.
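Putting the suggestions above together, a typical pair of commands might look like this. This is only a sketch: the username/password, dump file name, and table names are placeholders, and you should check the exact flags against your exp/imp release.

```
# Direct-path export (direct=y bypasses the SQL layer on the way out)
exp system/manager file=big_tables.dmp tables=(ORDERS,ORDER_LINES) direct=y

# Import with frequent commits and a larger array buffer (buffer= is in bytes)
imp system/manager file=big_tables.dmp tables=(ORDERS,ORDER_LINES) \
    commit=y buffer=4000000 log=big_tables_imp.log
```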
Jeff Hunter
-
TEMP is used when creating indexes and constraints, so COMMIT=y will probably not help out here. Your only solutions are to create a larger TEMP tablespace (preferred) or to pre-create your indexes (which will cause unnecessary I/O during the import).
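A minimal sketch of the preferred option, enlarging TEMP by adding a file; the path and size here are placeholders:

```
sqlplus "/ as sysdba"

-- Give large index builds during import room to sort.
-- If TEMP was created as a true TEMPORARY tablespace, use ADD TEMPFILE instead.
ALTER TABLESPACE temp ADD DATAFILE '/u03/oradata/PROD/temp02.dbf' SIZE 500M;
```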
Jeff Hunter
-
Hi Marist:
Thanks for the feedback, that was fast!
Could you please explain your point #2 again? Do you mean to say that I should drop the indexes after the export and get them back through the dump?
How long should a table of 25MM rows take?
On point 3: buffer is in blocks, right?
Thanks
-
-
My apologies, I didn't see the other posts about direct.
The other way that could go faster is to first generate the index file during import by using indexfile=xxxx.txt.
Then, on the real import, use indexes=n. This won't create the indexes, so you will get a faster import. Afterwards you can use that xxxx.txt to generate the indexes by executing it as a script.
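That workflow, sketched with placeholder credentials and file names (check the flags against your imp release):

```
# 1. Extract the CREATE INDEX statements only; this run loads no rows
imp system/manager file=big_tables.dmp full=y indexfile=create_indexes.sql

# 2. Load the data without building any indexes
imp system/manager file=big_tables.dmp full=y indexes=n commit=y buffer=4000000

# 3. Build the indexes afterwards from the generated script
sqlplus system/manager @create_indexes.sql
```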
-
2. It kinda depends on what you are doing to import your data. If you dropped the table after the exp and are trying to get it back, then the indexes will already be dropped. When imp inserts a row, the database performs just one I/O.
If you are just truncating the tables you want to import into, then the indexes will remain intact. Then, whenever imp inserts a row into the table, it will also be reflected in the index (two I/Os).
It's a trade-off of sorts. If your data and indexes are distributed well, the import will take about the same time either way. Pre-creating the indexes really puts a burden on the I/O subsystem when things are not distributed well.
3. Yes, db_block_buffers is in number of db_block_size blocks. (The imp buffer= parameter, on the other hand, is specified in bytes.)
How long will it take to import 25M rows? That depends on your CPU, how much RAM you have, how fast your disks are, how your database is laid out, how much RAID cache you have, the size of the rows being imported, etc. On a 2x400 MHz RISC box with 15K rpm disks and 256 MB of RAID cache, I would expect that import to be done in an hour. On a Windows NT box with one 200 MHz CPU and 5400 rpm disks, I would expect the import to run two days, and you would have to reboot at least a dozen times.
Jeff Hunter
-
You hate Windows NT, don't you? :)