Okay, I'll be migrating my current 7.3.4 system to 9i Release 2 in a couple of weeks. We are about finished with our testing and are working on the migration plan. We have done the migration a couple of times in the test environment and are looking for ways to speed the export up.
The current argument is to drop the major indexes, do the export, and rebuild the indexes on the new machine. I thought we could just do the export without the indexes. Either way should speed the export up considerably.
My current export is a shade over 3gigs and takes about 1h40m on my system with all users still on it.
As a test, I went to my development box to time the process. I did a full export that was 1.7 GB and took 28 minutes. I then did the same export, adding the INDEXES=N option.
Both exports took the same time and are the same size. Why? I thought the INDEXES=N option would cut down on export time and dump file size.
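For what it's worth, you can see this yourself by comparing the two runs side by side (credentials and file names below are placeholders). Since INDEXES=N only drops the CREATE INDEX DDL from the dump, the sizes come out virtually identical. The option that actually does cut export time on this platform is direct path export (DIRECT=Y), which bypasses the SQL evaluation layer:

```
REM Conventional exports -- with and without index DDL; sizes will be
REM nearly identical because only the DDL text differs, not index data
exp system/manager FULL=Y FILE=full_idx.dmp   LOG=full_idx.log   INDEXES=Y
exp system/manager FULL=Y FILE=full_noidx.dmp LOG=full_noidx.log INDEXES=N

REM Direct path export -- reads table blocks directly, skipping the SQL
REM layer; usually noticeably faster for large exports
exp system/manager FULL=Y FILE=full_direct.dmp LOG=full_direct.log DIRECT=Y
```

One caveat: if memory serves, older releases don't let you combine DIRECT=Y with CONSISTENT=Y, so it fits best on migration day when the users are already kicked off.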
I think the export with INDEXES=Y just saves the DDL required to create the index; saving the actual index data would be redundant. But your plan might make the import faster in any event.
Gandolf is correct: only the DDL is exported for the indexes. Either way you will be building the indexes, so why not let import do it? Import will load the table first, then build the indexes; it will also implicitly do an analyze. So I say let import do it.
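To let import do the index builds you don't need anything special; INDEXES=Y is the default on imp, and it builds each table's indexes right after loading that table's rows. A minimal sketch (credentials and file names are placeholders):

```
REM Full import; IGNORE=Y skips "already exists" errors for any
REM objects you pre-created on the new box
imp system/manager FULL=Y FILE=full.dmp LOG=full_imp.log IGNORE=Y
```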
You may want to specify ...
BUFFER=? (I use 4000000)
in your parfile. This is useful for large tables: with COMMIT=Y also set, Oracle will commit after each buffer-sized batch rather than trying to swallow the whole table in one transaction.
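As a sketch, that parfile might look like the following (file names are examples; BUFFER is in bytes, and note the periodic commits only kick in when COMMIT=Y is also set, otherwise imp commits once per table):

```
REM imp.par -- example import parameter file
FULL=Y
FILE=full.dmp
LOG=full_imp.log
BUFFER=4000000
COMMIT=Y
```

Then run it with `imp system/manager PARFILE=imp.par`.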
Okay, I can see how the DDL is saved. But to continue the thought: what if the indexes are dropped? That is my next test. Would that reduce my export time and dump file size? That index data is stored somewhere.
I am going to turn the statistics off as well. Once we do the migration, we are going to be modifying quite a few of the objects and creating new objects. We plan to do a schema analyze as a last step, so stats will be taken care of.
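On the export side, turning statistics off is just one more parfile line. A sketch of what the migration-day export parfile might look like, assuming you also skip the index DDL since you plan to rebuild and analyze afterward (file names are placeholders):

```
REM exp.par -- example export parameter file
FULL=Y
FILE=prod_full.dmp
LOG=prod_full.log
STATISTICS=NONE
INDEXES=N
```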
You can drop them, sure. But frankly 1.7 GB is a small database, so why bother? I don't think you will save a lot of time. Just make sure you do a full backup so that you have the DDL. You may want to take a full export with ROWS=N.
This will export all the structure (indexes included) and leave the data behind. It takes about a minute to run and is a lot easier to read than a full .DMP file. Just an extra precaution, IN ADDITION TO YOUR FULL EXPORT!
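A sketch of that structure-only export, plus a trick for getting readable DDL out of it: imp's INDEXFILE option writes the CREATE statements to a plain SQL file without actually importing anything (credentials and file names are examples):

```
REM Structure only: all DDL, no rows
exp system/manager FULL=Y ROWS=N FILE=structure.dmp LOG=structure.log

REM Extract readable index DDL (and table DDL as comments) from the
REM dump into a text file you can keep with the migration plan
imp system/manager FULL=Y FILE=structure.dmp INDEXFILE=create_ddl.sql
```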
How long does the full import take?
Are you using constraints (R.I.) on this box?
It should not be that bad even with indexes and analyzes.
1.7 gigs is my dev box. My production export is taking 1hour and 45 mins and is a shade over 3Gigs. Now, I know the time issue is because I am exporting a read-consistent version of the database while users are still on. So, on migration day, the export will speed up once I kick all users off.
The full import takes about 45 mins or so. I am not worried about that; I can't really shave time there. But if I can cut that export time in half, say 45 mins versus 100, then that's good. We are a 24x7 shop and I am trying to minimize the downtime.
I will be doing both a full export and a ROWS=N export off my test box before we wipe it and bring in the production data. I am currently setting up a copy of my instance on my desktop so I can use TOAD to do a schema comparison after we are done. If I can get my second server up and going by the end of the month, there will be a comparison version on it.
Sorry, not sure what constraints (R.I.) is, so can't tell ya.
Well, I said data, didn't I? Because the original poster seems to insist that by dropping the index he can speed up the export, which is NOT true at all: index data is NOT exported. When you import, you don't import index data; you create the indexes.