We've got a load process that is taking a huge amount of time (188.8.131.52, Solaris 8), and from looking at the code the statements are just using INSERT /*+ APPEND */ without disabling indexes etc. My thoughts on speeding this up are:
Turn logging off for the tables involved
mark indexes unusable
alter session set skip_unusable_indexes=true
load data using insert append
rebuild indexes unrecoverable
turn logging back on for tables
any comments on this being a reasonable high level approach or not?
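For illustration, the steps above might look something like this (table, index and staging names are made up, and this assumes the database isn't forcing logging):

```sql
-- Hypothetical sketch of the proposed sequence; my_table, my_table_idx
-- and my_stage are made-up names.
ALTER TABLE my_table NOLOGGING;                   -- 1. logging off
ALTER INDEX my_table_idx UNUSABLE;                -- 2. mark index unusable
ALTER SESSION SET skip_unusable_indexes = TRUE;   -- 3. skip it in this session

INSERT /*+ APPEND */ INTO my_table                -- 4. direct-path insert
SELECT * FROM my_stage;
COMMIT;                                           -- a commit is needed before
                                                  -- the table can be re-read
ALTER INDEX my_table_idx REBUILD NOLOGGING;       -- 5. rebuild without redo
ALTER TABLE my_table LOGGING;                     -- 6. logging back on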
A lot depends on how the DB is being used during this load. If you've got users on it then disabling the indexes is really going to p*ss them off.
Also, you are skipping unusable indexes at session level. What about the other sessions accessing the data?
You've not said what the source of the data is. Have you identified that it is the insert portion of the process that is at fault, and not the read?
I don't want to sound like I'm teaching your grandmother to suck eggs, but have you tried this out and measured the total time? I've worked on a few systems where the index rebuilds alone would take many hours. It may be that using your method the data would get in quicker, but you would then be rebuilding indexes for many hours afterwards...
Also, do you have any triggers on the tables? Can they be disabled and re-enabled without affecting the integrity of the data? What happens if they are disabled and a user does something that would normally fire the trigger?
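If disabling them is acceptable, it's a one-liner each way (the table name below is made up). Bear in mind anything that happens while they are disabled will not fire them:

```sql
-- Hypothetical table name; any DML against my_table while the triggers
-- are disabled will NOT fire them, so that gap has to be acceptable.
ALTER TABLE my_table DISABLE ALL TRIGGERS;
-- ... run the load ...
ALTER TABLE my_table ENABLE ALL TRIGGERS;
```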
If possible I try to leave the schema alone during data loads. My approach is, if the system can't deal with the data loads using "normal" methods, I've probably got the system design wrong, but there are always exceptions ;-)
Last edited by TimHall; 06-21-2005 at 04:00 AM.
Thanks for the reply, Tim.
Originally Posted by TimHall
We're planning to have the system "to ourselves" for this process, so there should be no other sessions accessing the data (it's a one-off data conversion). I've yet to see the breakdown of timings from the last attempt (80 hrs!), but I was just thinking of a high-level process that we could benchmark against the current method. The data is initially loaded into staging tables, then inserted into the application tables using direct-path insert.
OK. So the read should not be a major problem. I was just concerned you might have some queries from inefficient pipelined table functions or something like that.
If you have the system to yourself then you are laughing.
One thing to consider: if you are removing indexes from a number of tables you are going to affect your FK validations. You will need to weigh up the impact of this against the write speed improvements. It may be that you could disable the FKs too, but be careful not to load rubbish data or you will spend the rest of your life fixing the FK issues.
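A minimal sketch of disabling a foreign key around a load (the table and constraint names are made up):

```sql
-- Hypothetical names. ENABLE (VALIDATE is the default) re-checks every
-- existing row, so rubbish data surfaces at re-enable time instead of
-- load time -- which can itself take a while on a big table.
ALTER TABLE child_table DISABLE CONSTRAINT child_fk;
-- ... run the load ...
ALTER TABLE child_table ENABLE CONSTRAINT child_fk;

-- If re-validation is too slow and the data is fully trusted:
-- ALTER TABLE child_table ENABLE NOVALIDATE CONSTRAINT child_fk;
```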
I would trace your load process and see where most of the time is being spent. You may be trying to speed up something that doesn't take the greatest amount of time.
If you've got a lot of PL/SQL running then the profiler is a great way to see which lines of code are taking the longest to complete. See:
As Jeff says, tracing the SQL is a very good idea also. This might help:
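For reference, a minimal way to trace the session and profile the PL/SQL might look like this (the run comment is made up, and DBMS_PROFILER needs to be installed):

```sql
-- SQL trace for the current session; the trace file lands in
-- user_dump_dest on the server and can be formatted with tkprof.
ALTER SESSION SET sql_trace = TRUE;
-- ... run the load ...
ALTER SESSION SET sql_trace = FALSE;

-- PL/SQL profiler:
EXEC DBMS_PROFILER.start_profiler(run_comment => 'data load');
-- ... run the PL/SQL load ...
EXEC DBMS_PROFILER.stop_profiler;
-- Then query plsql_profiler_units / plsql_profiler_data for the
-- per-line timings.
```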
Have you considered splitting the flat file into multiple files and loading them in parallel?
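Splitting the file is straightforward at the OS level. A hypothetical example ("data.dat" and the "chunk_" prefix are made-up names):

```shell
# Hypothetical example: split a flat file into four equal chunks by line
# count so each piece can be fed to its own loader session in parallel.
seq 1 1000 > data.dat                  # stand-in for the real flat file
lines=$(wc -l < data.dat)
per_chunk=$(( (lines + 3) / 4 ))       # ceiling of lines/4
split -l "$per_chunk" data.dat chunk_  # produces chunk_aa .. chunk_ad

# Each chunk could then be loaded concurrently, e.g.:
#   for f in chunk_*; do sqlldr ... data="$f" direct=true & done; wait
```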
How big is the flat file?
Use the direct path option of SQL*Loader, and recreate the indexes in parallel afterwards. In my experience this approach works faster than any other method.
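A hypothetical invocation (the credentials, control file and log names are all made up):

```shell
# direct=true requests a direct-path load, bypassing conventional inserts.
sqlldr userid=scott/tiger control=load.ctl log=load.log direct=true
```

The indexes can then be rebuilt afterwards with something like `ALTER INDEX my_table_idx REBUILD PARALLEL 4 NOLOGGING;` (again, the index name is made up).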