The fact table in those queries is partitioned. As I mentioned before, there are two changes here: 1. I created some bitmap indexes on some foreign key columns; I was planning to enable star transformation. 2. We loaded some historical data into the fact table over the last couple of months.
Well, you'll definitely be wanting to enable star transformations. Try that and see what happens.
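For reference, a minimal sketch of trying it out, using the standard Oracle parameter (the query itself is a placeholder, not from this thread):

```sql
-- Enable star transformation for this session only, then compare plans.
ALTER SESSION SET star_transformation_enabled = TRUE;

EXPLAIN PLAN FOR
SELECT /* your fact/dimension join goes here */ COUNT(*)
  FROM some_fact_table;          -- hypothetical table name

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

With bitmap indexes in place on the fact table's foreign key columns, the transformed plan should show bitmap merge/AND operations driven by the dimension predicates.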
What column are you partitioning on, and by what method? If it is by SERVICE_CASE_PROCESS_DATE_KEY then I doubt that you're getting the partition pruning that you deserve.
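One way to check whether you're getting pruning at all is to look at the Pstart/Pstop columns in the plan output. A sketch, with hypothetical table and column names:

```sql
EXPLAIN PLAN FOR
SELECT COUNT(*)
  FROM service_case_fact                         -- hypothetical fact table
 WHERE service_case_process_date_key BETWEEN 20040101 AND 20040131;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- In the Pstart/Pstop columns: a narrow partition range means pruning
-- occurred; KEY means it's resolved at run time; the full 1..n range
-- means every partition gets scanned.
```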
Is this query generated by some tool or other? I'm intrigued by the complex in-line view you have in there.
The table is partitioned range-hash: range on model_year, with hash subpartitions on vehicle_key.
I am going to implement star transformation after this performance issue gets resolved. I tried enabling star transformation at the session level and then checked the explain plan. The plan was much better. But because the entire database is performing so badly right now, it didn't make much difference in response time.
Did you set optimizer_mode = all_rows when you enabled star transformation on your database? What would be the best optimizer_mode for a data warehouse environment? Currently, our optimizer_mode = choose.
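For what it's worth, ALL_ROWS optimizes for total throughput, which generally suits warehouse reporting, while CHOOSE falls back to the rule-based optimizer when no statistics exist. It can be tried per session before touching the init parameter:

```sql
-- Session-level experiment; no instance restart needed.
ALTER SESSION SET optimizer_mode = ALL_ROWS;
```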
In retrospect, it would have been a good idea to store execution plans for queries while they were running well, for comparison when things go badly. Not too late to start on that now, though.
When you said to store execution plans for queries whilst they are running well, are you referring to using the PLAN_TABLE for all queries the application runs against the database? How on earth would you go about collecting that? Sounds like a huge project!! If you know some sneaky trick to make said collection easier, I'd be very happy to hear it.
...I assume, you aren't talking about using stored outlines...
I shouldn't think it necessary to do all of them, just a good representative sample. You're hopefully not using bind variables, so the same query with different literal values could yield different execution plans anyway.
Commonly executed reports would be a good place to start, along with the longer-running reports. You might be able to set tracing on at the instance level and capture most of them, but if your front-end metadata lives in the same database then you'll also capture a load of statements you're not interested in. Watch your disk space if you go down that route.
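A low-effort way to keep a history is to copy PLAN_TABLE rows into a timestamped archive table after explaining each representative query. A sketch; `plan_history` and the statement id are made-up names:

```sql
-- One-time setup: an archive table shaped like PLAN_TABLE plus a timestamp.
CREATE TABLE plan_history AS
SELECT SYSDATE AS saved_at, p.* FROM plan_table p WHERE 1 = 0;

-- For each representative report:
EXPLAIN PLAN SET STATEMENT_ID = 'monthly_sales' FOR
SELECT COUNT(*) FROM some_fact_table;            -- placeholder query

INSERT INTO plan_history
SELECT SYSDATE, p.* FROM plan_table p
 WHERE p.statement_id = 'monthly_sales';
```

When things later go badly, you diff the current plan against the archived rows for the same statement_id.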
What could we do with 'library cache lock'?
It seems that's the reason for our database slowdown. What causes 'library cache lock'?
Taken from the manuals:
Library Cache Locks
When a database object, such as a table, view, procedure, package, or index, is referenced during the parsing of a SQL, DML, DDL, PL/SQL, or Java statement, the process that parses the statement acquires a library cache lock. In Oracle9i, the lock is held only until the parse or compilation completes, or, for the duration of the parse call.
So my guess is you aren't using bind variables, so you're spending a lot more time hard parsing than you should be.
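You can get a feel for that from the instance statistics; literal-laden SQL also tends to show up as many near-identical cursors in the shared pool. A quick sketch:

```sql
-- Ratio of hard parses to total parses:
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('parse count (total)', 'parse count (hard)');

-- Statements that differ only in literals usually share a prefix:
SELECT SUBSTR(sql_text, 1, 60) AS stmt_prefix, COUNT(*) AS cursors
  FROM v$sql
 GROUP BY SUBSTR(sql_text, 1, 60)
HAVING COUNT(*) > 10
 ORDER BY cursors DESC;
```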
Use a 10046 trace to get the wait events, not v$session_wait.
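For the archives, the usual incantation, formatted afterwards with tkprof:

```sql
-- Level 8 includes wait events; level 12 would add bind values too.
ALTER SESSION SET events '10046 trace name context forever, level 8';
-- ... run the slow query here ...
ALTER SESSION SET events '10046 trace name context off';
-- Then, from the trace file in user_dump_dest:
--   tkprof <tracefile>.trc report.txt sys=no waits=yes
```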
I really don't think that you want bind variables here. You want the optimizer to be sensitive to column histograms in the fact table (although the heavy use here of synthetic keys may be making that impossible). (Yes, I know. Ironic, huh?)