We had a situation last night where Oracle CPU usage hit 100% and pretty much stayed there. We ended up shutting the database down and failing over to an alternative database (we are lucky enough to have two live databases).

Initial investigations seem to indicate that the query execution plans were to blame: the plans the optimiser was choosing were very inefficient (very high consistent gets). I say this because queries that are normally sub-second were taking 10-15 seconds, and the QEPs were completely wrong. The overall effect of several hundred sessions all hitting the database with bad QEPs was a flat-lining box. I have since analysed the database (estimate statistics, 50% sample size) and the same queries now run fine.
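For reference, the regather I ran was along these lines (shown here as a DBMS_STATS sketch rather than the old ANALYZE syntax; the schema and table names are placeholders, not our real ones):

```sql
-- Regather optimiser statistics at a 50% sample.
-- 'OWNER' / 'MY_TABLE' are placeholders for illustration only.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'OWNER',
    tabname          => 'MY_TABLE',
    estimate_percent => 50,
    cascade          => TRUE   -- also regather the associated index statistics
  );
END;
/
```

(In practice I ran it across the whole schema; GATHER_SCHEMA_STATS takes essentially the same parameters.)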

The statistics on the tables in question were about 4 weeks old (the oldest we've ever let them get, as they are normally refreshed every week), and I'm 99.99% certain that nothing else ran against the system at the time (no large updates/deletes/new indexes/etc.).

My questions are:
Has anyone else seen similar behaviour where QEPs suddenly changed?
Is there a threshold or something that Oracle uses to mark statistics as stale after a certain number of days?
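On the staleness point, my (possibly wrong) understanding is that with table monitoring enabled, Oracle flags statistics as stale based on the proportion of rows modified (roughly 10%) rather than elapsed days. On 10g and later something like this should show what the database itself considers stale (the schema name is a placeholder):

```sql
-- Flush the in-memory DML monitoring counters to the dictionary first,
-- then list the tables Oracle currently flags as having stale statistics.
EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

SELECT owner, table_name, stale_stats, last_analyzed
FROM   dba_tab_statistics
WHERE  owner = 'OWNER'          -- placeholder schema
AND    stale_stats = 'YES';
```

Happy to be corrected on the 10% figure if someone knows the exact rule.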