DBMS_STATS and Locking
When does gathering statistics lock tables and prevent DML?
I'm new at this, so forgive me if this is a dumb question.
Never. The Oracle mantra is "Readers don't block writers, writers don't block readers."
DBMS_STATS is a reader.
Now that I've had more time to read other notes/threads/books, I did find something saying that when indexes are analyzed, a shared lock is placed on the related table, which would prevent DML on that table.
Do you concur?
No, sounds wrong to me. Do you have a reference for that?
Towards the bottom of the page, in the section titled "Statistics Locks".
I think that refers only (if obscurely) to the ANALYZE INDEX ... VALIDATE STRUCTURE command, and since 9.2 (I think?) that also has an ONLINE clause which allows DML. On the face of it I believe that article is wrong in respect of analyzing indexes, and in particular "Preferably avoid all activities during the statistics generation phase" is very misleading.
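For reference, here's the distinction in command form. This is a sketch with hypothetical object names, and note the caveat that (as I recall) ONLINE validation doesn't populate the INDEX_STATS view:

```sql
-- ANALYZE ... VALIDATE STRUCTURE takes a lock that blocks DML
-- on the underlying table for the duration of the validation
ANALYZE INDEX my_index VALIDATE STRUCTURE;

-- Since 9iR2 the ONLINE clause allows concurrent DML
-- (caveat: the ONLINE form does not populate INDEX_STATS)
ANALYZE INDEX my_index VALIDATE STRUCTURE ONLINE;
```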
Thanks for clarifying that. I'll do some more investigating and not take that article at face value.
So let me ask you this...
I thought one reason to use estimate instead of compute on really large tables was to avoid a long-running analysis during which, for example, things could be locked up.
So if no locking occurs, what other negative effects does analyzing/gathering stats have that would make me estimate rather than compute?
AskTom says with great conviction that ANALYZE (without VALIDATE) does NOT lock anything up (see http://asktom.oracle.com/pls/ask/f?p...:2045859310335).
But I still want to know, why not always do compute?
Is the main reason simply the extra load that is put on the database during the analysis (as implied in the above AskTom article)?
If so, then it would be acceptable to gather stats in the middle of the night, when the fewest users are impacted. Yes?
Last edited by dbbyleo; 07-27-2006 at 04:51 PM.
It's a matter of trading off the resource usage of the two methods against their relative accuracy. In many cases estimating can give very good results on a very low percentage -- 1-3% for example. The best practice is to run a compute, then store the values you've gathered and compare them against reducing values of the estimate percent until you start getting an unacceptable deviation between them.
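To make the comparison concrete, here's a sketch of both approaches using DBMS_STATS (schema and table names are hypothetical; NULL for ESTIMATE_PERCENT means a full compute):

```sql
-- Full compute: reads every row, most accurate, most resource-intensive
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SCOTT',
    tabname          => 'BIG_TABLE',
    estimate_percent => NULL,   -- NULL = compute (100% of rows)
    cascade          => TRUE);  -- also gather stats on the table's indexes
END;
/

-- Low-percentage estimate: far cheaper, often nearly as accurate
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SCOTT',
    tabname          => 'BIG_TABLE',
    estimate_percent => 2,      -- sample roughly 2% of rows
    cascade          => TRUE);
END;
/
```

You can then compare the resulting values in USER_TABLES and USER_TAB_COLUMNS (row counts, distinct values, etc.) between the two runs and stop lowering the percentage once the deviation becomes unacceptable.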
You also need to consider whether you want to keep gathering statistics very regularly. You may find that an execution plan changes for the worse at some point due to a change in statistics, so preserving old statistics may also be advisable.
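Preserving old statistics can be done with the export/import routines in DBMS_STATS. A sketch, again with hypothetical names:

```sql
-- Create a user stats table to hold exported statistics
BEGIN
  DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SCOTT', stattab => 'STATS_BACKUP');
END;
/

-- Save the current statistics before regathering
BEGIN
  DBMS_STATS.EXPORT_TABLE_STATS(
    ownname => 'SCOTT', tabname => 'BIG_TABLE',
    stattab => 'STATS_BACKUP', statid => 'PRE_REGATHER');
END;
/

-- If a plan regresses after regathering, restore the old statistics
BEGIN
  DBMS_STATS.IMPORT_TABLE_STATS(
    ownname => 'SCOTT', tabname => 'BIG_TABLE',
    stattab => 'STATS_BACKUP', statid => 'PRE_REGATHER');
END;
/
```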