I have built a partitioned table with 193 million rows, and each partition has around 20 million rows. If I analyze the table partition by partition with COMPUTE STATISTICS, it gives ORA-600:
unable to extend temp segment in tablespace TEMP.
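For reference, the per-partition analyze being run here would look something like this (the table and partition names below are placeholders, not the poster's actual names):

```sql
-- Hypothetical names; COMPUTE scans and sorts the whole partition,
-- which is what drives the large TEMP usage described above.
ANALYZE TABLE big_table
  PARTITION (p_2001_01)
  COMPUTE STATISTICS;
```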
My TEMP tablespace is now 4 GB; I started it at 1 GB.
When increasing the TEMP tablespace didn't solve the problem, I analyzed with ESTIMATE STATISTICS instead.
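The estimate run that works around the problem would be along these lines (again, hypothetical names, and the sample percentage is illustrative):

```sql
-- Sampling a fraction of the rows keeps the sort in TEMP much smaller
-- than a full COMPUTE over ~20 million rows per partition.
ANALYZE TABLE big_table
  PARTITION (p_2001_01)
  ESTIMATE STATISTICS SAMPLE 10 PERCENT;
```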
Is it OK to work with ESTIMATE STATISTICS instead of COMPUTE STATISTICS?
Does COMPUTE STATISTICS give better performance than ESTIMATE STATISTICS?
What kind of index would give the best performance on the zip code (VARCHAR2(5)) column in the above table?
Yes, this is bug 1307247, which will be fixed in Oracle9i and Oracle 18.104.22.168. These versions/patchsets are not yet available.
The ETA for 22.214.171.124 is the end of February 2001. You would, of course, have to upgrade to 8.1.7 before applying this patchset.
...and an additional quote:
This bug can cause a system hang. It did on one of our instances, twice. Oracle has a patch for this bug for 126.96.36.199.
I had to use the 188.8.131.52 patch to fix a totally different ORA-600 bug myself. I think ORA-600 is Oracle-speak for "We'll fix it later."
It sounds like you are running a DSS application, judging from the size. I doubt 4 GB for TEMP is going to be anywhere near enough space; I've got about 40 GB allocated to mine so far. It does depend on the application and what you are doing, but 4 GB seems like a small amount if you are running really large queries.

Have you tuned your sort_area_size and TEMP tablespace? Also make sure that you create the temporary tablespace as a true temporary tablespace (type TEMPORARY) so Oracle will dynamically allocate and deallocate the space.

Finally, there is an issue with PMON not keeping up with cleanup. There is a way to force PMON to work a little harder to clean the space up; it is described on the Oracle internals site of Steve Adams (author of O'Reilly's "Oracle8i Internal Services for Waits, Latches, Locks and Memory"), and a script is provided there.
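A minimal sketch of creating a true temporary tablespace as suggested above (the file path and sizes are placeholders you would adjust for your system):

```sql
-- Assumed path and sizes. A TEMPFILE-based temporary tablespace lets
-- Oracle allocate and deallocate sort space dynamically, instead of
-- treating it like a permanent tablespace.
CREATE TEMPORARY TABLESPACE temp
  TEMPFILE '/u01/oradata/ORCL/temp01.dbf' SIZE 4096M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
```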
Finally, ESTIMATE STATISTICS at 50% will actually do a COMPUTE STATISTICS, because the overhead isn't much different. I believe this happens at anything over 30% (it might be 40%; I can't remember, I'll go back and look it up later), but I know positively that 50% will just give you the same issue.
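One way to check this behavior yourself, sketched with a hypothetical table name: run a high-percentage estimate and then look at what the dictionary recorded. When Oracle silently upgrades the estimate to a compute, SAMPLE_SIZE ends up equal to NUM_ROWS.

```sql
-- A high sample percentage may be silently upgraded to a full compute.
ANALYZE TABLE big_table ESTIMATE STATISTICS SAMPLE 50 PERCENT;

-- Inspect the recorded statistics: SAMPLE_SIZE matching NUM_ROWS
-- indicates every row was read, i.e. a compute, not an estimate.
SELECT num_rows, sample_size, last_analyzed
  FROM user_tables
 WHERE table_name = 'BIG_TABLE';
```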
Senior Database Administrator