I am in trouble doing the performance tuning of a table (Oracle 126.96.36.199) with 200 million records. The application that uses the table creates ad-hoc queries (queries that are not defined beforehand). This forces us to create individual indexes on each of the fields (B*tree or bitmap), not composite ones, as the queries are not known in advance. The range queries and aggregate queries are very slow and take up to 9 minutes.
The same application is running on Sybase IQ Server 12 with a similar setup of indexes and gives the results back in 2-6 seconds. I read about the indexes in Sybase: Sybase IQ has two other kinds of indexes, called HG (High Group) and HNG (High Non-Group). These indexes help with aggregate and range queries and bring the results back in seconds.
Does anybody have an idea how to go about this problem? Any suggestions to improve the performance?
For your situation, I believe you should investigate partitioning and (above all) materialized views.
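As a rough sketch of both ideas (the `sales` table and its columns here are placeholders for your own schema): a materialized view with query rewrite lets the optimizer answer matching aggregate queries from a small pre-summarized table instead of scanning 200 million rows, and range partitioning lets range queries prune whole partitions.

```sql
-- Hypothetical pre-aggregation; with QUERY REWRITE enabled the optimizer
-- can transparently answer matching SUM/COUNT queries from this MV.
CREATE MATERIALIZED VIEW sales_by_region_mv
  BUILD IMMEDIATE
  REFRESH ON DEMAND
  ENABLE QUERY REWRITE
AS
SELECT region,
       TRUNC(sale_date, 'MM') AS sale_month,
       SUM(amount)            AS total_amount,
       COUNT(*)               AS cnt
FROM   sales
GROUP  BY region, TRUNC(sale_date, 'MM');

-- Query rewrite must also be switched on for the session/instance:
ALTER SESSION SET QUERY_REWRITE_ENABLED = TRUE;

-- Range partitioning (sketch): range predicates on sale_date then only
-- touch the partitions that can contain matching rows.
CREATE TABLE sales_part (
  region     VARCHAR2(30),
  sale_date  DATE,
  amount     NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION p1999 VALUES LESS THAN (TO_DATE('2000-01-01','YYYY-MM-DD')),
  PARTITION p2000 VALUES LESS THAN (TO_DATE('2001-01-01','YYYY-MM-DD')),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);
```

The MV needs to be refreshed (ON DEMAND here) after loads, so it fits best when the data is mostly read-only between batch loads.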
I don't believe creating separate non-composite indexes on every column is a very efficient way to deal with your situation, unless your ad-hoc queries always reference only one column of that table in the WHERE condition. Remember that Oracle will generally use only one B*tree index in an access path to a table row; it does not combine separate B*tree indexes (bitmap indexes are the exception, since their bitmaps can be ANDed and ORed together). At the very least, you should run tkprof over a trace of a typical session to find out which composite indexes would be most beneficial.
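In case it helps, tracing a session and feeding the trace to tkprof goes roughly like this (the trace file name is a placeholder; the actual name in `user_dump_dest` includes the server process ID):

```sql
-- In the application session whose workload you want to examine:
ALTER SESSION SET SQL_TRACE = TRUE;

-- ... run a representative set of the ad-hoc queries here ...

ALTER SESSION SET SQL_TRACE = FALSE;

-- Then, on the server, format the trace file that appeared in
-- user_dump_dest (name below is hypothetical):
--   tkprof ora_12345.trc report.txt sys=no sort=fchela
-- and look for the statements with the highest elapsed fetch time;
-- their WHERE clauses tell you which composite indexes to try.
```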
Jurij Modic ASCII a stupid question, get a stupid ANSI
24 hours in a day .... 24 beer in a case .... coincidence?
Analyze your tables and indexes. When you analyze, choose ESTIMATE STATISTICS instead of COMPUTE STATISTICS. Also change CURSOR_SHARING from EXACT to FORCE. If your OPTIMIZER_MODE is CHOOSE, I would suggest you change it to FIRST_ROWS; but you would want to leave it as CHOOSE if it is a DWH database.
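Sketched out, with a placeholder table name, those settings look like this:

```sql
-- Estimate statistics from a sample instead of computing them exactly;
-- on a 200M-row table this is far faster and usually accurate enough.
ANALYZE TABLE big_table ESTIMATE STATISTICS SAMPLE 5 PERCENT;
ANALYZE TABLE big_table ESTIMATE STATISTICS FOR ALL INDEXED COLUMNS;

-- Let ad-hoc statements that differ only in literals share one parsed
-- cursor (cuts hard-parse overhead for generated SQL):
ALTER SYSTEM SET CURSOR_SHARING = FORCE;

-- Optimizer mode, per session or instance-wide:
ALTER SESSION SET OPTIMIZER_MODE = FIRST_ROWS;
```

FIRST_ROWS optimizes for returning the first rows quickly, which suits interactive screens; for a DWH workload that aggregates whole ranges, CHOOSE (cost-based with full statistics) is usually the better fit, as noted above.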
You can also check what kind of queries have been run on your database by querying V$SQLAREA, then decide what to do from that point.
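For example, something like this lists the statements still in the shared pool, worst first (using disk reads per execution as a rough cost proxy; adjust the ORDER BY to whatever matters most to you):

```sql
-- Most expensive statements recorded in the shared pool.
SELECT sql_text,
       executions,
       disk_reads,
       buffer_gets
FROM   v$sqlarea
WHERE  executions > 0
ORDER  BY disk_reads / executions DESC;
```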