[QUOTE][i]Originally posted by chrisrlong [/i]
[B]As for the other questions, you will be best served by an outside utility. There are utilities out there that can collect all your SQL from your code (VB or whatever) and from your packages, procs and functions and generate EXPLAIN PLANS for all of them. This would tell you whether or not your indexes are being used.

Unfortunately, there isn't anything to tell you (easily) which of the 2 approaches you mentioned are better across all your SQL.

I am currently writing one and will (hopefully) be releasing it with a book I am writing, but until then you're on your own. You will have to gather up the SQL that uses those columns in those tables and see, as a whole, which strategy is better. Run them all with the single index and then with the separate indexes and see which is better.

- Chris [/B][/QUOTE]

As Chris mentions, there are a couple of approaches to identifying which indexes are
used and which are not. I studied this subject quite extensively some time ago while
trying to identify which indexes were totally useless on an overindexed database.
BTW, I think it is much easier to identify a missing index (a phone call from an unsatisfied
user will come sooner or later) than to find one that is sitting there but is never used in
the access path to the data. I came to the conclusion that there is no perfect way to find
used/unused indexes. The perfect solution would be if Oracle provided us with a
dictionary table storing the explain plans that have been used by selects and DML in
the past - something similar to V$SQLAREA. Maybe we'll be pleasantly surprised by
9i in that area too.....?

Some possible approaches, all of which have serious drawbacks:

1.) Extract all your SQLs from your source code and explain them all. The problem here
is that you can get totally different explain plans if you run them under
different users (the same table names/synonyms resolving to different tables,....)
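As a sketch of approach 1.), here is the classic EXPLAIN PLAN workflow (table, index and statement-id names are made up; PLAN_TABLE must exist, e.g. created by utlxplan.sql). Note you should run it connected as the same user the application uses, for exactly the synonym-resolution reason above:

```sql
-- Explain one statement harvested from the application source:
EXPLAIN PLAN SET STATEMENT_ID = 'app_sql_1' FOR
SELECT * FROM emp WHERE deptno = 10;

-- Display the plan hierarchically and look for your index by name:
SELECT LPAD(' ', 2 * (level - 1)) || operation
       || ' ' || options || ' ' || object_name AS plan_step
  FROM plan_table
 WHERE statement_id = 'app_sql_1'
 START WITH id = 0
CONNECT BY PRIOR id = parent_id;
```

If the index never appears as an INDEX RANGE SCAN / INDEX UNIQUE SCAN step in any of the harvested statements' plans, that is (weak) evidence it is unused - subject to the multi-schema caveat above.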
2.) Extract all your SQLs from the shared pool periodically (before they can be flushed out)
and analyze them. The same problem as above: you have no idea under which schema
they were originally run.
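A minimal sketch of the harvesting step for approach 2.). V$SQLAREA does carry a PARSING_USER_ID, which at least partly addresses the "which schema" problem, though SQL_TEXT there is truncated for long statements (V$SQLTEXT holds the full text in pieces):

```sql
-- Snapshot the cached statements; run this periodically (the cache
-- can age statements out at any time) and save the results somewhere.
SELECT u.username   AS parsing_user,
       s.executions,
       s.sql_text
  FROM v$sqlarea s, dba_users u
 WHERE s.parsing_user_id = u.user_id
 ORDER BY u.username;
```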
3.) Trace a representative session or an entire instance for a sufficiently long period, tkprof
the trace files and analyze the explain plans. This brings some overhead which might not
be acceptable in a production system.
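For approach 3.), the tracing side could look like this (the trace file name and tkprof account are hypothetical; trace files land in the directory pointed to by user_dump_dest):

```sql
-- Trace your own session:
ALTER SESSION SET timed_statistics = TRUE;
ALTER SESSION SET sql_trace = TRUE;

-- Or trace another session by sid/serial# (values are examples):
-- EXECUTE dbms_system.set_sql_trace_in_session(12, 345, TRUE);

-- Then, at the OS prompt, format the trace file, e.g.:
--   tkprof ora_12345.trc report.prf explain=scott/tiger sys=no
-- and grep the resulting plans for the index name in question.
```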
4.) If you want to analyze the usage of one particular index, you might configure your
buffer cache with a small KEEP pool, designated only for the inspected index. If after some
time this KEEP pool is still empty, you know the index was never read. This is a
viable solution only if you know that only reads are performed on the indexed table, no
writes.
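A sketch of the setup for approach 4.) (index name and pool sizing are made up; the KEEP pool is configured in init.ora on 8i):

```sql
-- init.ora (example sizing only):
--   buffer_pool_keep = (buffers:200, lru_latches:1)

-- Direct only the inspected index into the KEEP pool:
ALTER INDEX scott.emp_deptno_idx STORAGE (BUFFER_POOL KEEP);

-- Later, check whether anything was ever read into that pool:
SELECT name, physical_reads, db_block_gets, consistent_gets
  FROM v$buffer_pool_statistics
 WHERE name = 'KEEP';
```

Zero activity against the KEEP pool over a representative period suggests the index was never touched - with the read-only caveat noted above, since DML on the table maintains the index and would dirty its blocks too.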
5.) Similarly, you can create a separate tablespace and move the index you are inspecting
into that tablespace (no other segments should be in there), then observe the I/O on that
tablespace's datafile.
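Approach 5.) might be sketched like this (tablespace and index names are hypothetical):

```sql
-- Move the index into its own, otherwise empty tablespace:
ALTER INDEX scott.emp_deptno_idx REBUILD TABLESPACE idx_watch;

-- Then watch the cumulative I/O counters on that tablespace's file(s):
SELECT d.file_name, f.phyrds, f.phywrts
  FROM v$filestat f, dba_data_files d
 WHERE f.file# = d.file_id
   AND d.tablespace_name = 'IDX_WATCH';
```

PHYRDS that never moves between snapshots (again, writes aside) points at an unused index; remember V$FILESTAT counters reset at instance startup.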
6.) The approach that seemed the most promising to me when I was deeper into
this subject (I tried to implement it in combination with 4.)): the fact is that all db blocks
that are read or written go through the buffer pool, and that SYS.V$BH (or more
precisely, SYS.X$BH) provides extensive information about the state of each
block currently in the buffer pool. However, all this internal stuff is so complicated and
undocumented (I don't blame Oracle Corp. for this in any way) that I finally gave up.
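For what it's worth, the documented-view end of approach 6.) can be sketched as below (the index name is hypothetical; V$BH.OBJD matches DBA_OBJECTS.DATA_OBJECT_ID). The fundamental limitation remains: this is a point-in-time snapshot of the cache, not a history of what has ever passed through it, which is exactly why something more would be needed:

```sql
-- Count the buffers currently holding blocks of the inspected index:
SELECT o.owner, o.object_name, COUNT(*) AS buffers
  FROM v$bh b, dba_objects o
 WHERE b.objd = o.data_object_id
   AND o.object_name = 'EMP_DEPTNO_IDX'
 GROUP BY o.owner, o.object_name;
```

You would have to poll this frequently and hope to catch the index's blocks while they are still cached, which is what makes the approach so fragile.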

There are many utilities for all kinds of DBA stuff, but I don't know of any that has
an efficient and exact mechanism for this kind of used/unused index analysis.
Although this seems like one of the basic requirements, it leads me to believe that all this
is not so trivial and easy to implement after all....