I read through the posts really quick, so someone else might have suggested this already, but...

When you analyzed your tables did you:

1) analyze table <tablename> estimate statistics;
2) analyze table <tablename> compute statistics for all indexed columns;
3) analyze index <index name> compute statistics;

I usually do a compute unless the table is just too large to make that feasible (like fact tables in a DSS environment). I also like to do a compute on all indexed columns - this creates histograms on the indexed columns, which helps the optimizer out if your values are skewed (which may be your problem). And I ALWAYS do a compute on indexes - that usually runs pretty quickly, even for large tables.
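For reference, the three forms look something like this (the table and index names are just placeholders - substitute your own):

```sql
-- Estimate statistics from a sample (faster for very large tables)
analyze table big_fact_table estimate statistics sample 10 percent;

-- Compute table statistics plus histograms on all indexed columns
analyze table emp compute statistics for table for all indexed columns;

-- Compute statistics on a single index (usually quick even for big tables)
analyze index emp_status_idx compute statistics;
```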

If your column, for example, has a cardinality (number of distinct values) of, say, 3 (male, female, other) and the selectivity of each value is:

70% Male
29% Female
1% Other

You will not get an index scan if you don't create a histogram for the column. This is because the optimizer, in the absence of column statistics, assumes an even distribution of data, which means it will think that 33.33% of the rows are Male, 33.33% Female and 33.33% Other. If that were truly the case, you would rather do a full scan and avoid the index, because the index lookups would be extra processing for no benefit. When you create a histogram, the optimizer should (note should... not will - I currently have a problem where a very selective index with fresh stats still isn't being picked up by Oracle) see that your constraint is highly selective and use the index.
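As a sketch for the example above (the people table and gender column are hypothetical), building the histogram would look like this:

```sql
-- Build a histogram on the skewed column; SIZE sets the number of buckets
analyze table people compute statistics for columns gender size 3;

-- With the histogram in place, the optimizer can see this predicate
-- matches only ~1% of rows and should favor an index on gender
select * from people where gender = 'Other';
```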

Also, be aware that even with statistics there are a number of other parameters that can influence the optimizer's decision, db_file_multiblock_read_count among a host of others... This is outlined in the performance tuning guide and, I believe, the Oracle administration manuals.
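If you want to see what you're working with, you can check that parameter from SQL*Plus - roughly speaking, a higher db_file_multiblock_read_count makes full scans look cheaper to the optimizer relative to index access:

```sql
-- Show the current setting (higher values bias the optimizer toward full scans)
show parameter db_file_multiblock_read_count

-- It can also be changed at the session level for testing
alter session set db_file_multiblock_read_count = 8;
```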