Lee, can you *please* tell me the record size, or at least the fields and types?
Also, how are you measuring the time? Is the index faster simply because you ran it second and the data was already cached? You need a timer that can measure to well under a second, a quiet machine (make sure nobody else is running anything), and multiple runs of each test. That is the only way to measure query times accurately. Once you have done that, what are the times?
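To make this concrete, here is a minimal sketch of that timing discipline in Python, with an in-memory sqlite3 database standing in for your server - the table, the query, and the run count are all placeholders, not your actual schema:

```python
# Sketch only: repeated timing of a query with sub-second resolution.
# sqlite3 is used so the example is self-contained; swap in your own
# DB-API connection and query.
import sqlite3
import statistics
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO t (val) VALUES (?)",
                 [("row%d" % i,) for i in range(10_000)])

def time_query(sql, runs=5):
    """Run the query several times and return per-run wall-clock times."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()          # high-resolution timer
        conn.execute(sql).fetchall()         # force the rows to be read
        times.append(time.perf_counter() - start)
    return times

times = time_query("SELECT * FROM t WHERE val LIKE 'row1%'")
print("min %.6fs  median %.6fs" % (min(times), statistics.median(times)))
```

Note that the first run is often the slowest because the cache is cold; reporting the minimum or the median across runs is more honest than reporting a single measurement.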
As far as whether it *should* use the index or not - the optimizer will use the plan that it *believes* is the most efficient. That does not mean *always* doing an indexed Nested Loop, by a long shot. As a rule of thumb, when more than about 5% of the table's rows are retrieved, a table scan is faster. And as I alluded to in an earlier response, there are many other reasons why a table scan might be faster - more than the ones I listed.
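You can see where a rule of thumb like that comes from with some back-of-envelope arithmetic. The numbers below are illustrative assumptions (page counts, index height, rows per page), not measurements from any real optimizer:

```python
# Toy cost model: full scan reads every page once sequentially; an
# index lookup pays the index traversal plus, in the worst case, one
# random page read per matching row. All constants are assumptions.
TABLE_PAGES = 10_000     # pages touched by a full table scan
ROWS = 200_000           # total rows in the table

def scan_cost():
    return TABLE_PAGES   # one read per page, regardless of selectivity

def index_cost(selectivity, index_height=3):
    matching_rows = selectivity * ROWS
    # worst case: every matching row lands on a different data page
    return index_height + matching_rows

for pct in (0.1, 1, 5, 10):
    sel = pct / 100
    winner = "index" if index_cost(sel) < scan_cost() else "scan"
    print("%5.1f%%  scan=%d  index=%d  winner=%s"
          % (pct, scan_cost(), index_cost(sel), winner))
```

With these (made-up) numbers the crossover lands near 5%; on a real system the break-even point moves with row width, clustering, and cache state, which is exactly why the optimizer, not a fixed rule, makes the call.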
Finally, a quick lesson on logical reads (LRs). While I *fully* advocate using LRs to measure performance improvements, they are not directly comparable between table-scan plans and indexed plans. A table-scan plan may show fewer LRs, but those LRs should generally be weighted more heavily, because a much larger fraction of them turn into physical reads, and the scan also tends to take up more of the cache. Regardless, what does 2000 have to do with anything?
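Here is a rough sketch of what that weighting looks like. The PR fractions and the penalty factor are illustrative assumptions I picked for the example, not figures from any real server:

```python
# Sketch: why raw logical-read (LR) counts don't compare across plan
# types. Weight each plan's LRs by an assumed physical-read fraction.
def weighted_cost(logical_reads, pr_fraction, pr_penalty=100):
    """Blend cheap cached reads with expensive physical reads.

    pr_fraction: assumed share of LRs that miss cache and hit disk.
    pr_penalty:  assumed cost of a physical read relative to a cached one.
    """
    physical = logical_reads * pr_fraction
    cached = logical_reads - physical
    return cached + physical * pr_penalty

# Table scan: fewer LRs, but most pages must come off disk (assumed 80%).
scan = weighted_cost(logical_reads=1_000, pr_fraction=0.80)
# Indexed plan: 3x the LRs, but mostly hot index/data pages (assumed 5%).
indexed = weighted_cost(logical_reads=3_000, pr_fraction=0.05)
print("scan=%d  indexed=%d" % (scan, indexed))
```

Under these assumptions the indexed plan wins comfortably despite reporting three times the logical reads - which is the whole point: compare LRs between two indexed plans or two scan plans, not across plan types.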