I am trying to work out the most effective way to write the paging queries for a web-based search engine.
For example, say I enter several conditions in the search to filter the results, there are 100 matches, and each result HTML page shows 10 of them.
The query would be something like
and rownum <= 10
If the user who entered the query clicks the next-page link, another query would run:
and rownum <= 20
and if the user goes straight to the last page it would be
and rownum <= 100
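As a side note, a bare "and rownum <= 20" predicate cannot by itself return rows 11-20 of an ordered result, because ROWNUM is assigned before the ORDER BY is applied; you would fetch 20 rows and discard the first 10 on the client. The usual Oracle idiom nests the ordered query so the page boundaries become bind variables. A sketch, with hypothetical table and column names and :lo/:hi as the page boundaries:

```sql
-- Hypothetical table/columns; :lo and :hi mark the requested page,
-- e.g. :lo = 10, :hi = 20 for the second page of 10 rows.
select *
  from ( select a.*, rownum rnum
           from ( select id, title
                    from documents
                   where category = :cat
                   order by score desc ) a
          where rownum <= :hi )
 where rnum > :lo;
```

Because the statement only differs in bind values from page to page, it is parsed once and reused from the shared pool on every page request.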
Now that would be one way; the other would be to fetch all 100 results in a single query and store them in a cache (instead of storing the results of 10 separate queries).
Which one would be more effective: one query per page, or one query for the whole result set? Taking into consideration the buffer cache and the shared pool...
In my opinion, the first option would be effective if the user only wants to see the first 20 results, since we would save buffer cache, but ineffective if the user wants to see them all, because we would be using buffer cache for every query
(and in the end all 100 results would be fetched anyway; or is Oracle intelligent enough to reuse the data already gathered in the buffer cache by the previous queries and just fetch the last 10 rows to append to the previous 90?).
The second option would be effective if the user wants to see all the results, but ineffective if the user only browses the first 20. Does anyone have a better solution?
Note: has anyone used interMedia Text and integrated it with a search engine? How's the performance?
[Edited by pando on 01-31-2001 at 08:47 AM]
I believe that if you build the cache yourself you can always do better, because you have more control.
However, instead of building everything yourself, maybe you can use some functionality of a marketed product; for example, I have heard that Oracle9 App Server has a good cache.
Returning to your example, one thing is not good:
to fetch row 101 you need to re-fetch 100 rows. Even if Oracle minimizes the load with its internal cache, the 100 fetches are still there.
So it is better to use a cursor cache: open the cursor and KEEP IT OPEN until you need to reuse it (in Oracle terms, this is the "cursors shared pool"). Then you must keep the cursor ID in your page. When the user requests row 101 from a cursor that has aged out of the pool, you reopen the cursor and scroll it forward to record 101. However, if the user requests row 101 from a cursor that has not aged out, you simply continue fetching from that cursor.
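The keep-the-cursor-open idea can be sketched with the DBMS_SQL package, which exposes an integer cursor ID that a session can hold on to and fetch from in batches. A rough sketch with hypothetical table and column names; note that in a stateless web tier you would still need a persistent database session to own the open cursor between page requests:

```sql
-- Rough PL/SQL sketch (hypothetical table/column names).
declare
  c       integer;
  ignore  integer;
  v_title varchar2(200);
begin
  c := dbms_sql.open_cursor;   -- c is the cursor ID to remember in the page
  dbms_sql.parse(c, 'select title from documents order by score desc',
                 dbms_sql.native);
  dbms_sql.define_column(c, 1, v_title, 200);
  ignore := dbms_sql.execute(c);
  -- fetch the next page of 10 rows from wherever the cursor stopped
  for i in 1 .. 10 loop
    exit when dbms_sql.fetch_rows(c) = 0;
    dbms_sql.column_value(c, 1, v_title);
    -- ... render v_title on the results page ...
  end loop;
  -- do NOT call dbms_sql.close_cursor(c) here:
  -- keep c open so the next request continues fetching where this one stopped
end;
```

If the cursor has aged out (or the session was lost), you reopen it the same way and call fetch_rows in a loop to skip past the rows already shown, which is the "rewind" case described above.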
Indeed, there are many variables that affect the effectiveness of the various solutions. If your users are often going to jump to the last page, then caching the entire result set may be a useful option. However, most applications show usage statistics that are heavily front-loaded; it's human nature. You are presented the first page, hopefully sorted by estimated usefulness. If you don't find what you want there, you will probably go down a page, maybe 2, less likely 3, possibly 4, but probably not more than 5. Therefore, caching the entire result set is usually not advisable. Still, exactly what your users will do determines your absolute best solution.
Given that user usage statistics are generally not known until *after* implementation (imagine that), I go for the 'scrolling window' solution: only ever fetch and display the data you need.
Here is a link to a previous discussion of the subject: