I wonder how Oracle internally manages the buffer cache in the following situation.
The buffer cache is small, so small that if someone does a
select * from table_name
on a table which has 100 million rows, all the data won't fit in the buffer cache. How does Oracle manage this? Let's put some figures on it: say the buffer cache can store data blocks for 10 million rows of this table. Will Oracle first return 10 million rows, then remove them from the buffer cache, then read from disk and put another 10 million rows into the buffer cache, remove them again, and so on until all 100 million rows have been fetched and returned to the user? I.e. 10 fetches instead of one?
[Edited by pando on 01-31-2001 at 08:30 AM]
Anyone got any thoughts on this? :-?
I think this problem can be compared with configuring a recycle buffer pool which is too small to cache the table assigned to it.
When performing a full table scan, the table does not fit in the recycle pool and I/O is heavily increased while Oracle builds the result set.
Yes, but how Oracle builds the result set is exactly my question: in 10 fetches, returned to the user 10 times, or how?
I think it goes something like this.
You do a full table scan.
Oracle reads a set of blocks (I think using db_file_multiblock_read_count) and puts the data in the db buffer cache.
Next, Oracle reads the next set of blocks and puts them in the db buffer cache.
Now the blocks already in the buffer cache are put on the list of free/available buffers for the buffer cache (the LRU list, if I remember correctly).
When you are alone on the db, the buffer cache will fill with data from the table, but that data will be replaced by new blocks during the following reads.
When you are not alone on the db, the select will not monopolise the db buffer cache. It will fill it, but when other sessions need db buffer cache blocks, they will reuse the blocks used by the previous read.
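The replacement behaviour described above can be sketched with a toy model. This is not Oracle's actual code, just an illustration under assumed mechanics: block numbers stand in for data blocks, an OrderedDict plays the role of the LRU list, and the batch size plays the role of db_file_multiblock_read_count.

```python
from collections import OrderedDict

def full_scan(total_blocks, cache_size, multiblock_read=8):
    """Toy model of a full table scan through a small LRU buffer cache."""
    cache = OrderedDict()      # block_no -> block contents (here: just the number)
    physical_reads = 0
    for start in range(0, total_blocks, multiblock_read):
        batch = range(start, min(start + multiblock_read, total_blocks))
        physical_reads += 1    # one multiblock read per batch
        for block in batch:
            if block in cache:
                cache.move_to_end(block)       # cache hit: refresh LRU position
            else:
                if len(cache) >= cache_size:
                    cache.popitem(last=False)  # evict least recently used block
                cache[block] = block
    return physical_reads, len(cache)

# A "table" of 100 blocks scanned through a cache that holds only 10:
reads, cached = full_scan(total_blocks=100, cache_size=10)
# The scan completes in one pass; the cache never grows past its capacity,
# old blocks are simply overwritten as new ones arrive.
```

Note that in the toy model the scan floods the whole cache; as I understand it, real Oracle avoids this by placing blocks read via a full table scan at the least-recently-used end of the LRU list, so a big scan does not push everything else out.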
I think this answers your question (Oracle v816, Concepts guide, Oracle instance -> Memory Architecture -> System Global Area (SGA)):
"The first time an Oracle user process requires a particular piece of data, it searches
for the data in the database buffer cache. If the process finds the data already in the
cache (a cache hit), it can read the data directly from memory. If the process cannot
find the data in the cache (a cache miss), it must copy the data block from a datafile
on disk into a buffer in the cache before accessing the data. "
I think it is not 10 separate fetches but more a continuous process of reading blocks and processing the result.
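That "continuous process" can be pictured as a pipeline: rows are produced block by block and handed onward as soon as they are read, rather than all 100 million being materialised first. A minimal sketch of the idea, with made-up block/row shapes, using a Python generator:

```python
def scan_rows(blocks, rows_per_block=100):
    """Toy model: produce rows block by block instead of all at once."""
    for block in blocks:
        # read one block's worth of rows and hand them on immediately
        for r in range(rows_per_block):
            yield (block, r)

# The consumer pulls rows as they become available; nothing here forces
# the producer to hold the whole table in memory at once.
first_rows = []
for row in scan_rows(range(3)):
    first_rows.append(row)
    if len(first_rows) == 5:
        break
```

The consumer above stops after 5 rows and the producer never even touches blocks 1 and 2; that laziness is the essence of streaming a result set instead of buffering it whole.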
"When you are alone on the db, the buffer cache will fill with data from the table, but that data will be replaced by new blocks during the following reads."
OK, so will the data that fills the buffer first be sent to the user, then those buffers overwritten by more data which is sent to the user in turn, then overwritten again, and so on until all data has been sent to the user? (10 million rows each go?)
All data will be sent to the user in one go, i.e. all 100 million rows.
If I were an Oracle programmer, I would have little choice but to:
read data into a memory array, i.e. a buffer
send data from the buffer to a pipe/socket/etc. (i.e. to the user)
The only question is: does it make sense to search for free space in the LRU buffer, as that consumes some time (LRU latch contention, searching for free space in the buffer, etc.).
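The two steps above form a simple fill-and-flush loop: accumulate rows in a fixed-size array, push the array to the client, reuse the array. A hedged sketch, with invented names (`send_table`, `array_size`) standing in for whatever the server actually does:

```python
import io

def send_table(rows, out, array_size=4):
    """Toy model: stream rows to a client through a small, reused buffer."""
    buffer = []
    round_trips = 0
    for row in rows:
        buffer.append(row)
        if len(buffer) == array_size:
            out.write("".join(f"{r}\n" for r in buffer))  # flush full buffer
            round_trips += 1
            buffer.clear()                                # reuse the same buffer
    if buffer:                                            # flush the partial tail
        out.write("".join(f"{r}\n" for r in buffer))
        round_trips += 1
    return round_trips

# 10 rows through a 4-row buffer: every row reaches the client,
# but the server never holds more than 4 rows at once.
out = io.StringIO()
trips = send_table(range(10), out, array_size=4)
```

The memory footprint is bounded by `array_size`, not by the table size, which is the point of the answer above: the full 100 million rows are delivered as one result set without ever residing in memory together.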
And if I were an Oracle programmer, I would still do that LRU processing, because of the probability that the data in the cache may be hit by another user/query.
However, there are hints that may tell Oracle a little bit about how to handle storing the data in the cache:
CACHE, NOCACHE, FULL
Hm, my question is not about CACHE, NOCACHE or FULL; my question is how Oracle manages this kind of situation internally. How does Oracle send the data back to the user if we are selecting 100 million rows and the buffer cache can't fit more than 10 million rows' worth of data? Does Oracle send 10 million at a time, or how?