I wonder how Oracle manages the buffer cache internally in the following situation.
The buffer cache is small, so small that if someone does a
select * from table_name
on a table which has 100 million rows, all the data won't fit in the buffer cache. How does Oracle manage this? Let's put some figures on it: the buffer cache can store the data blocks of 10 million rows of this table. Will Oracle first return 10 million rows, then remove them from the buffer cache, then read from disk and put another 10 million rows into the buffer cache, remove them again, and so on until all 100 million rows have been fetched and returned to the user? I.e. 10 fetches instead of one?
Oracle reads a set of blocks (I think using db_file_multiblock_read_count) and puts the data in the db buffer cache.
Then Oracle reads the next set of blocks and puts them in the db buffer cache too.
The blocks already in the buffer cache are put on the list of free/available buffers for the buffer cache (I don't remember the exact name).
When you are alone on the db, the buffer cache will fill with the data from the table, but those blocks will be replaced by new blocks during the following reads.
When you are not alone on the db, the select will not monopolise the db buffer cache. It will fill it, but when other sessions need db buffer cache blocks, they will reuse the blocks used by the previous read.
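This is not Oracle's actual replacement algorithm (real full scans of large tables get special treatment on the LRU list, and newer versions may bypass the cache with direct path reads), but the recycling described above can be sketched with a toy LRU cache. Assuming a cache that holds 4 buffers and a "table" of 12 blocks:

```python
from collections import OrderedDict

CACHE_SIZE = 4          # buffers in the toy cache (far smaller than the table)
TABLE_BLOCKS = 12       # blocks in the "table" being full-scanned

cache = OrderedDict()   # block_id -> data; ordered least recently used first
disk_reads = 0

def read_block(block_id):
    """Return a block, copying it from 'disk' on a cache miss."""
    global disk_reads
    if block_id in cache:               # cache hit: read straight from memory
        cache.move_to_end(block_id)
        return cache[block_id]
    disk_reads += 1                     # cache miss: physical read from disk
    if len(cache) >= CACHE_SIZE:        # cache full: evict the least recently used buffer
        cache.popitem(last=False)
    cache[block_id] = f"data of block {block_id}"
    return cache[block_id]

# Full table scan: every block is read once; the small cache keeps cycling.
for b in range(TABLE_BLOCKS):
    read_block(b)

print(disk_reads)       # 12 physical reads
print(sorted(cache))    # only the last 4 blocks remain: [8, 9, 10, 11]
```

The point of the sketch: the scan never needs the whole table in memory at once; each batch of blocks overwrites buffers freed by earlier batches, which is why the scan completes even though the cache is a tenth of the table's size.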
I think this answers your question (Oracle v816, Concepts guide, Oracle instance -> Memory Architecture -> System Global Area (SGA)):
"it must copy the data block from a datafile
on disk into a buffer in the cache before accessing the data."
"The first time an Oracle user process requires a particular piece of data, it searches
for the data in the database buffer cache. If the process finds the data already in the
cache (a cache hit), it can read the data directly from memory. If the process cannot
find the data in the cache (a cache miss), it must copy the data block from a datafile
on disk into a buffer in the cache before accessing the data. "
"when you are alone on the db, the buffer cache will fill with the data from the table, but will be replaced by new blocks during the following reads."
OK, so will the data that fills the buffers first be sent to the user, then those buffers overwritten by more data, which is sent to the user in turn, then overwritten again, and so on until all the data has been sent to the user? (10 million rows each go)
All data will be sent to the user in one go, i.e. all 100 million rows.
Hmm, my question is not about cache, nocache or full; my question is how Oracle manages this kind of situation internally. How does Oracle send the data back to the user if we are selecting 100 million rows and the buffer cache can't fit more than 10 million rows' worth of data? Does Oracle send 10 million at a time, or how?
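For what it's worth, the client never receives all rows "in one go" as a single transfer: in practice rows are streamed to the client in small fetch batches (the arraysize in OCI or python-oracledb), while the server keeps reusing a small set of buffers behind the scenes. A toy sketch of that pattern (not Oracle internals), assuming 100 rows, a server buffer that holds only 10 rows, and a 15-row client fetch size:

```python
def server_rows(total_rows, buffer_rows):
    """Generator standing in for the server: it can only hold
    buffer_rows rows in its 'buffer cache' at a time."""
    for start in range(0, total_rows, buffer_rows):
        # Refill the buffer from "disk", overwriting the previous contents.
        buffered = list(range(start, min(start + buffer_rows, total_rows)))
        yield from buffered          # stream rows out; the buffer is then reused

def client_fetch(cursor, arraysize):
    """Client-side fetch loop: pulls rows arraysize at a time."""
    batch, batches = [], []
    for row in cursor:
        batch.append(row)
        if len(batch) == arraysize:
            batches.append(batch)
            batch = []
    if batch:
        batches.append(batch)        # final, possibly short, batch
    return batches

batches = client_fetch(server_rows(total_rows=100, buffer_rows=10), arraysize=15)
print(len(batches))             # 7 fetch round-trips: 6 batches of 15 rows, then 10
print(sum(map(len, batches)))   # all 100 rows arrive at the client
```

Note that the server-side buffer size and the client-side fetch size are independent: the client sees a steady stream of fixed-size fetches regardless of how the server cycles its buffers.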