I have posted this question before and received advice and opinions on caching tables in the SGA.
Here's the picture.
I have an IBM AIX server with 8 GB of memory.
I have ONE Oracle database with the following SGA:
Total SGA:     1183438972 bytes
Variable size: 116860 bytes
DB buffers:    1073741824 bytes
Redo buffers:  17203 bytes
I need to cache four tables of sizes 300 MB, 500 MB, 200 MB, and 100 MB.
What I understand is that I can increase the SGA to 4 GB (i.e., 50% of total memory).
If so, what would be an ideal SGA configuration?
These 4 tables are very read-intensive.
To what value should I raise DB_BLOCK_BUFFERS so that I can cache these 4 tables without any side effects?
Do I need to alter the other values given above?
Thanks in advance,
If it's READ-intensive, I would increase DB_BLOCK_BUFFERS enough to hold these tables and avoid the I/O. If it's DML-oriented, you need to justify the other parts of the SGA's memory as well.
Not sure, does that make sense?
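As a rough sketch of what that sizing works out to: the four tables total about 1.1 GB, so a buffer cache of roughly 2 GB would hold them with headroom. DB_BLOCK_BUFFERS is expressed in blocks, not bytes, so the value depends on your block size (8192 here is an assumption; check your own):

```sql
-- Check the actual block size first (standard Oracle dynamic view):
SELECT value FROM v$parameter WHERE name = 'db_block_size';

-- init.ora sketch, ASSUMING an 8 KB block size:
-- target cache of 2 GB  ->  2147483648 / 8192 = 262144 buffers
-- db_block_buffers = 262144
```

The instance must be restarted for DB_BLOCK_BUFFERS to take effect, so plan the change for a maintenance window.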
The server has 8 GB of RAM. How much of this is allocated to the application, and how much to the UNIX file cache? Remember that apart from the SGA you need to set aside some memory for the PGA components.
After taking out all the other component assignments, see how much you are left with. I would then assign sufficient memory to the shared pool and the rest to the buffers. Monitor your buffer cache quality, SQL pin-hit ratio, and redo buffer quality, and fine-tune the memory assigned to the SGA components.

Even though a table is 100 MB, the reads will depend on the application. You may read only 50 MB of the 100 MB, based on the selection criteria. So leave the management of that table in the SGA to Oracle rather than pinning the entire 100 MB in the SGA and making other queries suffer.
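The monitoring step above can be sketched with the classic hit-ratio query against V$SYSSTAT (the statistic names are standard Oracle; treat the exact target percentage as workload-dependent rather than a fixed rule):

```sql
-- Buffer cache hit ratio: fraction of block gets satisfied
-- without a physical read
SELECT ROUND(
         (1 - phys.value / (db.value + cons.value)) * 100, 2
       ) AS buffer_hit_pct
  FROM v$sysstat phys, v$sysstat db, v$sysstat cons
 WHERE phys.name = 'physical reads'
   AND db.name   = 'db block gets'
   AND cons.name = 'consistent gets';
```

Run it at intervals during peak load; the trend after a memory change matters more than any single reading.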
Hope this helps.
The sum of the table sizes is 1.1 GB, which is about the size of your current buffer cache. I suggest you double the buffer cache value.
Caching those tables in the SGA (buffer pool) would help only if you access those tables exclusively with full table scans. I doubt that is the case.
If those tables are accessed via indexes (most likely), then you should cache all of those indexes too (I doubt that is feasible if there are a couple of indexes on each of those tables). And even if it is, I doubt that *all* rows of those tables will be accessed very frequently; usually some rows are visited far more often than others. In this case I would simply supply enough buffer cache and let Oracle fill it up with those tables' blocks over time.
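If you do still decide to favour particular tables in the cache, Oracle has built-in mechanisms for it, sketched below (MY_TABLE is a placeholder name; the KEEP pool must be configured in init.ora before the second option works):

```sql
-- Option 1: mark the table so full-scan blocks are kept at the
-- most-recently-used end of the default buffer cache
ALTER TABLE my_table CACHE;

-- Option 2: assign the table to a dedicated KEEP buffer pool
-- (requires the KEEP pool to be sized in init.ora first, e.g. via
-- buffer_pool_keep on 8i or db_keep_cache_size on later releases)
ALTER TABLE my_table STORAGE (BUFFER_POOL KEEP);
```

Either way, measure before and after; pinning blocks for one set of tables takes cache away from everything else.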
But I'm wondering what makes you believe you really need to cache all those tables in whole. Have you checked your buffer get hit ratio? Can you let us know this percentage and the nature of your database (data warehouse, ERP, transactional, ...)?