Well, the reason I am asking these questions is that several weeks ago a few users were getting Oracle errors stating that their client could not allocate some number of bytes to create an Oracle connection (it was an ORA-### error). I looked it up and it said to increase the shared pool size (which was set at 18000000). After monitoring the free memory for about a week with the query

select name "PARAMETER",
       to_char(bytes, '999,999,999,999') "VALUE"
  from v$sgastat
 where name = 'free memory';

I found that free memory was usually between 3 MB and as low as 100 KB, so I increased it to shared_pool_size = 72000000. But a couple of clients have gotten the error again, and when I check free memory now I find it varies from 19 MB free to less than 1 MB free. I would have thought that increasing SHARED_POOL_SIZE by such a large amount would have kept me from seeing this error for a long time. From looking at the other post, I am wondering if I also need to increase my db_block_buffers, which is currently set at 6400.
Also, the app that is getting the error is written in Access, and we also have several VB apps and two C++ apps.
Check the code that runs in the SGA; it looks like the query has been poorly written. Another thing you could do is pin the frequently used objects in the SGA to reduce the reloads. Are the tables they are using analyzed? If not, analyze them and check for chained rows, etc. Run an audit on the particular schema and check the audit trail for more details.
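A sketch of the pinning and analyze steps described above, assuming the DBMS_SHARED_POOL package has been installed (it is created by dbmspool.sql, run as SYS); the schema and object names here are only placeholders:

```sql
-- Pin a frequently used package in the shared pool ('P' = package)
execute dbms_shared_pool.keep('SCOTT.APP_PKG', 'P');

-- Gather statistics on a table and check its chained-row count
analyze table scott.emp compute statistics;
select table_name, chain_cnt
  from dba_tables
 where owner = 'SCOTT';

-- A high reloads-to-pins ratio suggests objects are aging out of the pool
select sum(pins) "PINS", sum(reloads) "RELOADS"
  from v$librarycache;
```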
There is no relationship between db_block_buffers and shared_pool other than they both live in the SGA.
db_block_buffers controls the amount of memory you will use for data caching, for example the blocks read and modified by an update statement. In order to evaluate whether you need to increase your db_block_buffers, you should examine your hit ratio:
select (1 - (sum(decode(name, 'physical reads', value, 0)) /
            (sum(decode(name, 'db block gets', value, 0)) +
             sum(decode(name, 'consistent gets', value, 0))))) * 100 "buff hit"
  from v$sysstat;
The hit ratio should be above 98%.
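The formula can be sanity-checked with made-up numbers; for instance, 1,000 physical reads against 9,000 db block gets plus 40,000 consistent gets works out to just under 98%:

```sql
-- (1 - physical reads / (db block gets + consistent gets)) * 100
select (1 - 1000 / (9000 + 40000)) * 100 "buff hit"
  from dual;
-- returns roughly 97.96
```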
shared_pool controls the amount of memory you will use for other operations: library cache, object cache, etc.
The answer to your question no. 2 is yes. Here the 'free memory' refers to the shared_pool.
I have a similar problem with my database too. I have to do alter system flush shared_pool almost every day. The alternative is to increase shared_pool, but that is constrained by the server memory. Since we don't have enough memory on the server, I have to flush it. Of course, you should also tune your SQL statements to make them as optimal and efficient as possible.
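The flush mentioned above is a single statement (it requires the ALTER SYSTEM privilege, and the relief is temporary, since the pool fragments again as new statements are parsed):

```sql
-- Frees all unpinned memory in the shared pool; pinned objects are untouched
alter system flush shared_pool;
```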
Could you please post your initialization file and
how much memory your machine has?
Also run something like:
select table_name, num_rows, blocks
  from dba_tables
 where owner not in ('SYS', 'SYSTEM');
You might want to consider using the recycle and keep buffer pools as well. Is this an OLTP database or a DSS system?
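A sketch of carving out the keep and recycle pools, assuming Oracle 8 init.ora syntax; the table name is only an example:

```sql
-- init.ora: both pools are allocated out of db_block_buffers
-- buffer_pool_keep    = (buffers:1000, lru_latches:1)
-- buffer_pool_recycle = (buffers:500,  lru_latches:1)

-- Then direct a small, hot table to the keep pool:
alter table scott.emp storage (buffer_pool keep);
```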
What is your db_block_size? The size of your buffer cache is db_block_size * db_block_buffers (with your 6400 buffers and, say, an 8 KB block size, that is 6400 * 8192 = 52,428,800 bytes, about 50 MB; the total SGA also includes the shared pool and the redo log buffer). What is your current hit rate for your caches?
You may want to run a bstat/estat during a couple of hours of high activity and examine the results.
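If you mean the standard utlbstat/utlestat scripts, the usual sequence (run from SQL*Plus as a DBA around the busy window) looks like this:

```sql
@?/rdbms/admin/utlbstat.sql
-- ...let the instance run under load for a couple of hours...
@?/rdbms/admin/utlestat.sql
-- the delta statistics are written to report.txt
```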
There are also some Oracle parameters you can specify that will give you some extended statistics to determine the best size for db_block_buffers (DB_BLOCK_LRU_STATISTICS and DB_BLOCK_LRU_EXTENDED_STATISTICS). Make sure that you have the parameter TIMED_STATISTICS = TRUE.
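The relevant init.ora settings might look like the fragment below; the value for the extended statistics parameter (the number of extra buffers to simulate) is only illustrative, and both LRU parameters add overhead, so leave them on only while gathering data:

```
timed_statistics                 = true
db_block_lru_statistics          = true
db_block_lru_extended_statistics = 1000
```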
Senior Database Administrator