On parallel_max_servers, WebLogic, and Memory Leaks
We have an existing three-node RAC that is being accessed through a WebLogic application server. Initially, parallel_max_servers on the database was set to its default value, which was 5. During that time we were getting lots of memory leaks and out-of-memory errors from the WebLogic server. Then at one point we got an ORA-600 error and were advised to increase parallel_max_servers. We increased it to 20 and observed that the out-of-memory errors from WebLogic became less frequent, so we decided to increase the parameter further, and just as we expected, the memory problems we were getting from WebLogic went away.
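For anyone wanting to check or change this on their own system, here is a minimal sketch in SQL*Plus. This assumes SYSDBA privileges and an spfile-based instance; the value 20 is just the one we tried, not a recommendation, and syntax can vary by Oracle version:

```sql
-- Show the current value of the parameter
SHOW PARAMETER parallel_max_servers;

-- Raise it on all RAC instances (value is illustrative; SCOPE=BOTH
-- changes both the running instance and the spfile)
ALTER SYSTEM SET parallel_max_servers = 20 SCOPE = BOTH SID = '*';
```

On older releases without an spfile you would instead edit the init.ora and restart the instances.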
Can anybody provide me an explanation from the standpoint of WebLogic?
You need to include version numbers, both for RAC and WebLogic. Also, what is the first argument of the ORA-600?
Is the web server on a separate machine from the DB?
There is a bug with parallel query and RAC.
I'm stmontgo and I approve of this message
I think you missed the point. What I am asking is: what is the relationship between the Oracle parameter parallel_max_servers and the WebLogic memory leak, such that when we increased parallel_max_servers the leak became less and less?
We are using 126.96.36.199 and WebLogic 6.
It's to do with bugs in their code. As DBAs we cannot conclude anything from the solution; it's all about their C/C++ code and the bugs in it. Just follow what support says in the case of an ORA-600, or for any other bug like that.
Last edited by nagarjuna; 04-30-2004 at 09:38 PM.