Since this is 10g, how about reading what ADDM says? And the AWR reports? (OK, AWR is similar to Statspack, but since this is 10g, let's use the 10g features.)
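For reference, a minimal sketch of getting those reports on 10g (standard scripts shipped under `?/rdbms/admin`; run from SQL*Plus as a DBA user):

```sql
-- Take a manual AWR snapshot before and after the test run,
-- so the report interval brackets exactly the workload of interest.
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- ... run the test ...

EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- Then generate the reports, picking the two snapshot IDs when prompted:
-- @?/rdbms/admin/awrrpt.sql     -- AWR report (Statspack-style)
-- @?/rdbms/admin/addmrpt.sql    -- ADDM findings and recommendations
```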
Have you tried using reverse-key indexes? They have quite a few limitations, but if you can work around those you might be able to reduce your ITL waits.
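For what it's worth, the DDL is a one-liner either way (index and table names below are placeholders, not from the thread); the big limitation to keep in mind is that a reverse-key index cannot be used for index range scans:

```sql
-- Rebuild an existing index as reverse-key to spread hot right-hand
-- leaf-block contention across the index:
ALTER INDEX sep_roles_ix REBUILD REVERSE;

-- Or create one from scratch:
CREATE INDEX sep_roles_ix2 ON sep_roles (dsrc_acct_id) REVERSE;
```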
Hi Tamil, I don't think I can help.
AFAICS, it's stuffed by the one-row-at-a-time approach. The SQL in Statspack would seem to show that much of the "db file sequential read" wait time is due to the "does this row exist?" part of the logic.
On top of that, I'm worried by queries like:
"SELECT MAX(ROLE_ID) FROM SEP_ROLES WHERE DSRC_ACCT_ID = :1"
That smells like disaster for concurrent updates.
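To spell out why that pattern is a hazard (my own illustration, not from the thread; column lists are placeholders): two concurrent sessions can read the same MAX and both try to use the same "next" value. A sequence is the usual fix, though note it gives globally unique values rather than a dense per-account numbering, so it only applies if the app doesn't depend on that:

```sql
-- Session A and Session B both run this at (almost) the same moment:
SELECT MAX(role_id) FROM sep_roles WHERE dsrc_acct_id = :1;
-- Both see, say, 10; both insert 11 for the same account:
-- duplicate rows, or a unique-constraint violation and retry storm.

-- A sequence hands out unique values without serializing sessions:
CREATE SEQUENCE sep_roles_seq CACHE 1000;
INSERT INTO sep_roles (role_id, dsrc_acct_id)
VALUES (sep_roles_seq.NEXTVAL, :1);
```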
"The power of instruction is seldom of much efficacy except in those happy dispositions where it is almost superfluous" - Gibbon, quoted by R.P.Feynman
Thanks a lot, Tamil. I'll go number by number:
#1 - my first try was with 8K blocks and I saw more contention - on 8K blocks, db file sequential reads are 4% higher. I can certainly switch to 8K again and retest once I have better throughput.
#2 - The smallest log buffer that works for me is 10M - below that I get waits on redo requests and redo logs. I'll decrease the log buffer and retest it together with #3. I hadn't thought about that - let me try it.
#4 - I use raw devices. According to the networking people, the aio server is properly configured and Oracle is supposed to use it - unfortunately I don't have the impression that it works right, but I don't know how to check. FILESYSTEMIO_OPTIONS is set to asynch per the IBM doc from September 2005; before that I used setall. What is your experience with that parameter?
#5 - I'll disable MTS - I didn't configure it myself - maybe one of my colleagues did in the last few days...
#6 - the app is very CPU-intensive itself and cannot run on the DB server - I have already changed SDU and TDU to 15000 (10x the 1500 MTU).
#7 - you are right - UGA shows high numbers in the Statspack report, I saw them - I'll try tuning it instead of the PGA - one of the most frequent ADDM recommendations is to increase the PGA to 6G, which I cannot do...
#8 - I tried to put all indexes and data for the insert-only tables into the RECYCLE buffer pool, sized at 1G - performance dropped by 12.5% and I stopped the test. I have 12G RAM on the server. Currently 6.8G is allocated to the SGA, and I still have contention on the library cache and shared pool. I may try dividing it up: 3G for a KEEP buffer pool holding all the "select" indexes, 1.5G for the DEFAULT pool, 2G for RECYCLE holding the insert-only tables, and 2G for the UGA.
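A sketch of what that split would look like in practice (object names are placeholders; sizes are the ones mentioned above, and assigning an object to a pool only governs where its blocks are cached from then on - it doesn't move anything immediately):

```sql
-- Size the extra buffer pools (spfile / ALTER SYSTEM):
ALTER SYSTEM SET db_keep_cache_size    = 3G SCOPE=BOTH;
ALTER SYSTEM SET db_recycle_cache_size = 2G SCOPE=BOTH;

-- Pin the hot "select" indexes in the KEEP pool:
ALTER INDEX sep_roles_ix STORAGE (BUFFER_POOL KEEP);

-- Send the insert-only tables to the RECYCLE pool:
ALTER TABLE audit_log STORAGE (BUFFER_POOL RECYCLE);
```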
#9 - I'll drop the freelists to 10 with 3 groups. In my testing I saw that changing the freelists on indexes helps - it relieves the TX enqueues - are you sure they are not in use? What about freelist groups for indexes?
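For reference, the syntax for those settings (names are placeholders; note that none of this matters in an ASSM tablespace, where freelists are managed automatically, and for an index the change only takes effect on a rebuild):

```sql
-- Freelists on a table can be changed in place:
ALTER TABLE sep_roles STORAGE (FREELISTS 10 FREELIST GROUPS 3);

-- For an index, rebuild with the new storage settings:
ALTER INDEX sep_roles_ix REBUILD
  STORAGE (FREELISTS 10 FREELIST GROUPS 3);
```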
All indexes are B-tree indexes, nothing is clustered, some of them are unique, but almost all queries use ranges. A lot of the indexes are already reversed, but the contention is still there.
I'm starting another big series of tests tonight and I'll post my results. It will include compressing some indexes as well.
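Index compression, for anyone following along, is a rebuild option; it helps when the leading column(s) repeat a lot. A sketch, with placeholder names - it can be worth validating the structure first, since INDEX_STATS then reports the optimal prefix length and the space it would save:

```sql
-- Check how much a compressed rebuild would save:
ANALYZE INDEX sep_roles_ix VALIDATE STRUCTURE;
SELECT opt_cmpr_count, opt_cmpr_pctsave FROM index_stats;

-- Rebuild with the leading column compressed (prefix length 1):
ALTER INDEX sep_roles_ix REBUILD COMPRESS 1;
```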
Thanks really a lot for the help.
mj