APPEND is kinda useless with indexes anyway.
P.S. To attach something you need to be in the advanced reply section - then it's at the bottom.
The file is uploaded.
My insert rate in this file is around 1,900 inserts a minute - very bad according to the business. If I had a way to minimize the sequential reads, that would be great!
Thanks a lot for the help, mj
APPEND is for INSERT ... SELECT. For a heavy-insert system, even where APPEND is possible, it will make your segment "HUGE".
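For illustration, a minimal direct-path insert sketch (the table names are placeholders):

```sql
-- Direct-path insert: rows are loaded above the high-water mark,
-- so space freed by deletes below it is never reused and the segment only grows.
INSERT /*+ APPEND */ INTO target_table
SELECT * FROM staging_table;
COMMIT;  -- the session cannot query the table again until it commits
```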
I think the attached file is not complete.
Originally Posted by mjschwenger
I do not see init parameters.
Sorry about that - the site accepts only up to 100 MB, and even the file for 5 minutes is bigger.
Here are the init.ora parameters:
Thanks a lot, mj
Parameter Name Begin value (if different)
----------------------------- --------------------------------- --
control_files /dev/rlv2_ctl1, /dev/rlv2_ctl2, /
dispatchers (PROTOCOL=TCP) (SERVICE=oraperfXD
event 10359 trace name context forever,
This is the part which is missing from the previous statspack - from the place where it stopped. (I could have zipped it...)
Sorry and thanks again, mj
Just following the discussion, very interesting. This is probably beyond me, but have you also done a sql trace for a couple of minutes on one of these app sessions? Are there any triggers, or too many indexes involved on the inserted table? Are the inserts the only (or main) activity going on in the database?
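If it helps, one common way to capture such a trace is the standard 10046 event, run from the session under test:

```sql
-- Level 12 captures wait events and bind variable values
ALTER SESSION SET events '10046 trace name context forever, level 12';
-- ... let the application run for a couple of minutes ...
ALTER SESSION SET events '10046 trace name context off';
```

The resulting trace file in user_dump_dest can then be summarized with tkprof, e.g. `tkprof <tracefile>.trc out.txt sort=exeela` (file names here are illustrative).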
"False data can act only as a distraction. Therefore. I shall refuse to perceive you." - Bomb #20
The situation is also way above me - I have never seen Oracle behave this way... For me this has always been the most scalable RDBMS!
I have traces, of course. Below is the summary of the last one, done for a single app instance. No triggers, no FKs, just a couple of indexes on the big insert tables, as every session also selects from them.
Both selects and inserts are intensive. It's heavy OLTP (huge insert rate, multiple small selects, etc.), but imagine that the DB is treated as a huge repository for all records inserted and evaluated. Every new record is compared against the old entries. If it exists, the re-evaluation is performed; if not, it is inserted as new.
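As a sketch, that compare-then-re-evaluate-or-insert logic could be collapsed into a single round trip with MERGE (table and column names here are entirely hypothetical):

```sql
MERGE INTO records r
USING (SELECT :id AS id, :payload AS payload FROM dual) new_rec
   ON (r.id = new_rec.id)
WHEN MATCHED THEN
  UPDATE SET r.payload    = new_rec.payload,
             r.eval_count = r.eval_count + 1   -- the "re-evaluation" step
WHEN NOT MATCHED THEN
  INSERT (id, payload, eval_count)
  VALUES (new_rec.id, new_rec.payload, 1);     -- inserted as new
```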
Thanks a lot, mj
Have a couple of questions for you:
Quoting from your previous reply:
"a couple of indexes on the big insert tables as every session also selects from them"
- What type of indexes are these? Bitmap? B*tree?
- Are they unique indexes?
"Every new records is compared towards the old entries. If exists then the re-evaluation is performed, If not - it is inserted as new"
- The queries submitted by the application for comparison, do they do index range scans or unique scans?
"a couple of indexes on the big insert tables"
What values go into these indexes? Are they monotonic values? Values that are consistently increasing and never decreasing?
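If the answer is yes (e.g. sequence- or timestamp-based keys), every session competes for the same right-most leaf block. One possible option, assuming the column is never range-scanned, is a reverse-key index (index and column names are illustrative):

```sql
-- Reversing the key bytes spreads concurrent inserts across many leaf blocks,
-- at the cost of making index range scans on this column impossible.
CREATE INDEX rec_id_ix ON records (id) REVERSE;
```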
I went through your statspack report.
My observations are given below:
1. db_block_size = 4 KB. I feel 4 KB is very small. This value might have been good 10 years ago, when we had slow UNIX servers, but today most hardware is well designed to handle an 8 KB block size, particularly if you use Veritas Volume Manager or IBM's volume manager (I have been using both, as well as IBM servers). My suggestion is to use an 8 KB block size. Do not set it to more than 8 KB on an IBM server, or you will see worse performance (this is my personal experience).
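Since db_block_size cannot be changed in place, one possible experiment on 9i and later (names and sizes here are made up) is a secondary 8 KB cache plus an 8 KB tablespace, which lets you test the larger block size without rebuilding the database:

```sql
ALTER SYSTEM SET db_8k_cache_size = 256M;      -- dedicated cache for 8 KB blocks
CREATE TABLESPACE data_8k
  DATAFILE '/dev/rlv2_data8k' SIZE 4G          -- placeholder raw device name
  BLOCKSIZE 8K;
```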
2. The log_buffer is set to 200 MB. This is huge; I have never seen a database with such a huge value since 1987. Since 1.73 MB of redo is generated per second, I would suggest you decrease the log_buffer value to 4 MB.
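log_buffer is a static parameter, so a sketch of the change (assuming an spfile is in use; otherwise edit init.ora and restart):

```sql
ALTER SYSTEM SET log_buffer = 4194304 SCOPE = SPFILE;  -- 4 MB; takes effect after restart
```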
3. The log_parallelism default value in 10g is 2. You can increase it to 8 or 12 so that different concurrent sessions start using different log buffers. This will increase transaction throughput. The transaction rate is 34.29 per second in your statspack report; I would expect around 1000 on a well-tuned/configured IBM P-series server with 8 CPUs. I had an experience similar to yours two years ago: 120 concurrent sessions (through MQ Series) pumped data into a 400 GB database. I am still using log_parallelism 8 on my 9i production server.
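For reference, a sketch of the change (handling of this parameter varies by release - it is documented as LOG_PARALLELISM in 9.2 - so verify it against your version first):

```sql
ALTER SYSTEM SET log_parallelism = 8 SCOPE = SPFILE;  -- static; needs a restart
```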
4. Raw devices and file systems. I believe you are using raw devices (the data file names show that). Did you enable async I/O on your server? This is very important on IBM servers running AIX. Alternatively, you can experiment with a file system for the Oracle data files.
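The relevant init parameters, as a sketch (both are static, so they need a restart to take effect):

```sql
ALTER SYSTEM SET disk_asynch_io = TRUE SCOPE = SPFILE;          -- async I/O to raw devices
ALTER SYSTEM SET filesystemio_options = SETALL SCOPE = SPFILE;  -- async + direct I/O on file systems
```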
5. Stop using MTS. There is no need to configure MTS because memory is cheap today; you can add memory to your server whenever you want. MTS will definitely slow down your transaction throughput.
6. If the application is running on the same DB server, do not use a service name to connect to the database. Alternatively, if the data is sent from an app server, tune the tnsnames.ora file. Consider adding the SDU/TDU parameters.
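A sketch of a tnsnames.ora entry with SDU/TDU (the alias, host, and service name are placeholders; the sizes should also be matched on the listener side):

```
ORAPERF =
  (DESCRIPTION =
    (SDU = 8192)
    (TDU = 8192)
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = oraperf))
  )
```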
7. Consider disabling automatic PGA memory management. Maybe for your workload, UGA-based allocation may work well.
8. Divide the buffer cache into 2 parts - keep and default. Use "keep" for indexes and "default" for table data.
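As a sketch (the size and object names are illustrative, and db_keep_cache_size assumes the cache is configured via db_cache_size):

```sql
ALTER SYSTEM SET db_keep_cache_size = 512M SCOPE = SPFILE;
ALTER INDEX rec_id_ix STORAGE (BUFFER_POOL KEEP);    -- pin hot index blocks
ALTER TABLE records   STORAGE (BUFFER_POOL DEFAULT); -- table data stays in DEFAULT
```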
9. As I said earlier, I still believe PCTFREE, FREELISTS and INITRANS play a major role in insert operations. You need to increase those values slowly for each table and index (FREELISTS is not used in indexes even if it is set).
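For illustration only (object names are hypothetical and the values are starting points to adjust gradually; note these settings apply to newly formatted blocks, not existing ones):

```sql
ALTER TABLE records PCTFREE 10 INITRANS 8
  STORAGE (FREELISTS 8);
ALTER INDEX rec_id_ix INITRANS 8;
```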
Where are you located?
I am expecting some inputs from Modic, Dapi & Slimdave. Guys, are you there?
Last edited by tamilselvan; 01-14-2006 at 10:02 AM.