-
The Informatica commit interval is 200000. I have a test environment, but not enough volume available there.
As the insert is part of a daily job, I have the luxury of testing it in production itself. I have planned the two things below for the next load.
1. Decrease the commit interval to 50000
2. Drop both the Indexes
The other insert statement, which executes alongside the one I am referring to, made me think about decreasing the commit interval because of the free buffer waits.
Code:
( :1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13, :14, :15, :16, :17,
:18, :19, :20, :21, :22, :23, :24, :25, :26, :27, :28, :29, :30, :31, :32,
:33, :34, :35, :36, :37, :38, :39, :40, :41, :42, :43, :44, :45, :46, :47,
:48, :49, :50, :51, :52, :53, :54, :55, :56, :57, :58, :59, :60, :61, :62,
:63)
call       count       cpu    elapsed       disk      query    current       rows
-------  -------  --------  ---------  ---------  ---------  ---------  ---------
Parse          0      0.00       0.00          0          0          0          0
Execute     1204     73.38    2676.67      44854       9562     243505      77056
Fetch          0      0.00       0.00          0          0          0          0
-------  -------  --------  ---------  ---------  ---------  ---------  ---------
total       1204     73.38    2676.67      44854       9562     243505      77056
Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: 83
Elapsed times include waiting on following events:
Event waited on                            Times Waited   Max. Wait  Total Waited
-----------------------------------------  ------------  ----------  ------------
SQL*Net more data from client                      7224        0.00          0.10
db file sequential read                           44852        1.23       1189.41
SQL*Net message to client                          1204        0.00          0.00
SQL*Net message from client                        1204        0.00          1.12
free buffer waits                                 11283        0.98        452.76
latch: object queue header operation                  6        0.00          0.00
latch free                                            2        0.00          0.00
log file switch completion                            1        0.17          0.17
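For context, db file sequential read accounts for roughly 1189 s of the 2677 s elapsed (about 44%) and free buffer waits for about 453 s (about 17%). To see whether the free buffer waits are instance-wide (which would point at DBWR or buffer cache pressure rather than this one session), I plan to cross-check with a query along these lines:
Code:
select event, total_waits, round(time_waited_micro/1e6, 1) seconds_waited
  from v$system_event
 where event in ('free buffer waits', 'db file sequential read');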
I started looking into the corresponding index tablespace to see whether there is any contention in the underlying data files.
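For the data file check, I am planning something like the query below (the tablespace name 'A_IDX_TS' is just a placeholder for the actual index tablespace):
Code:
select df.file_name, fs.phyrds, fs.readtim,
       round(fs.readtim * 10 / greatest(fs.phyrds, 1), 1) avg_read_ms
  from v$filestat fs, dba_data_files df
 where fs.file# = df.file_id
   and df.tablespace_name = 'A_IDX_TS'
 order by avg_read_ms desc;
If one file shows a much worse average read time than the others, that would point at contention on that particular data file.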
Thanks,
Vijay Tummala
Try hard to get what you like OR you will be forced to like what you get.
-
While investigating the issue further, I came across a nice Metalink document about the recommended method for obtaining a 10046 trace for tuning. It may be helpful to someone, so I am sharing the URL.
https://support.oracle.com/CSP/main/...442.1&type=NOT
Thanks,
Vijay Tummala
Try hard to get what you like OR you will be forced to like what you get.
-
If I see a big number in the VALUE column of v$segment_statistics, does it mean that the DB is spending more time allocating space?
Code:
select object_name, subobject_name, object_type, statistic_name, statistic#, value
  from v$segment_statistics
 where obj# in (select object_id from dba_objects where object_name = 'A_IDX')
 order by object_type, value desc;
OBJECT_NAME   SUBOBJECT_NAME  OBJECT_TYPE  STATISTIC_NAME    STATISTIC#           VALUE
------------  --------------  -----------  ----------------  ----------  --------------
A_IDX                         INDEX        space allocated           15      3757047808
A_IDX                         INDEX        space used                14      2555428919
A_IDX                         INDEX        logical reads              0       352393280
A_IDX                         INDEX        db block changes           3       119409200
If I recreate the object with a bigger extent size, will that help in any way?
This is Oracle 10g R2, and all the tablespaces are locally managed.
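One sanity check on the figures above: space used is roughly 68% of space allocated (2,555,428,919 / 3,757,047,808), which is not unusually sparse for a busy index, so a bigger extent size alone may not change much. If block-level fragmentation is suspected, the index structure could be validated as below (note that VALIDATE STRUCTURE locks the index while it runs, so it is best done outside the load window):
Code:
analyze index A_IDX validate structure;
select height, lf_rows, del_lf_rows, pct_used from index_stats;
A high DEL_LF_ROWS relative to LF_ROWS, or a low PCT_USED, would be an argument for a rebuild or coalesce rather than a bigger extent size.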
Thanks,
Vijay Tummala
Try hard to get what you like OR you will be forced to like what you get.
-
If you have that many inserts (maintaining indexes), the stats make sense and support what's being observed. Your best bet is to disable and rebuild the indexes; this isn't any different from a bulk load into a warehouse (or a big import having to update indexes along the way).
-
That may not be a feasible solution for me, Stecal, because most of the tables hold more than 125 million rows and 4-6 million rows are loaded into them every day.
What do you think about the stats below? I have been tracing a session for the last 5 hours; it is doing an update on a huge table that again has 3 indexes on it. Please note this session has not completed yet.
Code:
Elapsed times include waiting on following events:
Event waited on                            Times Waited   Max. Wait  Total Waited
-----------------------------------------  ------------  ----------  ------------
db file sequential read                          266920        1.39       2768.78
SQL*Net message to client                        240739        0.00          0.33
SQL*Net message from client                      240739        1.18         41.97
log file switch completion                            4        0.23          0.36
latch: checkpoint queue latch                         1        0.00          0.00
log file sync                                         1        0.00          0.00
latch: object queue header operation                  1        0.00          0.00
********************************************************************************
Vijay Tummala
Try hard to get what you like OR you will be forced to like what you get.
-
Hi Stecal,
The update statement I referred to in my previous post has just completed. Please see the tkprof output below and kindly share your thoughts.
Code:
call       count       cpu    elapsed       disk      query    current       rows
-------  -------  --------  ---------  ---------  ---------  ---------  ---------
Parse          0      0.00       0.00          0          0          0          0
Execute   296271   3900.76   17125.86     311558     912750    2129808     296271
Fetch          0      0.00       0.00          0          0          0          0
-------  -------  --------  ---------  ---------  ---------  ---------  ---------
total     296271   3900.76   17125.86     311558     912750    2129808     296271
Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: 83
Elapsed times include waiting on following events:
Event waited on                            Times Waited   Max. Wait  Total Waited
-----------------------------------------  ------------  ----------  ------------
db file sequential read                          311558        1.39       3223.30
SQL*Net message to client                        296271        0.00          0.41
SQL*Net message from client                      296271        1.18         50.56
log file switch completion                            4        0.23          0.36
latch: checkpoint queue latch                         1        0.00          0.00
log file sync                                         1        0.00          0.00
latch: object queue header operation                  1        0.00          0.00
********************************************************************************
Thanks,
Vijay Tummala
Try hard to get what you like OR you will be forced to like what you get.
-
What OS is this?
You probably have a storage problem: in the insert trace your sequential reads are taking around 25 ms per read, which is roughly 8 times slower than usual.
Another possibility is that you are using the filesystem buffer cache and suffering double-buffering effects.
Also, since you have a 10046 trace, you can look at which segments are causing these waits.
Show us the v$event_histogram output for the sequential reads.
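Something along these lines should do:
Code:
select event, wait_time_milli, wait_count
  from v$event_histogram
 where event = 'db file sequential read'
 order by wait_time_milli;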
-
Hi Pando,
Nice to see you back after a long time.
The OS is HP-UX B.11.31.
Actually, I have this problem with only 6 out of 120 Informatica sessions; of those 6, five do inserts and one does an update.
The other tables residing in the same tablespace don't have any problem. Please see the output from v$event_histogram.
Code:
    EVENT#  EVENT                      WAIT_TIME_MILLI  WAIT_COUNT
----------  -------------------------  ---------------  ----------
       116  db file sequential read                  1   609165281
       116  db file sequential read                  2    66522697
       116  db file sequential read                  4    93727756
       116  db file sequential read                  8   296013939
       116  db file sequential read                 16   170110709
       116  db file sequential read                 32    44205035
       116  db file sequential read                 64    21820058
       116  db file sequential read                128     7420962
       116  db file sequential read                256     2801755
       116  db file sequential read                512     1270400
       116  db file sequential read               1024      383451
       116  db file sequential read               2048       91451
       116  db file sequential read               4096        2047
       116  db file sequential read               8192         128
       116  db file sequential read              16384          21
       116  db file sequential read              32768           5
Thanks,
Vijay Tummala
Try hard to get what you like OR you will be forced to like what you get.
-
Hi Pando,
I missed mentioning in my previous post that I don't have a 10046 trace file. As I don't have control over the application users, I used the dbms_monitor package to enable tracing with waits.
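For the record, the calls I used were roughly as below (the SID and serial# shown are placeholders for the actual session's values from v$session):
Code:
exec dbms_monitor.session_trace_enable(session_id => 1234, serial_num => 56789, waits => TRUE, binds => FALSE);
-- ... and once done ...
exec dbms_monitor.session_trace_disable(session_id => 1234, serial_num => 56789);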
Thanks,
Vijay Tummala
Try hard to get what you like OR you will be forced to like what you get.
-
Originally Posted by vnktummala
The update statement I referred to in my previous post has just completed. Please see the tkprof output below and kindly share your thoughts.
As far as I can see, wait events on db file sequential read account for about 20% of the total elapsed time (3223 s out of 17126 s), so even if you managed to get rid of 100% of this particular wait event - which is not possible - you would be looking at a performance gain of only about 20%.
What are the chances of rewriting this Informatica mapping as a PL/SQL block, just for testing purposes?
I would write the code without explicit cursors, i.e. set-based, and I would eventually use the APPEND hint for the insert. On the first try I would commit only once, at the end of the job.
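As a sketch of what I mean (table and column names are made up, just to show the shape of the code):
Code:
insert /*+ append */ into target_tab (col1, col2, col3)
select s.col1, s.col2, s.col3
  from staging_tab s;
commit;  -- single commit at the end of the job
Keep in mind that a direct-path (APPEND) insert takes an exclusive lock on the table and the loaded rows are not visible until the commit, so this only fits if nothing else writes to the table during the load.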
Pablo (Paul) Berzukov
Author of Understanding Database Administration available at amazon and other bookstores.
Disclaimer: Advice is provided to the best of my knowledge but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions on a test non-production environment advisor should not held liable or responsible for any actions taken based on the given advice.