DBAsupport.com Forums - Powered by vBulletin

Thread: Snapshot too old with UNDO Retention

  1. #1
    Join Date
    Jul 2003
    Location
    Cincinnati
    Posts
    68

    Snapshot too old with UNDO Retention

    DBAs:

    Oracle Version 9204; Linux Env.
    UNDO TBS is 10GB
    DB has 10 Undo segments (max trans per rbs is 5)
    Max RBS Size on any UNDO is not more than 200MB
    Highest v$rollstat.HWMSIZE is 936501248 (900 MB)

    Every night the batch process (about 60 scripts) loads a shadow table through ETL and then runs delete-and-insert logic against the main table. The query runs for about 3 minutes before we get ORA-1555 errors, and I cannot understand under what circumstances we get them. Between 8 and 25 SQLs were running when this failed.

    We have been tuning UNDO_RETENTION, but I feel we are heading in the wrong direction. We debated whether we need more rollback segments, but I am not sure yet, since only 25 transactions were processing when this failed. Also, the shadow table had 101,776 rows in yesterday's batch process when this failed, and the master table has 105 million rows.

    On the segment that failed, the RBS size is 12 MB and the HWM is 427 MB.
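    For diagnosing this kind of failure, a sketch of a query against v$undostat may help (available with automatic undo management; column names are per the Oracle 9i data dictionary):

    ```sql
    -- Hedged diagnostic sketch: inspect recent undo activity per interval.
    -- UNXPSTEALCNT > 0 means unexpired undo extents were reused, which can
    -- raise ORA-01555 even when the undo tablespace has plenty of free space.
    SELECT begin_time,
           end_time,
           undoblks,        -- undo blocks consumed in the interval
           maxquerylen,     -- longest-running query (seconds)
           unxpstealcnt,    -- unexpired extents stolen
           ssolderrcnt      -- ORA-01555 errors raised in the interval
      FROM v$undostat
     ORDER BY begin_time DESC;
    ```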

    SQL> --SPOOL d_payment_tran.lst
    SQL> DELETE FROM CARL.D_PAYMENT_TRAN A
    2 WHERE EXISTS (SELECT 'X' FROM CARL_SHADOW.D_PAYMENT_TRAN B
    3 WHERE B.LOAN_NUMBER = A.LOAN_NUMBER
    4 AND B.PMT_TRANSACTION_DATE = A.PMT_TRANSACTION_DATE
    5 AND B.PMT_SEQ_NO = A.PMT_SEQ_NO);
    DELETE FROM CARL.D_PAYMENT_TRAN A
    *
    ERROR at line 1:
    ORA-01555: snapshot too old: rollback segment number 5 with name "_SYSSMU5$"
    too small


    Elapsed: 00:02:35.54
    SQL> COMMIT;

    Commit complete.

    Elapsed: 00:00:00.00
    SQL> -------------INSERT: D_PAYMENT_TRAN------------------------
    SQL> INSERT INTO CARL.D_PAYMENT_TRAN
    2 (
    3 LOAN_NUMBER,
    4 PMT_TRANSACTION_DATE,
    5 PMT_SEQ_NO,
    6 PMT_TOTAL_AMOUNT,
    7 PMT_TRANSACTION_CODE,
    8 PMT_PRINCIPAL_AMOUNT,
    9 PMT_INTEREST_AMOUNT,
    10 PMT_FEE_AMOUNT,
    11 PMT_A_H_INSURANCE_AMOUNT,
    12 PMT_LIFE_INSURANCE_AMOUNT,
    13 PMT_SUSPENSE_AMOUNT,
    14 PMT_ESCROW_AMOUNT,
    15 PMT_MISC_AMOUNT,
    16 PMT_EFFECTIVE_DATE,
    17 PMT_DUE_DATE
    18 )
    19 SELECT
    20 LOAN_NUMBER,
    21 PMT_TRANSACTION_DATE,
    22 PMT_SEQ_NO,
    23 PMT_TOTAL_AMOUNT,
    24 PMT_TRANSACTION_CODE,
    25 PMT_PRINCIPAL_AMOUNT,
    26 PMT_INTEREST_AMOUNT,
    27 PMT_FEE_AMOUNT,
    28 PMT_A_H_INSURANCE_AMOUNT,
    29 PMT_LIFE_INSURANCE_AMOUNT,
    30 PMT_SUSPENSE_AMOUNT,
    31 PMT_ESCROW_AMOUNT,
    32 PMT_MISC_AMOUNT,
    33 PMT_EFFECTIVE_DATE,
    34 PMT_DUE_DATE
    35 FROM CARL_SHADOW.D_PAYMENT_TRAN A
    36 WHERE NOT EXISTS
    37 (SELECT 'N' FROM CARL.D_PAYMENT_TRAN B
    38 WHERE B.LOAN_NUMBER = A.LOAN_NUMBER
    39 AND B.PMT_TRANSACTION_DATE = A.PMT_TRANSACTION_DATE
    40 AND B.PMT_SEQ_NO = A.PMT_SEQ_NO);

    101736 rows created.

    Can anyone suggest what could be the reason?

    Thx

  2. #2
    Join Date
    Nov 2000
    Location
    Pittsburgh, PA
    Posts
    4,166
    Try creating a PL/SQL collection and then using that for your delete.

  3. #3
    Join Date
    Mar 2004
    Location
    DC,USA
    Posts
    650
    200 MB may not be sufficient to run your query.

    > Rescript using more commits.
    > Make your delete batches small, with commits in between; likewise for the inserts.
    > Reschedule the execution to a time when undo usage is lower.
    > Increase the undo size and observe.
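    A rough sketch of the batched-delete idea (batch size and loop structure are illustrative, not tested against this system; note the caution below that more commits can make ORA-01555 worse for long-running readers):

    ```sql
    -- Hedged sketch of a commit-batched delete; 10000 is an arbitrary batch size.
    BEGIN
       LOOP
          DELETE FROM CARL.D_PAYMENT_TRAN A
           WHERE EXISTS (SELECT 'X'
                           FROM CARL_SHADOW.D_PAYMENT_TRAN B
                          WHERE B.LOAN_NUMBER          = A.LOAN_NUMBER
                            AND B.PMT_TRANSACTION_DATE = A.PMT_TRANSACTION_DATE
                            AND B.PMT_SEQ_NO           = A.PMT_SEQ_NO)
             AND ROWNUM <= 10000;
          EXIT WHEN SQL%ROWCOUNT = 0;   -- stop when nothing left to delete
          COMMIT;
       END LOOP;
       COMMIT;
    END;
    /
    ```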

  4. #4
    Join Date
    Jul 2003
    Location
    Cincinnati
    Posts
    68
    Thanks for the replies

    dbasan: I am not limiting my RBS to 200 MB. I am stating the values I see when I query v$rollstat, to show that it has not used much and does not require more.

    gandolf989: PL/SQL collection? What do you mean by this? Do you mean it would be better to write a PL/SQL block and delete using a cursor?

    I still want to know what caused the error last night.

    Thx

  5. #5
    Join Date
    Nov 2000
    Location
    Pittsburgh, PA
    Posts
    4,166
    Quote Originally Posted by dbasan
    200MB may not be sufficient to run your query?

    > rescript by using more commits.
    > make your delete batch small with commits in between. Vice versa for insert's
    > reschedule the execution time when undo usage is less.
    > Increase the undo size and observe.
    More commits are not the answer. Snapshot too old is caused by an inability to maintain a read-consistent view of the data for the duration of a query. Using a collection will prevent the issue from happening.

    Code:
    DECLARE
       TYPE pay_tran_record IS RECORD
          ( LOAN_NUMBER           CARL.D_PAYMENT_TRAN.LOAN_NUMBER%TYPE, 
            PMT_TRANSACTION_DATE  CARL.D_PAYMENT_TRAN.PMT_TRANSACTION_DATE%TYPE, 
            PMT_SEQ_NO            CARL.D_PAYMENT_TRAN.PMT_SEQ_NO%TYPE);
    
       TYPE pay_tran_table_type
         IS TABLE OF pay_tran_record
         INDEX BY BINARY_INTEGER;
    
       pay_tran_table pay_tran_table_type;
    BEGIN
       SELECT LOAN_NUMBER, PMT_TRANSACTION_DATE, PMT_SEQ_NO
         BULK COLLECT INTO pay_tran_table
         FROM CARL_SHADOW.D_PAYMENT_TRAN;
    
       FORALL i IN pay_tran_table.FIRST..pay_tran_table.LAST
            DELETE FROM CARL.D_PAYMENT_TRAN
             WHERE LOAN_NUMBER          = pay_tran_table(i).LOAN_NUMBER
               AND PMT_TRANSACTION_DATE = pay_tran_table(i).PMT_TRANSACTION_DATE
               AND PMT_SEQ_NO           = pay_tran_table(i).PMT_SEQ_NO;
       COMMIT;
    END;
    /

  6. #6
    Join Date
    Mar 2007
    Location
    Ft. Lauderdale, FL
    Posts
    3,555
    Let me add some questions just to keep our minds open...

    Does this issue happen often, or was this the first time for this query?
    Have any changes been made lately to the queries that run at that particular time?
    Can you assign this particular transaction to a large, exclusive RBS?
    Can you replicate the issue in your UAT environment?
    Do you have other queries hitting the same table at the same time?
    Do you perhaps have a trigger on your base table?
    Last edited by PAVB; 04-11-2007 at 01:26 PM.
    Pablo (Paul) Berzukov

    Author of Understanding Database Administration available at amazon and other bookstores.

    Disclaimer: Advice is provided to the best of my knowledge, but no implicit or explicit warranties are given. Since the advisor explicitly encourages testing any and all suggestions in a non-production test environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.

  7. #7
    Join Date
    Jul 2003
    Location
    Cincinnati
    Posts
    68
    Thanks gandolf989 for the Collection SQL. I shall work on it.

    PAVB: Please see the answers

    Does this issue happen often, or was this the first time for this query?
    -- It has been happening since we migrated from SQL*Loader to Informatica. Informatica loads the files from source into the staging tables much faster.

    Have any changes been made lately to the queries that run at that particular time? -- No changes have been made, and nothing else runs at that time.

    Can you assign this particular transaction to a large, exclusive RBS?
    -- We did not find a reason to do that. The UNDO is 10 GB and the most any RBS has grown is 400 MB.

    Can you replicate the issue in your UAT environment? -- We cannot.

    Do you have other queries hitting the same table at the same time?
    -- This batch process runs at 2 AM; no users are on.

    Do you perhaps have a trigger on your base table? -- None. This is more like a data mart application.

  8. #8
    Join Date
    Nov 2006
    Location
    Sofia
    Posts
    630
    We discussed this at length here: http://www.dbasupport.com/forums/sho...ight=ORA-01555

    Please take a look.

    Regards

  9. #9
    Join Date
    Sep 2001
    Location
    Makati, Philippines
    Posts
    857
    Quote Originally Posted by Bore
    We discussed a lot about that here http://www.dbasupport.com/forums/sho...ight=ORA-01555

    Please take a look

    Regards
    There were a lot of RUBBISH remarks in that link you provided, Boris.

    Anyway, if I were given the same problem, knowing that the SQL statement is in fact running for a long period of time, why not optimise the SQL itself?
    Even tuning this part would help:
    --------------
    SELECT 'X' FROM CARL_SHADOW.D_PAYMENT_TRAN B
    WHERE B.LOAN_NUMBER = A.LOAN_NUMBER
    AND B.PMT_TRANSACTION_DATE = A.PMT_TRANSACTION_DATE
    AND B.PMT_SEQ_NO = A.PMT_SEQ_NO
    --------------
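    One common way to speed up that correlated EXISTS probe is a composite index on the shadow table's join columns (the index name and column order below are illustrative, not from the original system):

    ```sql
    -- Hedged sketch: a composite index covering all three correlation columns
    -- lets the EXISTS subquery resolve with an index lookup instead of a scan.
    CREATE INDEX carl_shadow.d_payment_tran_ix1
        ON carl_shadow.d_payment_tran
           (loan_number, pmt_transaction_date, pmt_seq_no);
    ```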

  10. #10
    Join Date
    Jul 2003
    Location
    Cincinnati
    Posts
    68
    I think the minute someone sees ORA-1555 they recommend increasing the RBS, adding more commits, or the like. Our problem seems to be more than that. We have no statistics to prove that we need more segments or need to increase UNDO retention from 3 hours. If you look at the elapsed time for the DMLs that failed, none runs more than 10 minutes before failing, and with 10 GB of UNDO and the highest undo segment not growing past 400 MB, it is confusing. I shall try the collections, but this is a production system that has run well for a few years. We are in the mortgage industry, and processing counts have dropped by half in the 3 years since these SQLs were written, with no issues until now.

    There is something more than this for which I need a solution. I need to process 60 files, and the whole run takes about 1 hour. Max transactions per RBS is 5, and we have 11 RBS; I am wondering if this is where I should concentrate. Any thoughts on whether we should reduce UNDO retention, since the processing takes only 1 hour? Secondly, how do I get more segments when space management is AUTO?
    Thx
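    For the last question, with automatic undo management Oracle manages the segment count itself, but a sketch like this shows what it is doing (the tablespace name below is an assumption; substitute your own):

    ```sql
    -- Hedged sketch: confirm the undo configuration in force.
    SELECT name, value
      FROM v$parameter
     WHERE name IN ('undo_management', 'undo_retention', 'undo_tablespace');

    -- Segments Oracle has created (and their status) in the undo tablespace;
    -- 'UNDOTBS1' is an assumed name, not taken from this system.
    SELECT segment_name, status
      FROM dba_rollback_segs
     WHERE tablespace_name = 'UNDOTBS1';
    ```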
