-
Through PL/SQL I am updating a balance table from a main transaction table of 28 lac (2.8 million) rows. After every UPDATE and INSERT I issue a COMMIT.
I have 6 rollback segments online with 5 MB INITIAL, 5 MB NEXT, MINEXTENTS 5 and the autoextend option on. After updating around 5 lac (500,000) rows it fails with:
ORA-01555: snapshot too old
How can I solve this problem?
Thanks in advance
FROM ANUJ PATHAK
HERO PUCH
INDIA
-
Make sure you use large rollback segments! Check the status of your rollback segments.
Sergey.
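If a suitably large rollback segment already exists, a big batch transaction can be pointed at it explicitly. A minimal sketch, assuming a segment named BIG_RBS is online (the name is an assumption; note that SET TRANSACTION must be the first statement of the transaction):

```sql
-- Direct the next transaction to a specific (large) rollback segment.
-- BIG_RBS is an assumed name; substitute one from DBA_ROLLBACK_SEGS.
SET TRANSACTION USE ROLLBACK SEGMENT big_rbs;

UPDATE closing_balance SET balance = 0;  -- the big batch DML goes here

COMMIT;  -- the assignment lasts only until the transaction ends
```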
-
Create new rollback segments with large INITIAL and OPTIMAL values.
Hisham Nagia
IT Manager For Development
Oracle Consultant - OCP
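For example, a sketch only - the tablespace name RBS and all the sizes below are assumptions, adjust them to your environment:

```sql
-- Create a large rollback segment; INITIAL/NEXT/OPTIMAL values are examples.
CREATE ROLLBACK SEGMENT big_rbs
  TABLESPACE rbs
  STORAGE (INITIAL 10M NEXT 10M MINEXTENTS 20 OPTIMAL 200M);

-- A newly created rollback segment is offline until you bring it online.
ALTER ROLLBACK SEGMENT big_rbs ONLINE;
```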
-
Make sure to commit in between, e.g. every 10,000 rows.
-
This is the most typical situation in which to get ORA-1555. It is called "fetch across commit". The irony is, the more frequently you commit on the open cursor, the more chances you have to get ORA-1555 (and the size/number of your rollback segments doesn't make any difference).
You could:
a) Commit less often - if you have enough space in your RB tablespace, you could perform a commit only after all the rows are processed.
b) Commit periodically, but after every commit you have to close and reopen your PL/SQL cursor. If using this method, you should reopen the cursor with a condition such that only unprocessed rows are included in the new cursor. Here is an example with SCOTT.EMP:
DECLARE
  dummy         NUMBER;
  current_empno NUMBER := 0;
  CURSOR c1 (p_empno NUMBER) IS
    SELECT * FROM emp
     WHERE empno > p_empno   -- only rows not yet processed
     ORDER BY empno
       FOR UPDATE OF sal;
BEGIN
  LOOP
    FOR r1 IN c1 (current_empno) LOOP
      current_empno := r1.empno;
      UPDATE emp SET sal = sal*2 WHERE CURRENT OF c1;
      EXIT WHEN c1%ROWCOUNT >= 10; -- commit every 10 rows!
    END LOOP;
    COMMIT;
    -- check if there are any more unprocessed rows
    BEGIN
      SELECT NULL INTO dummy FROM emp WHERE empno > current_empno;
    EXCEPTION
      WHEN NO_DATA_FOUND THEN EXIT;
      WHEN TOO_MANY_ROWS THEN NULL;
    END;
  END LOOP;
END;
/
-
Thanks jmodic for your solution, it helped me a lot.
What if the error ORA-01555 comes even when I am not updating the table for which I have opened the cursor?
For example, a cursor on the "trn" table of 28 lac rows, and based on each fetched row I am updating another table, like "closing_balance".
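The pattern being described is something like this sketch (the column names acct_no / amount / balance are made up for illustration, only "trn" and "closing_balance" come from the post):

```sql
DECLARE
  CURSOR c_trn IS
    SELECT acct_no, amount FROM trn;     -- cursor over the 28 lac row table
  v_done PLS_INTEGER := 0;
BEGIN
  FOR r IN c_trn LOOP
    UPDATE closing_balance               -- only the OTHER table is updated
       SET balance = balance + r.amount
     WHERE acct_no = r.acct_no;
    v_done := v_done + 1;
    IF MOD(v_done, 10000) = 0 THEN
      COMMIT;                            -- still a fetch across commit on TRN
    END IF;
  END LOOP;
  COMMIT;
END;
/
```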
-
If you are not changing the table on which you have the open cursor (are you sure you are not performing DML on it through triggers or procedures?), then someone else must have changed some rows while your cursor was open on it.
In all circumstances, ORA-1555 is caused because the database can't construct a read-consistent view of the data in a table as of the moment when you opened the cursor.
Here is a possible scenario:
1. At time T1 you open the cursor on table TABLE1.
2. At time T2 transaction TRANS2 changes some blocks of TABLE1. The block contents before the change (that is, the block contents as they were at time T1) are stored in a rollback segment and are protected from being overwritten. They are not protected because of your cursor's read-consistent view, but because transaction TRANS2 might need them in case it has to perform a rollback.
3. Until TRANS2 commits, your cursor will always get a read-consistent view of the changed blocks from the rollback segment.
4. At time T3 TRANS2 commits. From that moment on the changed blocks in the rollback segment are not protected any more, because the transaction which changed them doesn't need them any more. Any other transaction can use this part of the rollback segment.
5. At time T4 some other transaction TRANS3 overwrites those rollback segment blocks. The original data of the changed blocks from TABLE1 as it was at time T1 no longer exists in the database!
6. At time T5 your cursor tries to fetch the next row from TABLE1. If this row is stored in the changed blocks, then the system cannot get the content of this row as it was at time T1 (even though TRANS2 may not have changed this particular row, it might have changed some other row from this same block that was not even included in the result set of your cursor!). And voila - ORA-1555.
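To see whether undo is actually being recycled this way, you could watch the WRAPS and EXTENDS statistics of the online rollback segments while the job runs (requires access to the V$ views):

```sql
-- Rising WRAPS while the long-running cursor is open means old undo
-- is being overwritten - exactly the situation that leads to ORA-1555.
SELECT n.name, s.rssize, s.wraps, s.extends, s.shrinks
  FROM v$rollname n, v$rollstat s
 WHERE n.usn = s.usn;
```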
HTH,
Jurij Modic
ASCII a stupid question, get a stupid ANSI
24 hours in a day .... 24 beer in a case .... coincidence?
-
An alternative solution is to open multiple cursors for different sets of rows from the same table, probably using the primary key column in the WHERE clause. For example, cursor c1 may be opened for rows 1 to 100000, cursor c2 for 100001 to 200000, etc., committing the updated rows every 10000 rows.
-
[QUOTE][i]Originally posted by tamilselvan [/i]
[B]The alternate solution is to open multiple cursors for different set of rows from the same table probably using primary key column in the where clause. For example, cursor c1 may be opened for rows 1 to 100000, cursor c2 100001 to 200000, etc....and commit the updated rows for every 10000 rows. [/B][/QUOTE]
I wouldn't recommend this solution. It is reasonable to expect that (even though each cursor is processing the same number of rows) some cursors would finish their work faster than others, hence committing their work while other cursors are still fetching rows. Even though the cursors are not interfering with each other in terms of the rows processed, they are most certainly interfering with each other in terms of the blocks they process. In other words, although each cursor is processing its own distinct rows, many cursors are changing the same blocks where those rows reside. If there are many rows (hence many cursors), there is a chance that some blocks in the rollback segments, no longer needed by the first closed cursor, are overwritten by another, still open cursor, thus overwriting the data needed for a read-consistent view of some other still-working cursor.
Remember that read consistency is performed by Oracle on the block level, not on the record level.
Jurij Modic
ASCII a stupid question, get a stupid ANSI
24 hours in a day .... 24 beer in a case .... coincidence?