I wouldn't recommend this solution. It is reasonable to expect that, even though each cursor processes the same number of rows, some cursors will finish their work faster than others, and therefore commit while other cursors are still fetching rows. Even though the cursors do not interfere with each other in terms of the rows they process, they most certainly interfere with each other in terms of the blocks they process. In other words, although each cursor processes its own distinct rows, many cursors are changing the same blocks where those rows reside. If there are many rows (and hence many cursors), there is a chance that some blocks in the rollback segments no longer needed by the first cursor to close get overwritten by another, still-open cursor, thus overwriting the data needed for a read-consistent view by some other still-working cursor (the classic ORA-01555 "snapshot too old" scenario).
Originally posted by tamilselvan
The alternate solution is to open multiple cursors for different sets of rows from the same table, probably using the primary key column in the WHERE clause. For example, cursor c1 may be opened for rows 1 to 100000, cursor c2 for rows 100001 to 200000, etc., committing the updated rows every 10000 rows.
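For clarity, the quoted approach could be sketched roughly as follows in PL/SQL (the table name big_table, its columns id and processed, and the exact ranges are hypothetical, just standing in for "one cursor per primary-key range"). Each session would run a block like this for its own range:

```sql
DECLARE
  -- One of several cursors, each covering its own primary-key range;
  -- another session would use e.g. BETWEEN 100001 AND 200000, and so on.
  CURSOR c1 IS
    SELECT id FROM big_table WHERE id BETWEEN 1 AND 100000;
  v_count PLS_INTEGER := 0;
BEGIN
  FOR r IN c1 LOOP
    UPDATE big_table SET processed = 'Y' WHERE id = r.id;
    v_count := v_count + 1;
    IF MOD(v_count, 10000) = 0 THEN
      COMMIT;  -- commit while c1 (and the other sessions' cursors) are
               -- still open: a "fetch across commit". The freed rollback
               -- blocks may be reused, so another still-open cursor can
               -- later fail with ORA-01555 when it needs them for its
               -- read-consistent view.
    END IF;
  END LOOP;
  COMMIT;
END;
/
```

This is exactly the fetch-across-commit pattern the objection above is about: the rows are disjoint per cursor, but the blocks (and the rollback segments protecting them) are shared.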