Ahh, the infamous "Snapshot too old - Rollback segment too small" error (ORA-01555). While this problem cannot be eliminated, its occurrence can be minimized. You have to remember that Oracle is designed to present a 'read-consistent' view of the data as it existed at the time a query was issued.
Query A: CURSOR my_cur is
select * from million_row_table;
Query B: UPDATE million_row_table
set status = 'P',
status_date = SYSDATE
where mrt_pk = mrt_pk_to_change;
While Query A is running, Update B is being executed repeatedly by 15 workers in the Call Center, 2 case managers and 5 claims processors. When Query A starts it is tagged with an SCN (let's say 10). When Update B is executed and committed, each execution is tagged with a different (and ever-increasing) SCN. As the cursor fetches records it finds that some records have changed, and so to be 'read consistent' it has to go to the rollback segments to find the blocks as of SCN 10. What goes wrong is this: if enough Update B's occur during a long-running Query A, it is possible for so many updates to commit that a rollback segment no longer contains the 'snapshot' of the data required by Query A. The segment may not have filled up (it usually hasn't), but since the data contained in the rollback segment has been marked as committed, the extents containing the rollback data are marked as available and get reused by subsequent transactions. (Remember, since Query A is a read transaction it neither acquires nor exerts any control over rollback segments.)
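If you want to see whether this is happening on your system, the v$ views will show how hard each rollback segment is being reused. A quick sketch (run as a DBA user; column list from memory, so verify against your release):

```sql
-- How busy are the rollback segments?  A high WRAPS count means extents
-- are being reused frequently, which is exactly the setup for ORA-01555.
SELECT n.name, s.extents, s.rssize, s.wraps, s.shrinks, s.extends
  FROM v$rollname n, v$rollstat s
 WHERE n.usn = s.usn;
```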
Of course, your rollback segments can simply be too small, but that's an easy fix: just drop and recreate larger rollback segments. But if they're not too small, here's a list of things to try (in no particular order); some of them might help you.
1. Run large reports and batch processes during periods of low user activity.
2. Avoid running large reports at the same time as batch processes are updating the same data objects.
3. Commit less frequently. As mentioned above, try 10,000 row cycles or higher. In our data warehouses (DWxxx), we have 500MB rollback segments. You may be able to run some processes and not commit until all the rows are updated. (Of course, if you are running a read-only transaction, this tip will not help you.)
4. Limit the query to a range of records for any given pass, rather than the whole table.
5. Force long-running queries to avoid re-visiting the same block several times during the query. This can be done either by forcing a full table scan (which many of our large queries already do) or by adding a 'dummy' sort to the query so that all the records are retrieved and sorted, then fetched sequentially from the sorted data.
6. Use a temporary table to hold the data needed for the cursor base query. A CTAS statement (create table A as select * from table B) works well for this.
7. Where possible, use SQL-based set processing rather than row-based PL/SQL processing. This provides a great performance boost.
8. ONE OF THE MOST IMPORTANT: Do not fetch across commits. A fetch across commit occurs when a commit is issued in a loop, but the read cursor remains open and is fetched after the commit.
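To make item 8 concrete, here is a sketch of the anti-pattern, using the hypothetical million_row_table from the example above, followed by the set-based alternative from item 7:

```sql
-- ANTI-PATTERN: fetch across commit.  The cursor stays open while we
-- commit inside the loop, so every fetch after a commit must rebuild its
-- read-consistent image from the rollback segments -- eventually one of
-- those images has been overwritten and the fetch raises ORA-01555.
DECLARE
  CURSOR my_cur IS
    SELECT mrt_pk FROM million_row_table WHERE status = 'N';
BEGIN
  FOR rec IN my_cur LOOP
    UPDATE million_row_table
       SET status = 'P',
           status_date = SYSDATE
     WHERE mrt_pk = rec.mrt_pk;
    COMMIT;               -- committing while my_cur is still open
  END LOOP;
END;
/

-- Safer: one set-based statement (item 7), one commit at the end.
UPDATE million_row_table
   SET status = 'P',
       status_date = SYSDATE
 WHERE status = 'N';
COMMIT;
```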
Good luck and have fun. The best part of being a dba is finding the best fix for a problem.
Dumb question - How do you increase the size of your rollback segments? The scripts we have to create ours (historically) only contain initial and next extent storage options. i.e., there is no size value.
So we have 8 rollback segments created as
create public rollback segment rb1 storage(initial 1M next 1M)
create public rollback segment rb2 storage(initial 1M next 1M)
Another option is to create a new, larger rollback segment and then offline your existing ones. This way the job will use the larger, available RBS and no permanent changes are made to the DB. After the job runs you can return to normal and save the larger RBS for larger jobs, i.e., take the large one offline and bring the smaller ones back online.
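In SQL that approach looks something like this (segment and tablespace names are made up; rb1/rb2 are the segments from the create scripts above):

```sql
-- Create a large rollback segment for the big job.
CREATE ROLLBACK SEGMENT rb_big
  TABLESPACE rbs_big
  STORAGE (INITIAL 100M NEXT 100M MINEXTENTS 2);
ALTER ROLLBACK SEGMENT rb_big ONLINE;

-- Take the small ones offline so new transactions land in rb_big...
ALTER ROLLBACK SEGMENT rb1 OFFLINE;
ALTER ROLLBACK SEGMENT rb2 OFFLINE;

-- ...or, without offlining anything, pin just the big job's
-- transaction to the big segment (must be the first statement
-- of the transaction):
SET TRANSACTION USE ROLLBACK SEGMENT rb_big;

-- Afterwards, return to normal.
ALTER ROLLBACK SEGMENT rb_big OFFLINE;
ALTER ROLLBACK SEGMENT rb1 ONLINE;
ALTER ROLLBACK SEGMENT rb2 ONLINE;
```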
That was just something that I wrote up a while back for my developers. We were having that problem quite a bit with a packaged app combined with in-house developed reporting (they wanted to combine an OLTP system with a data warehouse) and this was part of the solution.
As to how to change the size of a rollback segment, you have part of it already. For an existing segment you can alter the next extent size and the maximum number of extents (the initial extent can only be set at creation, so to change that you drop and recreate the segment). That way you can change the size of a rollback segment. I prefer to have one rollback segment per tablespace rather than all my rollback segments in a single tablespace; that way I can move tablespaces around on different disks to reduce I/O contention. But that's not always practical or the "best fit" solution. Constantly working to improve the world is part of the job of a dba.
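For the earlier question about the create scripts, the two paths look roughly like this (storage numbers are illustrative only):

```sql
-- NEXT, MAXEXTENTS and OPTIMAL can be altered in place...
ALTER ROLLBACK SEGMENT rb1
  STORAGE (NEXT 10M MAXEXTENTS 249 OPTIMAL 100M);

-- ...but INITIAL cannot, so to truly resize you drop and recreate
-- (the segment must be offline before it can be dropped).
ALTER ROLLBACK SEGMENT rb1 OFFLINE;
DROP ROLLBACK SEGMENT rb1;
CREATE PUBLIC ROLLBACK SEGMENT rb1
  STORAGE (INITIAL 10M NEXT 10M MINEXTENTS 2 MAXEXTENTS 249);
ALTER ROLLBACK SEGMENT rb1 ONLINE;
```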