We have a long-running batch job that takes hours to complete. The customer has about 100,000 records in a table. Our application has a stored procedure that goes through each record, performs some computations/validations on various columns, and updates certain columns in the table. It performs a commit after processing a fixed number of records (to keep some control over the space required by the rollback segments). This serial processing of the records is causing performance problems.
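To make the setup concrete, here is a minimal sketch of the kind of serial procedure described above. The table and column names (`big_table`, `id`, `status`) and the commit interval of 1,000 are hypothetical placeholders, not our actual code:

```
CREATE OR REPLACE PROCEDURE process_records IS
    l_count PLS_INTEGER := 0;
BEGIN
    FOR rec IN (SELECT id FROM big_table) LOOP
        -- computations/validations on the row would go here,
        -- followed by an update of certain columns
        UPDATE big_table
           SET status = 'PROCESSED'
         WHERE id = rec.id;

        l_count := l_count + 1;
        IF MOD(l_count, 1000) = 0 THEN
            COMMIT;  -- periodic commit to limit rollback-segment use
        END IF;
    END LOOP;
    COMMIT;
END;
/
```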

We are looking to see if we can do some parallel processing to speed up the execution time. Can I simultaneously spawn off calls to the same procedure (probably using dbms_job) with criteria like:

Execute the procedure on records 1 - 10000
Execute the procedure on records 10001 - 20000
...
Execute the procedure on records 90001 - 100000
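The idea would be something like the following sketch, assuming the procedure were changed to take low/high bound parameters so that each job works on a disjoint slice of ids (the procedure name and the ten 10,000-row slices are illustrative assumptions):

```
DECLARE
    l_job BINARY_INTEGER;
BEGIN
    FOR i IN 0 .. 9 LOOP
        dbms_job.submit(
            job  => l_job,
            what => 'process_records(' || (i * 10000 + 1) ||
                    ', ' || ((i + 1) * 10000) || ');'
        );
    END LOOP;
    COMMIT;  -- submitted jobs start only after this session commits
END;
/
```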

Will this cause any contention, especially since the procedure needs to update records in the table (although each job would touch a different set of records)? And what about transactional integrity?

If this is possible, does the environment need multiple processors to handle this and actually give a performance gain?

Any help will be appreciated.
Thank You.