The goal is to archive data from a source table into a backup table (structurally identical to the source table) and delete the same data from the source table.
Constraints are (please don't ask WHY):
01. Must be implemented as a Perl script.
02. If the operation fails, it should be restartable without reprocessing the entire data volume.
03. #02 calls for doing the work in batches of a small number of rows (say 1000), which also avoids rollback/undo space issues.
04. There is no primary key on source table.
05. The source table is not partitioned and cannot be partitioned.
What I have come up with so far is:
Insert data from the source table into the backup table using:
Code:
insert into BACKUP_TABLE
select *
from (
      select *
      from SOURCE_TABLE
      order by col1
     )
where rownum < 1001;
If this succeeds, delete the "same" data from the source table using:
Code:
delete from SOURCE_TABLE
where col1 in
      (
       select col1
       from (
             select *
             from SOURCE_TABLE
             order by col1
            )
       where rownum < 1001
      )
and rownum < 1001;
If this also succeeds, COMMIT.
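To make the overall batching loop concrete, here is a minimal runnable sketch in Python with SQLite, used purely for illustration (the real script would be Perl with DBI against Oracle, and the table/column names are just placeholders). It uses the row identifier (SQLite's `rowid`, analogous to Oracle's `ROWID`) rather than `col1` to tie the INSERT and the DELETE to exactly the same physical rows — an assumption on my part, not the approach shown above:

```python
import sqlite3

BATCH_SIZE = 3  # e.g. 1000 in the real job

# Toy schema standing in for SOURCE_TABLE / BACKUP_TABLE.
conn = sqlite3.connect(":memory:")
conn.execute("create table SOURCE_TABLE (col1 integer)")
conn.execute("create table BACKUP_TABLE (col1 integer)")
conn.executemany("insert into SOURCE_TABLE values (?)",
                 [(i,) for i in range(10)])
conn.commit()

while True:
    # Capture the physical row identifiers of one batch up front so the
    # later DELETE removes exactly the rows that were copied.
    rowids = [r[0] for r in conn.execute(
        "select rowid from SOURCE_TABLE order by col1 limit ?",
        (BATCH_SIZE,))]
    if not rowids:
        break
    placeholders = ",".join("?" * len(rowids))
    conn.execute("insert into BACKUP_TABLE "
                 f"select col1 from SOURCE_TABLE where rowid in ({placeholders})",
                 rowids)
    conn.execute(f"delete from SOURCE_TABLE where rowid in ({placeholders})",
                 rowids)
    conn.commit()  # commit per batch keeps undo/rollback usage small
```

Because each batch is committed only after both statements succeed, a failed run can simply be restarted and will pick up with whatever rows remain in the source table.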
This seems to work in the various test cases I could imagine and create. But the SQL (especially the DELETE) feels complicated and clumsy.
I'm thinking there should be a better way to do this. Maybe by using one of Oracle's analytic functions?
So, please suggest a better way of doing this. Most of the databases where this will be implemented are 9i or later.
Expecting an intelligent solution from the forum...