-
Please suggest a better way
Goal is to archive data from a source table into backup table (structurally identical to corresponding source table) and delete the same data from source table.
Constraints are : (please don't ask WHY)
01. Should be implemented as a PERL script
02. In case the operation fails, it should be re-startable without reprocessing the entire data volume.
03. #02 calls for it being done in steps of small # of rows (say 1000) so that it doesn't encounter rollback/undo space issues.
04. There is no primary key on source table.
05. Source table is neither partitioned nor can it be partitioned.
What I have come up with so far is :
Insert data from source table into backup table using :
Code:
insert into BACKUP_TABLE
select *
  from (select *
          from SOURCE_TABLE
         order by col1)
 where rownum < 1001;
If this succeeds, delete "same" data from source table using :
Code:
delete from SOURCE_TABLE
 where col1 in (select col1
                  from (select *
                          from SOURCE_TABLE
                         order by col1)
                 where rownum < 1001)
   and rownum < 1001;
If this also succeeds, COMMIT.
This seems to work in the various test cases I could imagine and create. But the SQL (especially the DELETE) looks complicated and clumsy - well, kind of!!
I'm thinking there should be a better way to do this. Maybe by using one of Oracle's analytic functions?
So, please suggest a better way of doing this. Most of the databases where this will be implemented are >= 9i.
Expecting an intelligent solution from the forum...
svk
-
why so complicated?
1) create backup_table as select * from source_table
2) truncate source_table
and, with the given information, I would say that anything else would be nonsense
-
Intelligent solution eh?
Partitioning is a very efficient way of doing this, although it is not a cheap option. Is it possible?
-
Svk, the best way is as listed above, in reverse order of the thread: #3 - #2 - #1
"What is past is PROLOGUE"
-
SVK's approach may not work correctly since the source table has no PK.
Your approach should be:
Code:
loop
  read 1000 rows from the source table and lock them (SELECT ... FOR UPDATE)
  exit when no rows are found
  insert those rows into the backup table
  delete the locked rows from the source table
  commit
end loop
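A minimal PL/SQL sketch of this loop, deleting the locked rows by ROWID so the INSERT and DELETE are guaranteed to hit the same rows even without a PK. The timestamp column name (created_ts) and the :cutoff bind variable are illustrative, not from the original post; BULK COLLECT with LIMIT-style batching and FORALL are available from 9i onward.
Code:
DECLARE
  TYPE rid_tab IS TABLE OF ROWID;
  v_rids rid_tab;
BEGIN
  LOOP
    -- Grab and lock one batch of up to 1000 archivable rows.
    SELECT rowid BULK COLLECT INTO v_rids
      FROM source_table
     WHERE created_ts < :cutoff
       AND rownum <= 1000
       FOR UPDATE;
    EXIT WHEN v_rids.COUNT = 0;

    -- Copy exactly those rows into the backup table...
    FORALL i IN 1 .. v_rids.COUNT
      INSERT INTO backup_table
        SELECT * FROM source_table WHERE rowid = v_rids(i);

    -- ...then delete exactly those rows from the source.
    FORALL i IN 1 .. v_rids.COUNT
      DELETE FROM source_table WHERE rowid = v_rids(i);

    -- Commit releases the locks; a failed run simply restarts here
    -- and picks up the remaining rows.
    COMMIT;
  END LOOP;
END;
/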
Tamil
-
Tamil's got the ultimate solution, nail it down, then copy it.
Just curious, by business rule, is there any chance of the data being updated while you're copying?
Ken
-
Originally Posted by KenEwald
Just curious, by business rule, is there any chance of the data being updated while you're copying?
Not if it's locked, as in the solution.
Jeff Hunter
-
Thanks all. One thing I forgot to mention was that the data archival needs to happen based on the age of records. There is going to be a timestamp column in the source tables, so insert-all-into-backup-and-truncate-source is not what is needed.
Partitioning is not an option - at least immediately.
The records that need to get archived will not get updated while being archived. There will be new rows getting created in the source table though.
Tamil / Ken: How do I delete just the locked rows? And, if I may ask, why do you think the ROWNUM-based solution will not work? Any technical reason for that?
svk
-
Originally Posted by svk
Thanks all.
Tamil / Ken: How do I delete just the locked rows? And, if I may ask, why do you think the ROWNUM-based solution will not work? Any technical reason for that?
When you read the 1000 rows that you insert into the backup table, use the same condition (or their ROWIDs) to delete them from the source.
With ROWNUM, the rows may come back in a different order if the table undergoes DML between your INSERT and your DELETE, so the two statements are not guaranteed to touch the same rows. Use the primary key column if available, or the timestamp column, in the WHERE conditions.
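For example, driving both statements off the timestamp column (hypothetical name created_ts) makes the pairing explicit. Since the OP says archivable rows are not updated and new rows arrive with newer timestamps, the predicate stays stable between the two statements:
Code:
-- Archive everything older than the cutoff...
INSERT INTO backup_table
  SELECT * FROM source_table WHERE created_ts < :cutoff;

-- ...then delete by the very same predicate, in the same transaction.
DELETE FROM source_table WHERE created_ts < :cutoff;

COMMIT;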
"What is past is PROLOGUE"
-
The tricky part here does not seem to be the insert-records-to-archive, but the delete-from-original-table.
Seems to me that you could run a single direct path (append) insert into the archive table pretty efficiently for all records where timestamp < some date, possibly with NOLOGGING although you'd want to run a backup afterwards. You could then run a simple loop deleting the rows from the original table 1000 at a time (although I expect you could go higher than that) until no rows are deleted.
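A sketch of that approach, under the same assumptions as above (illustrative column name created_ts and a :cutoff bind variable). Note that if the script dies mid-delete, a restart should resume with the delete loop only, to avoid re-inserting rows already archived:
Code:
-- Phase 1: one direct-path insert of the whole batch.
INSERT /*+ APPEND */ INTO backup_table
  SELECT * FROM source_table WHERE created_ts < :cutoff;
COMMIT;

-- Phase 2: delete in small chunks to keep undo/rollback usage bounded.
BEGIN
  LOOP
    DELETE FROM source_table
     WHERE created_ts < :cutoff
       AND rownum <= 1000;
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
END;
/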