DBAsupport.com Forums - Powered by vBulletin
Results 1 to 10 of 13

Thread: Please suggest a better way

  1. #1
    Join Date
    Jul 2000
    Posts
    521

    Please suggest a better way

Goal is to archive data from a source table into a backup table (structurally identical to the corresponding source table) and delete the same data from the source table.

Constraints are (please don't ask WHY):
01. It should be implemented as a PERL script.
02. If the operation fails, it should be re-startable without having to deal with the entire data volume again.
03. #02 calls for doing it in steps of a small number of rows (say 1000) so that it doesn't run into rollback/undo space issues.
04. There is no primary key on the source table.
05. The source table is neither partitioned nor can it be partitioned.

What I have come up with so far is:
Insert data from the source table into the backup table using:
    Code:
    insert into BACKUP_TABLE
    select *
      from (
            select *
              from SOURCE_TABLE
             order by col1
           )
     where rownum<1001;
If this succeeds, delete the "same" data from the source table using:
    Code:
    delete SOURCE_TABLE
     where col1 in
           (
            select col1
              from (
                    select *
                      from SOURCE_TABLE
                     order by col1
                   )
             where rownum<1001
           )
       and rownum<1001;
    If this also succeeds, COMMIT.
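For what it's worth, the batch-then-commit flow above can be sketched as runnable code. The snippet below is a minimal illustration in Python, using sqlite3 purely as a stand-in for Oracle (the real constraint calls for Perl against Oracle, and the schema here is made up from the post's example). It keys the delete on sqlite's rowid to pin down exactly the rows that were archived; in Oracle, ROWID could play the same role, which also sidesteps the missing-primary-key problem:

```python
import sqlite3

# Minimal stand-in schema: one column, as in the post's example.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE SOURCE_TABLE (col1 INTEGER)")
cur.execute("CREATE TABLE BACKUP_TABLE (col1 INTEGER)")
cur.executemany("INSERT INTO SOURCE_TABLE VALUES (?)",
                [(i,) for i in range(2500)])

BATCH = 1000
while True:
    # Step 1: copy the next batch (lowest col1 first) into the backup table.
    cur.execute("INSERT INTO BACKUP_TABLE "
                "SELECT col1 FROM SOURCE_TABLE ORDER BY col1 LIMIT ?",
                (BATCH,))
    if cur.rowcount == 0:
        break  # nothing left to archive
    # Step 2: delete the same rows, identified by rowid rather than col1,
    # so duplicate col1 values cannot make the two sets diverge.
    cur.execute("DELETE FROM SOURCE_TABLE WHERE rowid IN "
                "(SELECT rowid FROM SOURCE_TABLE ORDER BY col1 LIMIT ?)",
                (BATCH,))
    # Step 3: commit only after both statements succeed, so a failure
    # rolls back the current batch and the job stays re-startable.
    conn.commit()
```

After the loop, the backup table holds all 2500 rows and the source table is empty; a failure mid-batch rolls back to the last commit, satisfying constraint #02.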

This seems to work in the various test cases I could imagine and create. But the SQL (especially the DELETE) looks complicated and dumb - well, kind of !!

I'm thinking there should be a better way to do this. Maybe by using one of Oracle's analytic functions?

So, please suggest a better way of doing this. Most of the databases where this will be implemented are >= 9i.

Expecting an intelligent solution from the forum...
    svk

  2. #2
    Join Date
    Mar 2002
    Posts
    534
Why so complicated?

1) create backup_table as select * from source_table

2) truncate source_table

And, with the given information, I would say that anything else would be nonsense.

  3. #3
    Join Date
    Aug 2002
    Location
    Colorado Springs
    Posts
    5,253
Intelligent solution, eh?

Partitioning is a very efficient way of doing this, although it is not a cheap option. Is it possible?
    David Aldridge,
    "The Oracle Sponge"

    Senior Manager, Business Intelligence Development
    XM Satellite Radio
    Washington, DC

    Oracle ACE

  4. #4
    Join Date
    Mar 2004
    Location
    DC,USA
    Posts
    650
Svk, the best ways are the ones already listed above, in reverse order: posts #3, #2, #1.
    "What is past is PROLOGUE"

  5. #5
    Join Date
    May 2000
    Location
    ATLANTA, GA, USA
    Posts
    3,135
SVK's approach may not work correctly, since the source table does not have a PK.
Your approach should be:

    Code:
loop
  read 1000 rows from the source table and lock them (SELECT ... FOR UPDATE)
  exit when no rows are found
  insert them into the backup table
  delete the locked rows from the source table
  commit
end loop
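Tamil's loop can be sketched as runnable code. The version below uses Python with sqlite3 as a stand-in (sqlite has no SELECT ... FOR UPDATE row locks, so the "lock" step is only noted in a comment); in Oracle the read would lock the batch, and the delete would then target exactly the rows just read:

```python
import sqlite3

# Stand-in schema and data; names are taken from the thread's example.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE SOURCE_TABLE (col1 INTEGER)")
cur.execute("CREATE TABLE BACKUP_TABLE (col1 INTEGER)")
cur.executemany("INSERT INTO SOURCE_TABLE VALUES (?)",
                [(i,) for i in range(1500)])

BATCH = 1000
while True:
    # "read 1000 rows and lock them" -- in Oracle, SELECT ... FOR UPDATE;
    # here we just capture the rowids of the batch we read.
    rows = cur.execute("SELECT rowid, col1 FROM SOURCE_TABLE LIMIT ?",
                       (BATCH,)).fetchall()
    if not rows:
        break  # exit when no row is found
    cur.executemany("INSERT INTO BACKUP_TABLE VALUES (?)",
                    [(c,) for _, c in rows])
    # Delete exactly the rows just read, keyed by rowid.
    cur.executemany("DELETE FROM SOURCE_TABLE WHERE rowid = ?",
                    [(r,) for r, _ in rows])
    conn.commit()
```

Because each batch is identified by the rows actually read (not by re-running a rownum query), the inserted and deleted sets cannot drift apart even without a primary key.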


    Tamil

  6. #6
    Join Date
    Mar 2002
    Location
    Mesa, Arizona
    Posts
    1,204
    Tamil's got the ultimate solution, nail it down, then copy it.

    Just curious, by business rule, is there any chance of the data being updated while you're copying?

    Ken
    "I do not fear computers. I fear the lack of them." Isaac Asimov
Oracle Scripts DBAs need

  7. #7
    Join Date
    Nov 2000
    Location
    greenwich.ct.us
    Posts
    9,092
    Quote Originally Posted by KenEwald
    Just curious, by business rule, is there any chance of the data being updated while you're copying?
    Not if it's locked, as in the solution.
    Jeff Hunter

  8. #8
    Join Date
    Jul 2000
    Posts
    521
Thanks all. One thing I forgot to mention: the data archival needs to happen based on the age of records. There is going to be a timestamp column in the source tables. So insert-all-into-backup-and-truncate-source is not what is needed.

    Partitioning is not an option - at least immediately.

The records that need to be archived will not get updated while being archived. There will be new rows getting created in the source table, though.

Tamil / Ken: How do I delete just the locked rows? And, if I may know, why do you think the rownum-based solution will not work? Any technical reason for that?
    svk

  9. #9
    Join Date
    Mar 2004
    Location
    DC,USA
    Posts
    650
    Quote Originally Posted by svk
    Thanks all.
Tamil / Ken: How do I delete just the locked rows? And, if I may know, why do you think the rownum-based solution will not work? Any technical reason for that?
When you read the 1000 rows (based on your condition) to insert into the backup table, use the same keys to delete those records.

Rows may come back in a different order when using rownum if the table undergoes DML in between your archiving steps. Use a primary key column if available; otherwise the timestamp column would be the right thing to use in the WHERE condition.
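A sketch of the timestamp-driven variant (Python with sqlite3 as a stand-in; the `ts` column name and cutoff value are made up for illustration). One predicate drives both statements, so the archived set and the deleted set are identical even without a primary key - assuming, as svk says, that rows being archived are not updated and new rows arrive with newer timestamps:

```python
import sqlite3

# Illustrative schema: a timestamp column plus payload; names are made up.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE SOURCE_TABLE (ts TEXT, payload TEXT)")
cur.execute("CREATE TABLE BACKUP_TABLE (ts TEXT, payload TEXT)")
cur.executemany("INSERT INTO SOURCE_TABLE VALUES (?, ?)",
                [("2004-01-%02d" % (d + 1), "row%d" % d) for d in range(20)])

CUTOFF = "2004-01-11"  # archive rows strictly older than this (made-up value)

# The same WHERE clause selects the rows to archive and the rows to delete,
# so the two sets cannot diverge.  Assumes new rows arrive with ts >= CUTOFF.
cur.execute("INSERT INTO BACKUP_TABLE "
            "SELECT ts, payload FROM SOURCE_TABLE WHERE ts < ?", (CUTOFF,))
cur.execute("DELETE FROM SOURCE_TABLE WHERE ts < ?", (CUTOFF,))
conn.commit()
```

With the sample data, the ten rows dated before the cutoff end up in the backup table and the ten newer rows stay in the source table.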
    "What is past is PROLOGUE"

  10. #10
    Join Date
    Aug 2002
    Location
    Colorado Springs
    Posts
    5,253
The tricky part here does not seem to be the insert-records-to-archive, but the delete-from-original-table.

Seems to me that you could run a single direct-path (APPEND) insert into the archive table pretty efficiently for all records where timestamp < some date, possibly with NOLOGGING, although you'd want to run a backup afterwards. You could then run a simple loop deleting the rows from the original table 1000 at a time (although I expect you could go higher than that) until no rows are deleted.
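A rough sketch of that shape, again in Python with sqlite3 as a stand-in (the direct-path `/*+ APPEND */` hint and NOLOGGING are Oracle-specific and appear only in comments; the data is invented): one pass archives everything below the cutoff, then a loop deletes the same rows in batches until none remain:

```python
import sqlite3

# Stand-in data: 2500 "old" rows to archive and 100 "new" rows to keep.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE SOURCE_TABLE (ts TEXT)")
cur.execute("CREATE TABLE BACKUP_TABLE (ts TEXT)")
cur.executemany("INSERT INTO SOURCE_TABLE VALUES (?)",
                [("old",)] * 2500 + [("new",)] * 100)

# One pass copies everything to archive (in Oracle, this is where a
# direct-path INSERT /*+ APPEND */, possibly NOLOGGING, would go).
cur.execute("INSERT INTO BACKUP_TABLE "
            "SELECT ts FROM SOURCE_TABLE WHERE ts = 'old'")
conn.commit()

# Then delete the same rows in batches of 1000 until none remain.
while True:
    cur.execute("DELETE FROM SOURCE_TABLE WHERE rowid IN "
                "(SELECT rowid FROM SOURCE_TABLE WHERE ts = 'old' LIMIT 1000)")
    if cur.rowcount == 0:
        break
    conn.commit()
```

Each delete batch gets its own commit, so a failure part-way through the deletes leaves the archive intact and the loop simply resumes where it left off.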
    David Aldridge,
    "The Oracle Sponge"

    Senior Manager, Business Intelligence Development
    XM Satellite Radio
    Washington, DC

    Oracle ACE
