DBAsupport.com Forums

Thread: best way to delete large number of rows

  1. #11
    Join Date
    Dec 2002
    Location
    Bangalore ( India )
    Posts
    2,434
    Jmodic:

    That's an even better idea!

    Abhay.
    funky...

    "I Dont Want To Follow A Path, I would Rather Go Where There Is No Path And Leave A Trail."

    "Ego is the worst thing many have, try to overcome it & you will be the best, if not good, person on this earth"

  2. #12
    Join Date
    Nov 2002
    Location
    Geneva Switzerland
    Posts
    3,142
    Hey Jurij, that certainly is elegant. I've seen several solutions to this problem and yours must have the fewest key-strokes.

    However (sorry!) I'm not sure it's the quickest. If it's only being done once, perhaps that's no big deal . . . it always depends! But since we are deleting 1 million records out of 2 million, I suspect you'll do a Full Table Scan (which will include reading the space occupied by the deleted data) most times you go through the loop. I would try PL/SQL: a cursor to select the rows to delete, set up a counter, and IF MOD(counter, 50000) = 0 THEN COMMIT; END IF; not forgetting a COMMIT outside the loop (see the sketch below). This should read each block only once.
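
    A minimal sketch of that idea, assuming a table big_table and a date column created (both names are hypothetical, not from this thread):

    DECLARE
        CURSOR c_del IS
            SELECT ROWID rid
            FROM   big_table
            WHERE  created < ADD_MONTHS(SYSDATE, -12);  -- hypothetical predicate
        v_count PLS_INTEGER := 0;
    BEGIN
        FOR r IN c_del LOOP
            DELETE FROM big_table WHERE ROWID = r.rid;  -- delete by ROWID, no second lookup
            v_count := v_count + 1;
            IF MOD(v_count, 50000) = 0 THEN
                COMMIT;                                 -- commit every 50000 rows
            END IF;
        END LOOP;
        COMMIT;                                         -- the COMMIT outside the loop
    END;
    /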
    "The power of instruction is seldom of much efficacy except in those happy dispositions where it is almost superfluous" - Gibbon, quoted by R.P.Feynman

  3. #13
    Join Date
    Aug 2002
    Location
    Atlanta
    Posts
    1,187
    Rebuild your table; bulk deletes can kill performance.

    You have some good suggestions, but I highly recommend you rebuild afterwards to reset the high water mark.

    Here's a simple example of what can happen:

    REM ==================================================
    REM Step 1
    REM
    REM Turn timing on, create a sample table
    REM and insert some data
    REM ==================================================


    SQL> SET TIMING ON

    SQL> CREATE TABLE my_table
    2 AS SELECT * FROM ALL_OBJECTS;

    Table created.

    Elapsed: 00:00:19.02
    SQL> INSERT INTO my_table SELECT * FROM my_table;

    26895 rows created.

    Elapsed: 00:00:03.00
    SQL> /

    53790 rows created.

    Elapsed: 00:00:04.07
    SQL> /

    107580 rows created.

    Elapsed: 00:00:09.05
    SQL> /

    215160 rows created.

    Elapsed: 00:00:18.00
    SQL> COMMIT
    2 ;

    Commit complete.
    Elapsed: 00:00:00.00


    REM ==================================================
    REM Step 2
    REM
    REM See how long it takes to count all the rows
    REM ==================================================


    SQL> SELECT COUNT(*) FROM my_table;

    COUNT(*)
    ----------
    430320

    Elapsed: 00:00:12.07


    REM ==================================================
    REM Step 3
    REM
    REM Delete all the rows in the table.
    REM This does not reset the high water mark
    REM ==================================================


    SQL> DELETE FROM my_table;

    430320 rows deleted.

    Elapsed: 00:01:58.07


    REM ==================================================
    REM Step 4
    REM
    REM Now that there is no data, how long does it take
    REM to determine there are zero rows
    REM ==================================================

    SQL> SELECT COUNT(*) FROM my_table;

    COUNT(*)
    ----------
    0

    Elapsed: 00:00:04.00


    REM ==================================================
    REM Step 5
    REM
    REM Now truncate the table and count all the rows. This
    REM action resets the high water mark.
    REM ==================================================


    SQL> TRUNCATE TABLE my_table;

    Table truncated.

    Elapsed: 00:00:00.07

    SQL> SELECT COUNT(*) FROM my_table;

    COUNT(*)
    ----------
    0

    Elapsed: 00:00:00.00
    SQL> spool off
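
    A hedged sketch of one way to rebuild when you can't TRUNCATE because some rows must stay (the index name is hypothetical; MOVE leaves indexes unusable, so rebuild them as well):

    ALTER TABLE my_table MOVE;           -- rebuilds the segment, resets the high water mark
    ALTER INDEX my_table_idx REBUILD;    -- MOVE marks indexes UNUSABLE
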
    I'm stmontgo and I approve of this message

  4. #14
    Join Date
    Dec 2000
    Location
    Ljubljana, Slovenia
    Posts
    4,439
    Originally posted by DaPi
    However (sorry!) I'm not sure it's the quickest. If it's only being done once, perhaps that's no big deal . . . it always depends! But since we are deleting 1 million records out of 2 million, I suspect you'll do a Full Table Scan (which will include reading the space occupied by the deleted data) most times you go through the loop.
    Yes, I agree. If the delete is done based on a full table scan, then each loop iteration would perform a new FTS, rereading already deleted data blocks. This surely wouldn't be the fastest possible solution. If, on the other hand, the delete were performed based on index access, the performance would be the same as if you'd deleted all the rows in one go. But then again, it wouldn't be too clever to delete such a large portion of the rows based on an index range scan. BTW, this kind of batch delete inside a loop was not my suggestion; I merely "polished" abhaysk's proposed code (a hedged reconstruction follows below).
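
    A hedged reconstruction of the loop being discussed (the actual code is on the previous page of this thread; the table and predicate here are made up):

    BEGIN
        LOOP
            DELETE FROM big_table
            WHERE  created < ADD_MONTHS(SYSDATE, -12)
            AND    ROWNUM <= 50000;          -- delete at most one batch per iteration
            EXIT WHEN SQL%ROWCOUNT = 0;      -- nothing left to delete
            COMMIT;
        END LOOP;
        COMMIT;
    END;
    /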

    I would try PL/SQL: a cursor to select the rows to delete, set up a counter, and IF MOD(counter, 50000) = 0 THEN COMMIT; END IF; not forgetting a COMMIT outside the loop. This should read each block only once.
    Oh, don't try that! You'll get ORA-1555 sooner or later!
    Jurij Modic
    ASCII a stupid question, get a stupid ANSI
    24 hours in a day .... 24 beer in a case .... coincidence?

  5. #15
    Join Date
    Nov 2002
    Location
    Geneva Switzerland
    Posts
    3,142
    "ORA-1555" ah yes . . . . but I have had this work for me . . . . is that because it never needs anything from any RBS released by the commit?
    "The power of instruction is seldom of much efficacy except in those happy dispositions where it is almost superfluous" - Gibbon, quoted by R.P.Feynman

  6. #16
    Join Date
    Jun 2000
    Location
    Madrid, Spain
    Posts
    7,447
    Probably creating a new table with a direct-path insert is the fastest way, but of course only if you can afford the downtime (a sketch follows below).
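
    A hedged sketch of that approach (names are hypothetical; indexes, constraints and grants have to be re-created on the new table):

    CREATE TABLE big_table_new NOLOGGING    -- CTAS is a direct-path operation
    AS SELECT *
       FROM   big_table
       WHERE  created >= ADD_MONTHS(SYSDATE, -12);  -- the rows to keep

    DROP TABLE big_table;
    RENAME big_table_new TO big_table;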

    I once had to deal with a delete which took 2 weeks, heh: a 60-column table with 42 columns indexed and 0 downtime allowed. It threw 10 ORA-01555s during the delete process (controlled with an exception handler which restarted the process from the stopping point).

  7. #17
    Join Date
    Apr 2002
    Location
    Germany.Laudenbach
    Posts
    448
    Originally posted by pando
    Probably creating a new table with a direct-path insert is the fastest way, but of course only if you can afford the downtime.

    Use a materialized view with view logs instead (of the table); then you can do it without downtime (roughly as sketched below).
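
    A hedged sketch of the idea (names are hypothetical, and whether FAST refresh is actually possible depends on the query and the log definition):

    CREATE MATERIALIZED VIEW LOG ON big_table WITH ROWID;

    CREATE MATERIALIZED VIEW big_table_mv
        REFRESH FAST ON DEMAND WITH ROWID
        AS SELECT *
           FROM   big_table
           WHERE  created >= ADD_MONTHS(SYSDATE, -12);  -- the rows to keep
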
    Orca

  8. #18
    Join Date
    Dec 2000
    Location
    Ljubljana, Slovenia
    Posts
    4,439
    Originally posted by DaPi
    "ORA-1555" ah yes . . . . but I have had this work for me . . . . is that because it never needs anything from any RBS released by the commit?
    Well, having given it a second thought, I guess the probability of getting ORA-1555 with your method is actually very close to 0, provided that a full table scan is executed by your cursor (if the rows are accessed by using an index, then you are bound to get "snapshot too old", that's for sure)! Although you will be performing a "fetch across commit", you won't get ORA-1555, because each block will be read only once, so there will be no need for rollback data. The only exceptions to this rule would be the blocks in which the reading/deletion of the rows is interrupted by the COMMITs. When the deletion of the rows continues after the commit, the block's read-consistent image will be required from the rollback segment. But there will be no problem obtaining it, because that block was only recently written to the rollback segment, so there will probably not be enough time for it to be overwritten, hence very slight chances of getting ORA-1555.
    Jurij Modic
    ASCII a stupid question, get a stupid ANSI
    24 hours in a day .... 24 beer in a case .... coincidence?

  9. #19
    Join Date
    Jun 2000
    Location
    Madrid, Spain
    Posts
    7,447
    Originally posted by Orca777
    Use materialized View with View-logs instead (of the table) then you can do it! without downtime.
    Orca
    It can be done, but there is still some downtime, though.

  10. #20
    Join Date
    Apr 2001
    Posts
    257
    If I use PL/SQL to commit for every n rows deleted, is there a significant difference between n=1 and n=1000?
