Delete with loop and commit
DBAsupport.com Forums - Powered by vBulletin
Results 1 to 9 of 9

Thread: Delete with loop and commit

  1. #1
    Join Date
    May 2003
    Posts
    46

    Delete with loop and commit

    Any tuning suggestion for this query? We run this every day to delete around 2 million rows(it is 10G)

    BEGIN
       LOOP
          DELETE FROM table_name
           WHERE ntimestamp# < TRUNC(SYSDATE - 3) + 6/24
             AND ROWNUM <= 20000;
          EXIT WHEN SQL%ROWCOUNT = 0;
          COMMIT;
       END LOOP;
       COMMIT;
    END;
    /

  2. #2
    Join Date
    May 2000
    Location
    ATLANTA, GA, USA
    Posts
    3,135
    Quote Originally Posted by Ramg
    Any tuning suggestion for this query? We run this every day to delete around 2 million rows(it is 10G)
    How many rows are in the table?

  3. #3
    Join Date
    Nov 2006
    Location
    Sofia
    Posts
    630
    Partitioning on ntimestamp# and dropping a partition instead of deleting is the right one
    Also setting the parameter commit_write to batch,nowait AT SESSION LEVEL could help
    PLS NOTE:
    That can be dangerous. Read carefully about this parameter before using it

    Regards
    Boris
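
    A sketch of what Boris is suggesting (table and partition names here are illustrative; the thread only gives the column ntimestamp#):

    Code:
    -- One range partition per day; purging old data becomes a metadata operation.
    CREATE TABLE table_name (
       ntimestamp#  DATE,
       payload      VARCHAR2(100)
    )
    PARTITION BY RANGE (ntimestamp#) (
       PARTITION p20070101 VALUES LESS THAN (TO_DATE('2007-01-02','YYYY-MM-DD')),
       PARTITION p20070102 VALUES LESS THAN (TO_DATE('2007-01-03','YYYY-MM-DD'))
    );

    -- Dropping the oldest day replaces the multi-million-row DELETE:
    ALTER TABLE table_name DROP PARTITION p20070101;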

  4. #4
    Join Date
    Nov 2000
    Location
    Pittsburgh, PA
    Posts
    3,999
    This question has been answered before.

    Try looking at http://www.dbasupport.com/forums/sho...ete+collection

    Quote Originally Posted by gandolf989
    You can also try this:

    Code:
    SET SERVEROUTPUT ON
    
    BEGIN
       DBMS_OUTPUT.ENABLE(1000000);
       LOOP
          DELETE FROM table_name
           WHERE date_col = TO_DATE('01-JAN-2002','DD-MON-YYYY') -- DATE is a reserved word; use the real column name
             AND rownum < 10001;
          DBMS_OUTPUT.PUT_LINE( SQL%ROWCOUNT || ' rows deleted!');
          EXIT WHEN SQL%ROWCOUNT = 0;
          COMMIT;
       END LOOP;
       COMMIT;
    END;
    /
    It's not likely to be faster than a straight delete,
    but it should work without getting the snapshot too old error message.
    Of course my solution was endorsed by Marist89,

    Quote Originally Posted by marist89
    This is the second best solution I've seen.
    this space intentionally left blank

  5. #5
    Join Date
    May 2003
    Posts
    46
    Quote Originally Posted by tamilselvan
    How many rows are in the table?

    It is 8 to 10 million(keeps adding every day and deleting old data)

  6. #6
    Join Date
    Nov 2000
    Location
    Pittsburgh, PA
    Posts
    3,999
    Quote Originally Posted by Ramg
    It is 8 to 10 million(keeps adding every day and deleting old data)
    If you are adding 8-10 million rows per day and you query by the day added among other things then partitioning by day would be best as you can create a new partition every day and drop the oldest partition everyday. But if you don't query by date, then partitioning by date may not help you. But you can always try partitioning a few ways and seeing which one works best.
    this space intentionally left blank
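
    Roughly, the daily cycle described above might look like this (partition names are assumed; adjust to your own naming scheme):

    Code:
    -- Each day: create tomorrow's partition, then drop the oldest one.
    ALTER TABLE table_name ADD PARTITION p20070105
       VALUES LESS THAN (TO_DATE('2007-01-06','YYYY-MM-DD'));
    ALTER TABLE table_name DROP PARTITION p20070101;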

  7. #7
    Join Date
    May 2000
    Location
    ATLANTA, GA, USA
    Posts
    3,135
    I would try parallel DML if I wanted to delete 2 M rows out of 10 M rows.
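
    For reference, parallel DML has to be enabled in the session before the statement runs; a sketch (a degree of 4 is an arbitrary choice here):

    Code:
    ALTER SESSION ENABLE PARALLEL DML;

    DELETE /*+ PARALLEL(t, 4) */ FROM table_name t
     WHERE ntimestamp# < TRUNC(SYSDATE - 3) + 6/24;

    COMMIT;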

  8. #8
    Join Date
    Mar 2007
    Location
    Ft. Lauderdale, FL
    Posts
    3,554
    Boris has my vote here.
    I would certainly partition the table by date, if you are on 10g Oracle will add partitions as needed and purging will take a split second.
    Pablo (Paul) Berzukov

    Author of Understanding Database Administration available at amazon and other bookstores.

    Disclaimer: Advice is provided to the best of my knowledge but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions on a test non-production environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.

  9. #9
    Join Date
    Nov 2000
    Location
    greenwich.ct.us
    Posts
    9,092
    Quote Originally Posted by gandolf989
    This question has been answered before.

    Try looking at http://www.dbasupport.com/forums/sho...ete+collection



    Of course my solution was endorsed by Marist89,
    behind mine, of course...
    Quote Originally Posted by marist89
    You need to externalize the loop from the database. For example, if you need to delete 5 years of data, you can delete day by day with something like:

    Code:
    declare
       start_dt DATE;
    begin
       start_dt := to_date('01/01/2000','mm/dd/yyyy');

       while start_dt < trunc(sysdate-365) loop
          delete from yourTab where myDate = start_dt;
          commit;
          start_dt := start_dt + 1; -- advance a day, otherwise the loop never ends
       end loop;
    end;
    /
    Jeff Hunter
    marist89@yahoo.com
    http://marist89.blogspot.com/
    Get Firefox!
    "I pledge to stop eating sharks fin soup and will not do so under any circumstances."
