DBAsupport.com Forums - Powered by vBulletin

Thread: What factors need to be taken into account for a bulk delete?

  1. #1
    Join Date
    Sep 2000
    Posts
    103
    First of all, I'd like to thank everyone for helping me in this site.

    I have to do a set of bulk deletes, so could someone tell me what parameters or resources I need to take into account? I am told that this normally takes a lot of time. My database is in NOARCHIVELOG mode.
    Do I need to create a large rollback segment?

    pst.

  2. #2
    Join Date
    Jun 2000
    Location
    Chennai/India
    Posts
    90
    Could you please be more specific about what you mean by "bulk deletes"?
    Rgds
    Ganesh .R
    A Winner never Quits, A Quitter never Wins

  3. #3
    Join Date
    Sep 2000
    Posts
    103
    Hi!

    Our database contains information about a couple of websites we host: all the details, like visits and so on. Right now I'm trying to delete all the info pertaining to a particular site.




  4. #4
    Join Date
    Apr 2000
    Location
    Baltimore, MD
    Posts
    759
    Suggestions:
    1. Commit after every, say, 1,000 rows during the delete process. This will keep you from blowing out your rollback segments.
    2. Perform this operation, preferably, during off-peak hours.
    3. Since the DB is running in NOARCHIVELOG mode, take a cold backup before you start deleting.

    If the number of rows to be deleted is more than half the rows in the entire table, you can create an alternate table instead: load it with the data you need to keep from the source table, truncate and/or drop the source table, and rename the alternate table to the source table's name.
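
    The keep-and-rename approach above might be sketched like this (SITE_VISITS and SITE_ID are hypothetical names, not from the original post):

    ```sql
    -- Keep only the rows you want, then swap the tables.
    -- SITE_VISITS / SITE_ID are made-up names for illustration.
    CREATE TABLE site_visits_keep AS
      SELECT * FROM site_visits WHERE site_id <> 42;

    DROP TABLE site_visits;
    RENAME site_visits_keep TO site_visits;
    -- Remember to recreate indexes, constraints, and grants on the new table.
    ```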

  5. #5
    Join Date
    Oct 2000
    Location
    Saskatoon, SK, Canada
    Posts
    3,925
    Choose the NOLOGGING option; it might speed things up.

    Thanx
    Sam



    Life is a journey, not a destination!


  6. #6
    Join Date
    Oct 2000
    Posts
    80
    I would definitely consider assigning large deletes to a large rollback segment.
    Re: Halo's suggestion that you commit every x rows, I'm not sure how to efficiently do that, and there are reasons why you might want to ensure that the entire delete was done in one swipe. Autocommit does not work here. Autocommit commits every x number of statements, not every x number of records in a single DML operation.
    If you had a table of two million rows, you would need a sizeable rollback segment to delete the whole table in one shot. Keep the rollback segment offline except when you need it, and explicitly assign the delete transaction to it. The proper sizing depends on how big a delete you're doing, but don't be too shy in sizing it (say, INITIAL 1M, NEXT 1M, MAXEXTENTS 200), because if the operation does blow out your rollback segment, the transaction will take an extra long time: the entire delete will have to be rolled back. The extra wait can be frustrating.
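    Assigning the delete to a dedicated rollback segment might look roughly like this (BIG_RBS, SITE_VISITS, and SITE_ID are made-up names); note that SET TRANSACTION must be the first statement of the transaction:

    ```sql
    -- Bring the large rollback segment online only when needed.
    ALTER ROLLBACK SEGMENT big_rbs ONLINE;

    -- Must be the first statement of the transaction.
    SET TRANSACTION USE ROLLBACK SEGMENT big_rbs;
    DELETE FROM site_visits WHERE site_id = 42;
    COMMIT;

    -- Take it back offline afterwards.
    ALTER ROLLBACK SEGMENT big_rbs OFFLINE;
    ```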
    Re: the NOLOGGING suggestion. I'm not aware of that option being available for DELETE operations, but if I'm wrong, somebody please pipe up and let me know.

    Johnny D.

  7. #7
    Join Date
    Jun 2000
    Posts
    417
    Committing every 1,000 or some other arbitrary number of rows is easy with a simple PL/SQL procedure: delete in a loop, keep a counter, and commit after every so many records.
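
    A minimal sketch of that loop (table and column names are hypothetical), deleting up to 1,000 rows per pass and committing between passes:

    ```sql
    BEGIN
      LOOP
        -- ROWNUM caps each pass at 1,000 rows.
        DELETE FROM site_visits
         WHERE site_id = 42
           AND ROWNUM <= 1000;
        EXIT WHEN SQL%ROWCOUNT = 0;  -- nothing left to delete
        COMMIT;                      -- release rollback space each pass
      END LOOP;
      COMMIT;
    END;
    /
    ```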

    If you can, take a cold backup beforehand; doing the deletes with a small PL/SQL procedure is probably your easiest option. Otherwise you would have to do them all in one shot (or in smaller shots manually), watch out for your RBS, and/or create a larger one to be used by the delete, as was suggested.

    I also didn't think NOLOGGING was an option, but I was too lazy to check the docs this time :)

  8. #8
    Join Date
    Dec 2000
    Posts
    9
    (I'm a new DBA, so please feel free to correct or comment)

    Another point: depending on the number of rows in the table and the number you're deleting, an index may help or hurt the delete. The optimizer might take care of this, but you should check the explain plan to verify. Assuming you already have an index on the selection criteria (e.g. client ID or name) and you're running in cost-based mode, if your selection criteria return more than 5-7% of the rows in the table, and for a bulk delete I assume they do, the index may actually slow the delete.
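
    Checking the plan before running the real delete could look like this (SITE_VISITS and SITE_ID are hypothetical; PLAN_TABLE must already exist, e.g. created via Oracle's utlxplan.sql script):

    ```sql
    -- See whether the optimizer chooses an index scan or a full table scan.
    EXPLAIN PLAN FOR
      DELETE FROM site_visits WHERE site_id = 42;

    SELECT operation, options, object_name
      FROM plan_table;
    ```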
