deleting rows from table

  1. #1
    Join Date
    Jun 2001
    Posts
    150
What is the best way to delete a large number of rows from a table that has 71 million rows in it? Would it be a good option to drop the indexes before deleting the data, or will the indexes help the delete?
    I am thinking of creating another table with just the rows to keep, then dropping the old table and its indexes and creating new indexes on the new table.

    Any ideas!!
    UKDBA

  2. #2
    Join Date
    Aug 2001
    Location
    chennai,bangalore
    Posts
    840
When deleting data the indexes remain, so there is no point in dropping the indexes and then deleting.
    After you delete the records from the table, just rebuild the indexes; then it should be OK.

    cheers
    anandkl

  3. #3
    Join Date
    May 2001
    Location
    San Francisco, California
    Posts
    510
I would drop the indexes first and delete rows in batches, for example a million at a time. Find a column that gives you a range (for example, order_date between .. and ..) and delete rows for that range so that you do not run into rollback segment problems. Commit after each delete operation.
    Remember the Golden Rule - He who has the gold makes the rules!
    ===================
    Kris109
    Ph.D., OCP 8i, 9i, 10g, 11g DBA
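The batched-delete approach above can be sketched as a PL/SQL loop. This is only a sketch with assumed placeholder names (big_table, order_date); adapt the predicate and batch size to your own schema.

```sql
-- Sketch only: big_table and order_date are placeholder names.
-- Deletes at most one million rows per pass and commits between
-- passes so a single transaction never exhausts rollback space.
BEGIN
  LOOP
    DELETE FROM big_table
     WHERE order_date < TO_DATE('2001-01-01', 'YYYY-MM-DD')
       AND ROWNUM <= 1000000;
    EXIT WHEN SQL%ROWCOUNT = 0;   -- nothing left in the range
    COMMIT;                       -- release rollback after each batch
  END LOOP;
  COMMIT;
END;
/
```

Using ROWNUM to cap each pass avoids having to enumerate date ranges by hand, at the cost of rescanning the predicate each time.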

  4. #4
    Join Date
    Apr 2001
    Location
    Czechia
    Posts
    712

    An idea :-))

    * create table new_table nologging as select * from old_table where your_criteria;
    * create index ... on new_table(... columns ...) nologging;
    * alter table new_table logging;
    * alter index ... logging;
    * !! BACKUP DATABASE !! (nologging changes are not recoverable from redo)
    * drop table old_table;
    * rename new_table to old_table;

Use of NOLOGGING reduces the time consumed by moving large amounts of data, because it generates less redo.
    It's a bit involved but definitely very fast. If you have constraints referencing the table it will be even more involved.

    I'd recommend trying the technique on a test table first ...

    HTH,
    Ales
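Spelled out as concrete statements, the steps above might look like this. Assumed placeholder names throughout: old_table, new_table, new_table_ix1, col1, and your_criteria are stand-ins for your own objects.

```sql
-- Placeholder names: old_table, new_table, new_table_ix1, col1, your_criteria.
-- Copy only the rows you want to keep, with minimal redo.
CREATE TABLE new_table NOLOGGING
  AS SELECT * FROM old_table WHERE your_criteria;

CREATE INDEX new_table_ix1 ON new_table (col1) NOLOGGING;

-- Switch back to normal logging for future DML.
ALTER TABLE new_table LOGGING;
ALTER INDEX new_table_ix1 LOGGING;

-- !! Take a backup here: NOLOGGING work cannot be recovered from redo !!

DROP TABLE old_table;
RENAME new_table TO old_table;
```

Grants, constraints, and triggers on the old table are not carried over by CREATE TABLE ... AS SELECT, so they have to be re-created by hand afterwards.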

  5. #5
    Join Date
    May 2001
    Location
    San Francisco, California
    Posts
    510
    I would also suggest disabling all key constraints and database triggers. Hitting a foreign key constraint violation message when you are trying to delete a million rows is not fun.

    Remember the Golden Rule - He who has the gold makes the rules!
    ===================
    Kris109
    Ph.D., OCP 8i, 9i, 10g, 11g DBA
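Disabling constraints and triggers around the bulk delete might look like the sketch below. The object names (child_table, fk_constraint, audit_trg) are hypothetical; list your own dependent constraints and triggers from the data dictionary first.

```sql
-- Hypothetical names: child_table, fk_constraint, audit_trg.
-- Disable referencing constraints and triggers before the bulk delete.
ALTER TABLE child_table DISABLE CONSTRAINT fk_constraint;
ALTER TRIGGER audit_trg DISABLE;

-- ... perform the bulk delete here ...

-- Re-enable afterwards; enabling the constraint revalidates existing rows.
ALTER TRIGGER audit_trg ENABLE;
ALTER TABLE child_table ENABLE CONSTRAINT fk_constraint;
```

Re-enabling the foreign key will fail if the delete orphaned any child rows, so clean those up before this step.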

  6. #6
    Join Date
    Jan 2001
    Posts
    3,131
You may want to export that table as a precaution before you start "experimenting".
    When in doubt, TRUNCATE!

    MH
    I remember when this place was cool.

  7. #7
    Join Date
    Sep 2001
    Location
    NJ, USA
    Posts
    1,287
Ales describes the best way to solve this problem.
    I have done it many times with good performance and good results.

    Try it.

  8. #8
    Join Date
    May 2000
    Location
    ATLANTA, GA, USA
    Posts
    3,135
    UKDBA,

How many CPUs does the box have?
    Does the tablespace in which the table is created have many data files? If so, how many?

    What percentage of the rows is to be deleted?

    Does the OS support async I/O?

    How many DB writers are configured?

    USADBA


  9. #9
    Join Date
    Jun 2000
    Location
    Madrid, Spain
    Posts
    7,447
Delete the rows, then do ALTER TABLE xxx MOVE (into the same tablespace); this will rebuild your table. Then rebuild your indexes.

    If you have primary keys, do

    alter table xxx disable constraint yyy (check the syntax for this, I don't remember it exactly)

    and drop the other indexes if you want to speed up your delete process.

    If you have multiple CPUs, enable parallel DML.

    ALTER TABLE ... MOVE is handy; you don't have to worry about constraints.
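A sketch of the sequence above, with placeholder names (xxx, xxx_ix1, your_criteria) and an assumed parallel degree of 4. Note that whether a DELETE can actually run in parallel depends on your Oracle version and whether the table is partitioned, so treat the hint as optional.

```sql
-- Placeholder names: xxx (table), xxx_ix1 (index), your_criteria (predicate).
ALTER SESSION ENABLE PARALLEL DML;   -- only useful with multiple CPUs

DELETE /*+ PARALLEL(xxx, 4) */ FROM xxx
 WHERE your_criteria;                -- replace with your delete condition
COMMIT;

ALTER TABLE xxx MOVE;                -- rebuilds the table, reclaiming space
ALTER INDEX xxx_ix1 REBUILD;         -- MOVE marks indexes UNUSABLE, so rebuild each one
```

The MOVE compacts the table below its old high-water mark, which is what a plain DELETE never does.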

  10. #10
    Join Date
    Feb 2001
    Posts
    389
Basic question: with 71 million rows, why don't you have a partitioned table?
    If the index space will be reused by further inserts/updates and the delete involves a considerable
    number of rows, then do not drop the index; otherwise drop the index, then delete, committing
    every so many records.
    After the delete, recreate the index.
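With a range-partitioned table (an assumed layout; big_table, order_date, and the partition names below are placeholders), removing old rows becomes a near-instant DDL operation instead of a huge DELETE. Partitioning requires the Enterprise Edition partitioning option.

```sql
-- Hypothetical layout: big_table range-partitioned by order_date.
CREATE TABLE big_table (
  order_id   NUMBER,
  order_date DATE
)
PARTITION BY RANGE (order_date) (
  PARTITION p2000 VALUES LESS THAN (TO_DATE('2001-01-01', 'YYYY-MM-DD')),
  PARTITION p2001 VALUES LESS THAN (TO_DATE('2002-01-01', 'YYYY-MM-DD'))
);

-- Dropping a year's worth of rows is a quick dictionary operation:
ALTER TABLE big_table DROP PARTITION p2000;
```

Global indexes are invalidated by DROP PARTITION, so prefer local indexes if you plan to age out data this way.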
