-
What is the best way of deleting a large number of rows from a table that has 71 million rows in it? Would it be a good option to drop the indexes before deleting the data, or will the indexes help in deleting the data?
I am also thinking of creating another table with just the rows I want to keep, then dropping the old table and its indexes, and then creating new indexes on the new table.
Any ideas!!
UKDBA
-
When deleting data the indexes will remain, so there is no point in dropping an index and then deleting.
After you delete the records from the table, just rebuild the index; then it should be OK.
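A minimal sketch of the rebuild step (the index name is a placeholder, and the NOLOGGING/PARALLEL options are optional speed-ups that depend on your environment):

```sql
-- Rebuild an index after a large delete; NOLOGGING and PARALLEL
-- reduce redo and use multiple CPUs if your setup allows it.
ALTER INDEX orders_date_idx REBUILD NOLOGGING PARALLEL 4;

-- Switch logging back on afterwards so the index stays recoverable.
ALTER INDEX orders_date_idx LOGGING NOPARALLEL;
```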
Cheers,
anandkl
-
I would drop the indexes first and delete the rows in batches, for example a million at a time. Find a column that gives you a range (for example, order_date between ... and ...) and delete the rows for one range at a time so that you do not run into rollback segment problems. Commit after each delete operation.
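A sketch of that batched approach in PL/SQL (the table name, date cutoff, and batch size are hypothetical; tune the batch size to your rollback segment sizing):

```sql
BEGIN
  LOOP
    -- Delete one batch of old rows; ROWNUM caps the batch size.
    DELETE FROM orders
     WHERE order_date < DATE '2001-01-01'
       AND ROWNUM <= 1000000;

    EXIT WHEN SQL%ROWCOUNT = 0;  -- nothing left to delete
    COMMIT;                      -- release undo after each batch
  END LOOP;
  COMMIT;
END;
/
```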
Remember the Golden Rule - He who has the gold makes the rules!
===================
Kris109
Ph.D., OCP 8i, 9i, 10g, 11g DBA
-
An idea :-))
* create table new_table nologging as select * from old_table where your_criteria;
* create the indexes on new_table, also nologging;
* alter table new_table logging;
* alter each index logging;
* !! BACK UP THE DATABASE !! (nologging operations cannot be recovered from the redo log)
* drop old_table;
* rename new_table to old_table;
Use of NOLOGGING reduces the time consumed by moving large amounts of data because it produces less redo.
It's a bit difficult but definitely very fast. If you have constraints referencing the table it will be even more difficult.
I'd recommend trying the technique on a test table first ...
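The steps above might look like this in SQL (the table name, keep-criteria, and index columns are placeholders):

```sql
-- 1. Copy only the rows you want to keep, with minimal redo.
CREATE TABLE orders_new NOLOGGING AS
  SELECT * FROM orders
   WHERE order_date >= DATE '2001-01-01';

-- 2. Build the indexes on the new table, also with minimal redo.
CREATE INDEX orders_new_date_idx ON orders_new (order_date) NOLOGGING;

-- 3. Switch logging back on so future changes are recoverable.
ALTER TABLE orders_new LOGGING;
ALTER INDEX orders_new_date_idx LOGGING;

-- 4. Take a backup here: NOLOGGING work is not in the redo log.

-- 5. Swap the tables.
DROP TABLE orders;
RENAME orders_new TO orders;
```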
HTH,
Ales
-
I would also suggest disabling all key constraints and database triggers. Hitting a foreign key constraint violation message when you are trying to delete a million rows is not fun.
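For example, assuming a child table with a foreign key back to the big table (the constraint and trigger names here are hypothetical):

```sql
-- Disable the referencing constraint and any row triggers first ...
ALTER TABLE order_items DISABLE CONSTRAINT fk_items_order;
ALTER TRIGGER orders_audit_trg DISABLE;

-- ... run the big delete here ...

-- ... then re-enable them once the delete is done.
ALTER TABLE order_items ENABLE CONSTRAINT fk_items_order;
ALTER TRIGGER orders_audit_trg ENABLE;
```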
[Edited by kris109 on 02-20-2002 at 11:50 AM]
-
You may want to export that table just as a precaution before you start "experimenting".
When in doubt TRUNCATE!
MH
I remember when this place was cool.
-
Ales described the best way to solve this problem.
I have done it many times with good performance and good results.
Try it.
-
UKDBA,
How many CPUs does the box have?
Does the tablespace in which the table is created have many data files? If so, how many?
What is % of rows to be deleted?
Does the OS support Async I/O?
How many db_writers are configured?
USADBA
[Edited by tamilselvan on 02-20-2002 at 02:33 PM]
-
Delete the rows, then do ALTER TABLE xxx MOVE to the same tablespace; this will rebuild your table. Then rebuild your indexes.
If you have primary keys, do
alter table xxx disable constraint yyy validate (check the syntax for this, I don't remember it exactly)
and drop the other indexes if you want to speed up your delete process.
If you have multiple CPUs, enable parallel DML.
ALTER TABLE ... MOVE is handy; you don't have to worry about constraints.
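A sketch of that sequence (table and index names, tablespace, and degree of parallelism are all placeholders):

```sql
-- Enable parallel DML for this session if you have multiple CPUs.
ALTER SESSION ENABLE PARALLEL DML;

-- Delete the unwanted rows in parallel.
DELETE /*+ PARALLEL(orders, 4) */ FROM orders
 WHERE order_date < DATE '2001-01-01';
COMMIT;

-- Rebuild the table in place to reclaim space below the high-water mark.
ALTER TABLE orders MOVE TABLESPACE users;

-- MOVE leaves the indexes unusable, so rebuild them.
ALTER INDEX orders_date_idx REBUILD;
```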
-
Basic question: for 71 million rows, why don't you have a partitioned table?
If the index space will be reused by further inserts/updates and the delete involves a considerable number of rows, then do not drop the index; otherwise drop the index, then delete, committing every N records.
After the delete, recreate the index.
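With a range-partitioned table, old data can be removed by dropping a whole partition instead of deleting rows, which is nearly instant. A hypothetical layout:

```sql
-- Hypothetical table partitioned by range on the order date.
CREATE TABLE orders (
  order_id   NUMBER,
  order_date DATE
)
PARTITION BY RANGE (order_date) (
  PARTITION p2000 VALUES LESS THAN (DATE '2001-01-01'),
  PARTITION p2001 VALUES LESS THAN (DATE '2002-01-01')
);

-- Dropping one partition replaces millions of individual DELETEs.
ALTER TABLE orders DROP PARTITION p2000;
```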