-
Delete with loop and commit
Any tuning suggestion for this query? We run this every day to delete around 2 million rows (it is 10G).
BEGIN
  LOOP
    DELETE FROM table_name
     WHERE ntimestamp# < TRUNC(SYSDATE - 3) + 6/24
       AND ROWNUM <= 20000;
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
  COMMIT;
END;
/
-
Any tuning suggestion for this query? We run this every day to delete around 2 million rows(it is 10G)
How many rows are in the table?
-
Partitioning on ntimestamp# and dropping a partition instead of deleting is the right approach.
Also, setting the parameter COMMIT_WRITE to BATCH,NOWAIT AT SESSION LEVEL could help.
PLS NOTE:
That can be dangerous. Read carefully about this parameter before using it.
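For reference, the session-level setting described here looks like the following sketch; weigh the durability trade-off before using it:
```sql
-- BATCH,NOWAIT makes the frequent loop commits cheaper, but a commit
-- acknowledged this way can be lost if the instance fails before the
-- redo reaches disk -- hence the warning above.
ALTER SESSION SET COMMIT_WRITE = 'BATCH,NOWAIT';
```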
Regards
Boris
-
This question has been answered before.
Try looking at http://www.dbasupport.com/forums/sho...ete+collection
Originally Posted by gandolf989
You can also try this:
Code:
SET SERVEROUTPUT ON
BEGIN
DBMS_OUTPUT.ENABLE(1000000);
LOOP
DELETE table_name
WHERE date_col = TO_DATE('01-JAN-2002','DD-MON-YYYY')  -- DATE is a reserved word; use the actual column name
AND rownum < 10001;
DBMS_OUTPUT.PUT_LINE( SQL%ROWCOUNT || ' rows deleted!');
EXIT WHEN SQL%ROWCOUNT = 0;
COMMIT;
END LOOP;
COMMIT;
END;
/
It's not likely to be faster than a straight delete, but it should work without hitting the "snapshot too old" (ORA-01555) error.
Of course my solution was endorsed by Marist89,
Originally Posted by marist89
This is the second best solution I've seen.
-
Originally Posted by tamilselvan
How many rows are in the table?
It is 8 to 10 million (it keeps adding every day and deleting old data).
-
Originally Posted by Ramg
It is 8 to 10 million (it keeps adding every day and deleting old data).
If you are adding 8-10 million rows per day and you query by the day added, among other things, then partitioning by day would be best: you can create a new partition every day and drop the oldest partition every day. If you don't query by date, then partitioning by date may not help you. You can always try partitioning a few ways and see which one works best.
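A sketch of the drop-partition purge, reusing the table and column names from the first post (the partition names and payload column are illustrative; on 10g the daily partitions must be pre-created, since interval partitioning only automates this from 11g):
```sql
-- Range-partition by day so old data can be dropped instead of deleted.
CREATE TABLE table_name (
  ntimestamp#  DATE,
  payload      VARCHAR2(100)
)
PARTITION BY RANGE (ntimestamp#) (
  PARTITION p20070101 VALUES LESS THAN (TO_DATE('2007-01-02','YYYY-MM-DD')),
  PARTITION p20070102 VALUES LESS THAN (TO_DATE('2007-01-03','YYYY-MM-DD'))
  -- ... one partition per day, created ahead of time
);

-- Purging a day is then a near-instant DDL operation:
ALTER TABLE table_name DROP PARTITION p20070101;
```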
-
I would try parallel DML if I wanted to delete 2 M rows out of 10 M rows.
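Parallel DML has to be enabled explicitly in the session before the delete; a sketch, with an illustrative degree of parallelism:
```sql
ALTER SESSION ENABLE PARALLEL DML;

DELETE /*+ PARALLEL(t, 4) */
  FROM table_name t
 WHERE ntimestamp# < TRUNC(SYSDATE - 3) + 6/24;

-- A session that has run parallel DML must commit (or roll back)
-- before it can read the modified table again.
COMMIT;
```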
-
Boris has my vote here.
I would certainly partition the table by date; from 11g Oracle can even add partitions as needed (interval partitioning), and purging will take a split second.
Pablo (Paul) Berzukov
Author of Understanding Database Administration available at amazon and other bookstores.
Disclaimer: Advice is provided to the best of my knowledge but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions on a test, non-production environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.
-
Originally Posted by gandolf989
behind mine, of course...
Originally Posted by marist89
You need to externalize the loop from the database. For example, if you need to delete 5 years of data, you can delete day by day with something like:
Code:
declare
  start_dt DATE;
begin
  start_dt := to_date('01/01/2000','mm/dd/yyyy');
  while start_dt < trunc(sysdate-365) loop
    delete from yourTab where myDate = start_dt;
    commit;
    start_dt := start_dt + 1;  -- advance a day, or the loop never ends
  end loop;
end;
/
Jeff Hunter