-
Better yet -- Get rid of the select!
Try just a delete:
DELETE FROM table_name WHERE DATE = TO_DATE('01-JAN-2002', 'DD-MON-YYYY');
commit;
However, performing a DELETE DOES NOT RECLAIM space in the tablespace. The freed blocks can be reused by new inserts into the same table, but the space is not released for use by other tables in the tablespace.
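On later releases that support segment shrink (10g and up, with the table in an ASSM tablespace), the space left behind by a big delete can be reclaimed in place. A hedged sketch, with table_name as a placeholder:

```sql
-- Assumes Oracle 10g+ and an ASSM-managed tablespace; table_name is a placeholder.
ALTER TABLE table_name ENABLE ROW MOVEMENT;
ALTER TABLE table_name SHRINK SPACE;         -- compacts rows and lowers the high-water mark
ALTER TABLE table_name DISABLE ROW MOVEMENT;
```

Note that SHRINK SPACE physically moves rows, so rowid-based applications need checking first.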
---------------------------------------------------------------------------------
How many rows are in the table?
OR just do the CTAS approach:
create table table_copy as select *
from your_table
where date > to_date('01-jan-2002', 'dd-mon-yyyy');
( this will reclaim the space! )
truncate table your_Table;
insert into your_table (select * from table_copy);
Drop table table_copy;
******** WARNING ****
DO NOT SCRIPT THIS. Run it manually.
Take an export of the table prior to running the commands.
*********************
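Taking the export first can look something like this. A sketch only; the credentials and file names are placeholder assumptions:

```
exp userid=scott/tiger tables=your_table file=your_table.dmp log=your_table_exp.log
```

Keep the dump file until you have verified the row counts after the rebuild.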
-
Can anybody reply to me?
How about partitioning the table if the data is huge, and then dropping the partition which contains the old data?
-
Originally Posted by tabreaz
Can anybody reply to me?
How about partitioning the table if the data is huge, and then dropping the partition which contains the old data?
1. That is a design change.
2. There may not be enough space to accomplish the partitioning without first clearing the table.
3. It is a good idea, but it requires thorough application testing (it may be a vendor table, etc.).
-
You can also try this:
Code:
SET SERVEROUTPUT ON
BEGIN
DBMS_OUTPUT.ENABLE(1000000);
LOOP
DELETE table_name
WHERE DATE = TO_DATE('01-JAN-2002', 'DD-MON-YYYY')
AND rownum < 10001;
DBMS_OUTPUT.PUT_LINE( SQL%ROWCOUNT || ' rows deleted!');
EXIT WHEN SQL%ROWCOUNT = 0;
COMMIT;
END LOOP;
COMMIT;
END;
/
It's not likely to be faster than a straight delete,
but it should work without hitting the "snapshot too old" (ORA-01555) error.
-
Originally Posted by ixion
1. That is a design change.
Why? A table is a table is a table. You'd have to check the query plans, but assuming you picked a decent partitioning key, you're all set.
2. There may not be enough space to accomplish the partitioning without first clearing the table.
Not true; just exchange a partition with the big table. The segments stay the same; only the data dictionary is updated.
3. It is a good idea, but it requires thorough application testing (it may be a vendor table, etc.).
definitely requires testing, although seems like a lot of work just to delete some data.
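The exchange approach above can be sketched roughly as follows. Every name, the column list, and the partitioning key are assumptions; your_table is the existing unpartitioned table:

```sql
-- Placeholder columns: the partitioned table must match your_table's structure.
CREATE TABLE your_table_part (
    date_col  DATE,
    payload   VARCHAR2(100)
)
PARTITION BY RANGE (date_col) (
    PARTITION p_all VALUES LESS THAN (MAXVALUE)
);

-- Segment swap: only the data dictionary is updated, no rows move.
ALTER TABLE your_table_part
    EXCHANGE PARTITION p_all WITH TABLE your_table;

-- Carve the old data off into its own partition, then drop it.
ALTER TABLE your_table_part
    SPLIT PARTITION p_all AT (TO_DATE('02-JAN-2002', 'DD-MON-YYYY'))
    INTO (PARTITION p_old, PARTITION p_rest);

ALTER TABLE your_table_part DROP PARTITION p_old;
```

The split does move rows once, but after that, aging out a date range is an instant DROP PARTITION rather than a massive delete.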
Jeff Hunter
-
Originally Posted by gandolf989
You can also try this:
Code:
SET SERVEROUTPUT ON
BEGIN
DBMS_OUTPUT.ENABLE(1000000);
LOOP
DELETE table_name
WHERE DATE = TO_DATE('01-JAN-2002', 'DD-MON-YYYY')
AND rownum < 10001;
DBMS_OUTPUT.PUT_LINE( SQL%ROWCOUNT || ' rows deleted!');
EXIT WHEN SQL%ROWCOUNT = 0;
COMMIT;
END LOOP;
COMMIT;
END;
/
It's not likely to be faster than a straight delete,
but it should work without hitting the "snapshot too old" (ORA-01555) error.
This is the second best solution I've seen.
Jeff Hunter
-
Originally Posted by marist89
This is the second best solution I've seen.
Beats getting the second best poker hand at a table.
-
As mentioned in the earlier replies, create a couple of large rollback segments. Then run the delete, filtering the records by date or by a numeric value, and so on. Since the table reportedly has a large number of rows, the delete fills a rollback segment; to keep the rollback segment from overflowing and raising an error, the amount of undo written to it has to be limited, which can be done by filtering the records in the delete statement.
-
Originally Posted by helloravi
As mentioned in the earlier replies, create a couple of large rollback segments. Then run the delete, filtering the records by date or by a numeric value, and so on. Since the table reportedly has a large number of rows, the delete fills a rollback segment; to keep the rollback segment from overflowing and raising an error, the amount of undo written to it has to be limited, which can be done by filtering the records in the delete statement.
No. You're not getting the ORA-01555 because of the number of deletes; you're getting it because you can't get a consistent view of the data. You have to externalize the loop, either by using a "calculable" entity or by running an endless loop that deletes in small chunks until no records are left. Both of these methods have already been shown.
Jeff Hunter
-
This block may be helpful if you have sufficient memory, since it buffers the entire retained data set in PGA memory.
Code:
declare
    type tab_type is table of tablename%rowtype;
    lobj tab_type;
begin
    -- Buffer every row that should survive.
    select *
    bulk collect into lobj
    from tablename
    where DATE <> to_date('01-JAN-2002', 'DD-MON-YYYY');
    -- DDL: implicit commit; releases the space and cannot be rolled back.
    execute immediate 'truncate table tablename';
    -- Put the retained rows back in one bulk operation.
    forall i in 1..lobj.count
        insert into tablename values lobj(i);
    commit;
end;
/