I have a sub-partitioned table and I was trying to truncate a partition like this:
alter table xxx truncate partition xx_p1; (the table has 32 partitions)
This partition had 0 records and the truncate took 10 minutes to complete.
I then truncated a different partition with 1,000,000 records in it. This took around the same time, so clearly the physical removal of the data accounts for only a small part of it. So why does it take so long to find the partition and get rid of the data?
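For what it's worth, the time may be going into deallocating the segment's extents rather than into removing rows. If the partition is going to be reloaded anyway, Oracle lets you keep the allocated extents with the REUSE STORAGE clause, which skips the per-extent cleanup (a sketch, using the placeholder table and partition names from the post):

```sql
-- Truncate the partition but keep its extents allocated,
-- avoiding the extent-deallocation work that makes a plain
-- TRUNCATE PARTITION slow on a heavily extended segment.
ALTER TABLE xxx TRUNCATE PARTITION xx_p1 REUSE STORAGE;
```

The default is DROP STORAGE, which deallocates every extent above MINEXTENTS, and that is exactly the operation that gets expensive when a partition has thousands of tiny extents.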
Simply truncating the entire table of about 15 million records took forever to complete. On another system we have, we can truncate a 200-million-record table in about 10 minutes (on a SunFire running Solaris 8 with 24 CPUs and 8 racks of T3 cabinets).
This system is also Solaris 8, on an E10K with 8 CPUs, loads of memory and EMC storage disks. The load is evenly balanced across tablespaces and disks, running 32-bit Oracle, so it's no sloppy machine by any stretch.
Deleting the records from the table (simply using DELETE) took a matter of seconds; 5,000,000 records took about 2 minutes. If anything, deleting should take longer because of the rollback it generates.
Anyone got any idea as to why the truncate takes so long?
OK, well I'm new to this DBA lark, and I think I got confused. The tablespace and table were both created with storage parameters supplied, including a large initial extent, so it shouldn't have extended that many times even with a fairly small next extent. It confused all the DBAs in our office.
It wasn't really supposed to be like that: the tablespace is about 20 GB, but each partition had an initial extent of 32K (as was the next extent), and each partition in the table holds 1 million rows (all a mistake, of course), so you can see why it extended so much and why truncating was taking so long. I'm going to have to export the data and re-import it, resizing the partitions/tables at the same time so this won't happen again.
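When rebuilding, sizing the extents to the data avoids the problem. A sketch of what the storage clauses might look like (the columns, partition bounds, and 50M extent size are purely illustrative, not taken from the original table):

```sql
-- Hypothetical table definition; the point is the STORAGE clause.
-- With ~1 million rows per partition, extents in the tens of MB
-- mean each partition needs only a handful of extents instead of
-- thousands of 32K ones.
CREATE TABLE xxx (
  id  NUMBER,
  val VARCHAR2(100)
)
PARTITION BY RANGE (id) (
  PARTITION xx_p1 VALUES LESS THAN (1000000)
    STORAGE (INITIAL 50M NEXT 50M),
  PARTITION xx_p2 VALUES LESS THAN (2000000)
    STORAGE (INITIAL 50M NEXT 50M)
);
```

Alternatively, on 8i and later, creating the tablespace with EXTENT MANAGEMENT LOCAL UNIFORM SIZE takes extent bookkeeping out of the data dictionary altogether, which is what makes extent deallocation slow in the first place.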
We had another table with 250,000 extents :-)
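To see how bad it is before rebuilding, you can count extents per partition straight from the dictionary (a sketch; the owner and segment names are placeholders):

```sql
-- Extent count per partition: segments with thousands of extents
-- are the ones where TRUNCATE/DROP will crawl in a
-- dictionary-managed tablespace.
SELECT segment_name, partition_name, COUNT(*) AS extents
FROM   dba_extents
WHERE  owner = 'SCOTT'
AND    segment_name = 'XXX'
GROUP  BY segment_name, partition_name
ORDER  BY extents DESC;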
It gets kinda tricky with only a maximum 2 GB size for an export file, so I'll have to go round exporting on a partition-by-partition basis.
Originally posted by davey23uk: It gets kinda tricky with only a maximum 2 GB size for an export file, so I'll have to go round exporting on a partition-by-partition basis.
No need for that. You can make exp write the export to multiple smaller dump files. For example, if your export would produce a 7 GB file, you can instruct exp to write it into four smaller files of, say, 2000 MB each (the last one will be smaller, of course). The command will be something like:
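A sketch of such an exp invocation, using the FILE and FILESIZE parameters (the credentials, table name, and file names here are placeholders, not from the original post):

```shell
# Export one table into multiple dump files capped at 2000 MB each;
# exp rolls over to the next FILE entry when FILESIZE is reached.
exp scott/tiger TABLES=xxx \
    FILE=(exp_xxx_1.dmp,exp_xxx_2.dmp,exp_xxx_3.dmp,exp_xxx_4.dmp) \
    FILESIZE=2000M LOG=exp_xxx.log
```

On import, imp accepts the same FILE=(...) list and reads the pieces back in order.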