-
I have a sub-partitioned table and was trying to truncate a partition like this:
alter table xxx truncate partition xx_p1
(the table has 32 partitions)
This partition had 0 records, yet the truncate took 10 minutes to complete.
Then I truncated a different partition with 1,000,000 records in it, and it took around the same time. So the physical deletion of the data obviously takes a nominal amount of time. Why, then, does it take so long to find the partition and release its space?
Simply truncating the entire table of about 15,000,000 records took forever to complete. On another system we have, we can truncate a 200,000,000-record table in about 10 minutes (a SunFire running Solaris 8 with 24 CPUs and 8 racks of T3 cabinets).
This system is also Solaris 8, on an E10K with 8 CPUs, loads of memory and EMC storage disks. The load is evenly balanced across tablespaces and disks, and it's Oracle 8.1.7.4 (32-bit), so it's no sloppy machine by any stretch.
Deleting the records from the table (with a simple DELETE) took a matter of seconds; 5,000,000 records took 2 minutes or so, and a delete should take longer because of the rollback it generates.
Anyone got any idea as to why the truncate takes so long?
Cheers
Dave
-
Because truncate has to deallocate extents. If your table is dictionary managed, that takes a long, long time when there are many extents.
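If you want to see how bad it is, something like this against DBA_EXTENTS counts the extents per partition (a sketch only; the owner and segment names are placeholders for your own):

```sql
-- Count extents per partition segment; 'MYSCHEMA' and 'XXX' are placeholders.
SELECT segment_name, partition_name,
       COUNT(*)        AS extent_count,
       SUM(bytes)/1024 AS total_kb
FROM   dba_extents
WHERE  owner        = 'MYSCHEMA'
AND    segment_name = 'XXX'
GROUP  BY segment_name, partition_name
ORDER  BY extent_count DESC;
```

A partition showing tens of thousands of extents here is the one that will make the truncate crawl.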
-
The table isn't dictionary managed and it has hardly extended at all; it's only been around a few weeks.
It just seemed really, really excessive.
-
Well, that's strange then, because it takes me about 10 seconds to truncate a table with 5,000 extents (128K each) in an LMT on Tru64.
-
OK, well, I'm new to this DBA lark and I think I got confused. The tablespace and table were both created with storage parameters supplied, with a large initial extent. It won't have extended that many times even with a fairly small next extent. It's confused all the DBAs in our office.
Oh well, I'll check out the extents again.
ta
-
Turned out there were 60,000 extents at 32K each, which is probably why it took so long. :-)
-
Keep the number of extents below 100 for all objects, irrespective of whether they live in an LMT or a DMT.
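One way to enforce that on 8i is a locally managed tablespace with uniform extents sized so a full partition needs only a handful of them. A sketch, with illustrative names and sizes (adjust the datafile path and extent size to your own data volumes):

```sql
-- Illustrative only: with uniform 100M extents, a ~1GB partition
-- occupies about 10 extents instead of tens of thousands of 32K ones.
CREATE TABLESPACE part_data
  DATAFILE '/u01/oradata/part_data01.dbf' SIZE 2000M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 100M;
```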
-
It wasn't really supposed to be like that, but the tablespace is about 20GB, each partition had an initial extent of 32K (as was the next extent), and each partition in the table holds 1 million rows (all a mistake, of course), so you can see why it extended so much and why truncating took so long. I'm going to have to export the data and reimport it, resizing the partitions/table so this won't happen again.
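The figures quoted above hang together: 60,000 extents of 32K is still under 2GB of the 20GB tablespace, so a few 1-million-row partitions extending 32K at a time can easily rack up that many. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the extent figures quoted in this thread.
extent_kb = 32      # INITIAL/NEXT extent size: 32K
extents = 60000     # extent count reported for the table

total_mb = extents * extent_kb / 1024
print(f"{extents} x {extent_kb}K extents = {total_mb:.0f} MB")
```

So the whole 60,000-extent table is only about 1.9GB of data; it's the extent bookkeeping, not the volume, that hurts.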
We had another table with 250,000 extents :-)
It gets kinda tricky with the maximum 2GB size for an export file, so I'll have to go round exporting on a partition-by-partition basis.
At least it will be interesting :-)
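For the partition-by-partition route, exp supports naming a single partition with the table:partition syntax; something along these lines (username/password, names and paths are placeholders):

```
exp un/pwd tables=(xxx:xx_p1) file=xx_p1.dmp log=xx_p1.log
```

Repeat per partition, and each dump file stays comfortably under the 2GB limit.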
-
Originally posted by davey23uk
It gets kinda tricky with the maximum 2GB size for an export file, so I'll have to go round exporting on a partition-by-partition basis.
No need for that. You can make exp write the export to multiple smaller dmp files. For example, if your export would result in a 7GB file, you can instruct it to write into 4 smaller files of 2000 MB each (the last one will be smaller, of course). The command will be something like:
exp un/pwd file=(expdat1.dmp, expdat2.dmp, expdat3.dmp, expdat4.dmp) filesize=2000M .....
Jurij Modic
ASCII a stupid question, get a stupid ANSI
24 hours in a day .... 24 beer in a case .... coincidence?
-
250,000 extents!
Nice! At least you hold a record now.