I have noticed a couple of tables in our database that experience massive inserts and deletes on a daily basis.
For example, we have a table called EVENTS_LOG, and I run this query against it:
select segment_name     Table_Name,
       num_rows         Number_of_Rows,
       sum(bytes)/1024  Size_in_KB
from   user_segments, user_tables
where  table_name   = segment_name
and    segment_type = 'TABLE'
and    table_name   = 'EVENTS_LOG'
group by segment_name, num_rows
1- I read that exp/imp should reclaim the space below the high water mark (HWM). Why didn't it do that?
2- Is my truncate/insert solution fine? Would you suggest scheduling it to run against the table periodically?
3- What is your approach to this case?
The export/import will do the job if you (a) use the compress=y option and drop the table before the import, or (b) truncate the table before the import. Either way will cause all the extents to be removed and created as needed.
The truncate/insert solution is fine. You might want to parallelise it to make it quicker. It's easier to automate this since you can do it all from within PL/SQL.
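A minimal sketch of a parallelised truncate/insert, assuming you stage the rows you want to keep first. The staging table name EVENTS_LOG_KEEP and the degree of parallelism (4) are illustrative choices, not anything prescribed by the thread:

```sql
-- Sketch only: EVENTS_LOG_KEEP is a hypothetical staging table.
-- 1. Copy the rows to keep (or all rows, if this is purely a reorg).
CREATE TABLE events_log_keep AS
  SELECT * FROM events_log;

-- 2. Truncate resets the high water mark and deallocates extents.
TRUNCATE TABLE events_log;

-- 3. Reload with a direct-path, parallel insert.
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(events_log, 4) */ INTO events_log
  SELECT * FROM events_log_keep;
COMMIT;

DROP TABLE events_log_keep;
```

Because every step is SQL, the whole sequence can be wrapped in a PL/SQL procedure (using EXECUTE IMMEDIATE for the DDL) and scheduled from the database.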
If the number of rows is fairly constant over time I wouldn't worry too much. The empty extents will be reused as they are needed. If you've got lots of full table scans, though, you will be reading all the empty blocks up to the high water mark even when the table is mostly empty.
You have to weigh up the cost of the extra blocks being read against the downtime of recreating the table. This will give you an indication of how regularly you can or should recreate the table.
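To put numbers on that trade-off, you can compare the blocks below the HWM with the rows actually stored, using the standard USER_TABLES columns (this assumes statistics are current, e.g. gathered with DBMS_STATS):

```sql
-- BLOCKS       = blocks below the high water mark (what a full scan reads)
-- EMPTY_BLOCKS = allocated blocks above the HWM
-- A full table scan reads all BLOCKS even when NUM_ROWS is small.
SELECT num_rows,
       blocks,
       empty_blocks,
       round(num_rows / nullif(blocks, 0)) AS rows_per_block
FROM   user_tables
WHERE  table_name = 'EVENTS_LOG';
```

A low rows-per-block figure on a table that is scanned frequently suggests recreating it will pay off; a stable figure suggests the empty space is simply being reused.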
So I understand that the HWM only comes into play during full table scans.
How can I parallelise the truncate/insert solution?
Is Hanky's comment important? Do I need to rebuild all indexes on these tables every time?