I can't understand why this happened. Please advise.
I am using SQL*Loader (sqlldr) to populate a table with 800 million rows.
On the sqlldr command line I have used parallel=true, direct=true.
At the table level I specified STORAGE (INITIAL 2G NEXT 2G MINEXTENTS 11) with 8 hash partitions,
and altered NEXT to 500M after the table creation, so that during the sqlldr run each load process would create a temporary segment of at least 500M.
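For reference, here is a sketch of the storage setup described above. The table name (batch_table) and the column definitions are placeholders, not from the original post, and the exact syntax should be checked against your Oracle version:

```sql
-- Hypothetical reconstruction of the storage setup described above
CREATE TABLE batch_table (
    id    NUMBER,
    data  VARCHAR2(100)
)
STORAGE (INITIAL 2G NEXT 2G MINEXTENTS 11 PCTINCREASE 0)
PARTITION BY HASH (id) PARTITIONS 8;

-- Reduce NEXT after creation so the load allocates 500M extents;
-- for a partitioned table this changes the partition-level defaults.
ALTER TABLE batch_table MODIFY DEFAULT ATTRIBUTES STORAGE (NEXT 500M);
```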
If I query DBA_SEGMENTS for the table's segments BEFORE the sqlldr run, I get:

tablespace   bytes   extents
1            22G     11
2            22G     11
3            22G     11
4            22G     11
5            22G     11
6            22G     11
7            22G     11
8            22G     11
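The "before" picture above could come from a DBA_SEGMENTS query along these lines (the table name batch_table is an assumption, not from the original post):

```sql
SELECT tablespace_name,
       bytes / 1024 / 1024 / 1024 AS gbytes,
       extents
  FROM dba_segments
 WHERE segment_name = 'BATCH_TABLE'
 ORDER BY tablespace_name;
```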
And while the sqlldr process is running, the query on DBA_SEGMENTS
where SEGMENT_TYPE = 'TEMPORARY' returns:

tablespace   bytes   extents
1            500M    1
2            500M    1
3            500M    1
4            500M    1
5            500M    1
6            500M    1
7            500M    1
8            500M    1
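A sketch of the temporary-segment query described above:

```sql
SELECT tablespace_name,
       bytes / 1024 / 1024 AS mbytes,
       extents
  FROM dba_segments
 WHERE segment_type = 'TEMPORARY'
 ORDER BY tablespace_name;
```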
The extents then grow in increments of 500M, since PCTINCREASE is 0.
Up to this point everything is normal.
But after the load completes, when I query the table I find a surprising number of extents:
the extent sizes range from a maximum of 2G down to a minimum of 12K, although I expected them never to drop below the 500M minimum,
and the total extent count came to 800+.
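One way to see that size distribution is to group the table's extents by size in DBA_EXTENTS (batch_table is an assumed name):

```sql
SELECT bytes / 1024 AS kbytes,
       COUNT(*)     AS extent_count
  FROM dba_extents
 WHERE segment_name = 'BATCH_TABLE'
 GROUP BY bytes
 ORDER BY bytes DESC;
```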
Please advise me: did I go wrong somewhere?
When you run a parallel insert through SQL*Loader, Oracle creates as many temporary extents as there are parallel processes: if there are 5 parallel processes, 5 extents are created in the tablespace for the table to start with. Further extents are added as each one fills. Finally, when the job is done, the free space in the last extent of each parallel process is trimmed and released. After trimming, the last extent may no longer equal the INITIAL or NEXT size even though PCTINCREASE is set to 0. This leads to unequal extent sizes within the tablespace.
To overcome this, you need to set event 10901 at the system level before you start loading data into the table.
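A sketch of setting the event; event 10901 is reported to disable extent trimming at the end of a direct load, but the exact syntax and level should be verified with Oracle Support before use on production:

```sql
-- Per session, before starting the load:
ALTER SESSION SET EVENTS '10901 trace name context forever, level 1';
```

At the system level it could instead go in the init.ora, e.g. event = "10901 trace name context forever, level 1", followed by an instance restart.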
[Edited by tamilselvan on 02-13-2002 at 11:34 AM]
I guess that if I set the event, I can avoid the trimming of extents, and I will not end up using these partially free extents in my next SQL*Loader run.
But I am planning to avoid fragmentation, and I want Oracle to use the pre-allocated extents.
Would specifying INSERT in the control file with direct=true, but without the parallel option, help me do that?
Please correct me if I am wrong.
You are trying to insert 800 million rows; I assume the source data is in multiple flat files. Avoid the parallel option if you do not want unequal extents in the tablespace. Use the DIRECT load option with UNRECOVERABLE defined in the control file.
If you do not want to use the parallel option, you can run multiple direct loads instead.
Set INITIAL and NEXT to 500MB at the table level.
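Putting that advice together, a minimal non-parallel direct load might look like the sketch below. The file names, table name, and column list are assumptions, not from the original posts:

```text
-- load.ctl
UNRECOVERABLE
LOAD DATA
INFILE 'batch1.dat'
INSERT INTO TABLE batch_table
FIELDS TERMINATED BY ','
(id, data)
```

Run one sqlldr per input file, e.g. sqlldr userid=scott/tiger control=load.ctl direct=true, then repeat with the next data file (switching INSERT to APPEND after the first load, since INSERT requires an empty table).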