
I found my temp tablespace is full now. There is only one segment inside it, and that segment occupies all the space. How do I deal with it? And how do I know when it was created? (I know it has been there for at least 3 days.)
Thank you!
SQL> select * from dba_segments where tablespace_name='TEMP';

OWNER            SYS
SEGMENT_NAME     4.5362
PARTITION_NAME
SEGMENT_TYPE     TEMPORARY
TABLESPACE_NAME  TEMP
HEADER_FILE      4
HEADER_BLOCK     5362
BYTES            268304384
BLOCKS           32752
EXTENTS          2047
INITIAL_EXTENT   131072
NEXT_EXTENT      131072
MIN_EXTENTS      1
MAX_EXTENTS      2147483645
PCT_INCREASE     0
FREELISTS        1
FREELIST_GROUPS  1
RELATIVE_FNO     4
BUFFER_          DEFAULT

1 row selected.

Three steps:
alter tablespace temp default storage (next 12m);
***NEXT must be different from the original size to force the db to release space***
alter tablespace temp default storage (next 8m);
***change back to the original***
alter tablespace temp coalesce;

This topic has been discussed many times in the last 2 weeks. Search the forum and you will find the answer.
Just a suggestion: temporary extents are used over and over again, and there is always only one segment in a temporary tablespace.
What bill_lee suggests is pointless.

Recreate your TEMP tablespace using the TEMPFILE keyword and ensure that sort_area_size = block_size * sort_multiblock_read_count

Originally posted by julian
assure that sort_area_size = block_size * sort_multiblock_read_count
The above formula should by no means be a "rule of thumb" for sizing SORT_AREA_SIZE. In fact, generally SORT_AREA_SIZE is (and should be) much larger than DB_BLOCK_SIZE * SORT_MULTIBLOCK_READ_COUNT. The vast majority of sorts are done directly in the sort area, without writing sort runs to disk, so you should set your SORT_AREA_SIZE large enough to have as few disk sorts as possible (as much as your memory resources allow). On the other hand, the SORT_MULTIBLOCK_READ_COUNT parameter is used only when SORT_AREA_SIZE is not sufficient for the entire sort operation, so that in the last phase partial sort runs must be read from disk (the temp tablespace) and merged together. So SORT_AREA_SIZE should not be a function of SORT_MULTIBLOCK_READ_COUNT!
Jurij Modic
ASCII a stupid question, get a stupid ANSI
24 hours in a day .... 24 beer in a case .... coincidence?

And what do you mean by "So SORT_AREA_SIZE should not be a function of SORT_MULTIBLOCK_READ_COUNT!"? I think you mean "not dependent on", right?

that is a mathematical term; f(x) is a function of x, etc.

Originally posted by pando
that is a mathematical term; f(x) is a function of x, etc.
Well, it should be, if you care about your sorts on disk of course :)

So SORT_AREA_SIZE should not be a function of SORT_MULTIBLOCK_READ_COUNT!
Here is what I found concerning your statement above:
In a disk sort the input rows are sorted into a number of "sort runs". These sort runs are then merged. The number of sort runs that can be merged simultaneously is called the "merge width". If the initial number of sort runs is greater than the merge width, then multiple merge passes will be required. The merge width depends both on the sort_area_size and the sort_multiblock_read_count. Up to 90% of the sort_area_size may be used for read buffers during a merge, and the rest is used for write buffers. The size of each read buffer is sort_multiblock_read_count * db_block_size, and 2 read buffers are needed per sort run to enable asynchronous read ahead (but only if disk_asynch_io is TRUE). The same write buffer configuration is used during the initial sort phase as during the merge phase, therefore, the size of each sort run is approximately 90% of the sort_area_size.
This information can be used to calculate the sort_area_size that would be needed to avoid a secondary merge phase for a large sort. For example, if it is known that the largest sort in a certain batch process sorts 12G of data, and the sort_multiblock_read_count is 8 blocks (of 8K each), then the number of initial sort runs will be 12G / (0.9 * sort_area_size), and the maximum merge width will be round(0.9 * sort_area_size / 64K) / 2. Therefore, to ensure that the number of initial sort runs is no greater than the maximum merge width, the sort_area_size must be at least sqrt(12G * 64K * 2 / 0.81) or about 43.6M. There would be virtually no benefit in using a sort_area_size greater than 43.6M but less than 12G.
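The arithmetic in that example can be checked directly. Here is a quick sketch in Python; the 0.9 and 0.81 factors and the 12G / 64K figures come straight from the text above:

```python
import math

# Worked example from the text: 12G to sort, 8K blocks,
# sort_multiblock_read_count = 8, so each read buffer is 64K.
data = 12 * 2**30            # bytes to sort
read_buffer = 8 * 8192       # sort_multiblock_read_count * db_block_size

# Initial runs: data / (0.9 * S); merge width: (0.9 * S / read_buffer) / 2.
# Requiring runs <= merge width gives S >= sqrt(data * read_buffer * 2 / 0.81).
sort_area = math.sqrt(data * read_buffer * 2 / 0.81)

print(round(sort_area / 2**20, 1))   # about 43.5M, close to the ~43.6M quoted above
```

The small difference from the quoted 43.6M is just rounding; the point is the order of magnitude, tens of megabytes rather than kilobytes.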

Yes, Julian, I know this article of Steve Adams. OK, based on this we might claim that SORT_AREA_SIZE is (or should be) a function of SORT_MULTIBLOCK_READ_COUNT. However, Steve's example is a little bit artificial. He starts with a given amount of data to be sorted (12G) and a fixed value of SORT_MULTIBLOCK_READ_COUNT (8), and from those he calculates the optimal sort area size. But he could also turn things around and say:
"I have 12G of data to sort and the sort area I can afford is 30 megs, so how should I set the parameter SORT_MULTIBLOCK_READ_COUNT to avoid additional merge passes?"
In this case SORT_MULTIBLOCK_READ_COUNT is a function of SORT_AREA_SIZE and not the other way around. Jonathan Lewis discusses this as well in http://www.jlcomp.demon.co.uk/app_c.html .
But my main point was: your equation for calculating SORT_AREA_SIZE was totally useless and wrong. Most of the time you calculate your sort area size based on the concurrent sorts to be run and your available memory, all with the goal of avoiding disk sorts. With your equation the sort area size would be ridiculously small and would in fact force many disk sorts. The formula is simply wrong, period.
Btw, in his example Steve Adams calculated the optimal sort area to be around 43M. With your formula his sort area would be 64K! With a 43M sort area the sort is done with about 30 disk writes; with your 64K sort area about 20000 disk writes would be needed!
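The gap between the two sizings can also be illustrated with the initial sort-run counts (a rough sketch using Steve Adams' model from above; this counts runs written to temp, not the disk-write figures quoted, but it makes the same point):

```python
# Compare initial sort-run counts for the 12G sort from Steve Adams' example,
# assuming each on-disk run holds roughly 90% of sort_area_size.
data = 12 * 2**30   # bytes to sort

def initial_runs(sort_area_size):
    # Approximate number of sort runs written to the temp tablespace.
    return data / (0.9 * sort_area_size)

print(round(initial_runs(64 * 1024)))      # 64K area (block_size * read count): ~218,000 runs
print(round(initial_runs(43.5 * 2**20)))   # ~43.5M area: ~314 runs, mergeable in one pass
```

With the 64K sort area almost everything spills to disk in tiny runs; with the ~43M area the runs fit within one merge pass.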
Jurij Modic
ASCII a stupid question, get a stupid ANSI
24 hours in a day .... 24 beer in a case .... coincidence?
