DBAsupport.com Forums

Thread: Temp tablespace constantly blowing out

  1. #11
    Join Date
    Jan 2001
    Posts
    3,134
    My bad, I always mix up TEMP with redo (undo). Shoot em anyway.

    And abhaysk, thanks for your "contribution"

    MH
    I remember when this place was cool.

  2. #12
    Join Date
    Jan 2003
    Location
    Denver
    Posts
    152
    I used to have a 200 GB sort space, but that was a DW where I had to roll up some really big fact tables. I considered doing the sorts with an external tool before loading into Oracle, but stuck with Oracle sorts, even though I found no way to control its sort algorithm.

    I would set that session's sort_area_size to 1 GB beforehand and set it back afterwards; sort_area_retained_size is pretty important too. Something like the sketch below.
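    A minimal sketch, assuming both parameters are session-modifiable (sizes illustrative):

        -- Bump the session's in-memory sort allowance before the big job
        ALTER SESSION SET sort_area_size = 1073741824;           -- 1 GB, in bytes
        ALTER SESSION SET sort_area_retained_size = 1073741824;  -- memory kept after the sort completes

        -- ... run the heavy rollups / index builds ...

        -- Put it back afterwards (1 MB here is illustrative, not the default)
        ALTER SESSION SET sort_area_size = 1048576;
        ALTER SESSION SET sort_area_retained_size = 1048576;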

    For smaller ones, where you are not the one using the space, you need to write a little monitor program that runs the script posted above (or something similar) and logs who is using how much, and when. Find out exactly what is going on in there all day; a sketch follows.
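    A sketch of that kind of monitor query, using the v$sort_usage / v$session join available in 8i/9i (a scheduled job would log this output with a timestamp):

        -- Who is using TEMP right now, and how much
        SELECT s.username,
               s.sid,
               s.serial#,
               u.tablespace,
               u.segtype,
               ROUND(u.blocks * p.value / 1024 / 1024) AS mb_used
        FROM   v$sort_usage u,
               v$session    s,
               v$parameter  p
        WHERE  u.session_addr = s.saddr
        AND    p.name = 'db_block_size'
        ORDER  BY mb_used DESC;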

  3. #13
    Join Date
    Jun 2000
    Location
    Madrid, Spain
    Posts
    7,447
    hmm, I don't understand why people set 1 GB for sort_area_size when there is no extra benefit at all beyond 32 MB

  4. #14
    Join Date
    Apr 2002
    Location
    Phoenix, AZ
    Posts
    175
    Originally posted by anandkl
    And also tell all Oracle users to set their session sort_area_size = 32 MB before they execute SQL that does heavy sorting, as this would reduce the temp tablespace usage and improve the I/O

    regards
    anandkl
    If using 9i, use PGA_AGGREGATE_TARGET instead. It takes care of a lot of these issues; a sketch below.
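    A sketch of the 9i setup (the target value is illustrative):

        -- Let Oracle size sort/hash work areas automatically across sessions
        ALTER SYSTEM SET pga_aggregate_target = 2048M;
        ALTER SYSTEM SET workarea_size_policy = AUTO;
        -- With the AUTO policy, sort_area_size is ignored for dedicated-server
        -- sessions; each work area is carved out of the shared PGA target.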
    Sridhar R Patnam

  5. #15
    Join Date
    Jan 2003
    Location
    Denver
    Posts
    152
    Originally posted by pando
    hmm, I don't understand why people set 1 GB for sort_area_size when there is no extra benefit at all beyond 32 MB
    And why is that? Allowing the job to use as much RAM as possible seems to make sense to me. Perhaps it cannot use both RAM and disk once the sort has been forced to disk, but I would think it would at least help the process somewhere, or else Oracle needs to improve their sort process.

  6. #16
    Join Date
    Jan 2003
    Location
    india
    Posts
    175
    hi,

    somewhere I read this formula for temp extent sizing; can somebody validate it?

        extent size = (x * sort_area_size) + 1 data block

    x * sort_area_size: keeping the extent a multiple of sort_area_size avoids fragmentation.

    + 1 data block: room for the extent header.

    If temp is locally managed, it is better still; a sketch of both follows.
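    Worked through with illustrative numbers (sort_area_size = 1 MB, db_block_size = 8 KB, so (1 * 1048576) + 8192 = 1056768 bytes), and a hypothetical file path:

        -- Dictionary-managed TEMP: extent = (1 * sort_area_size) + 1 block
        CREATE TABLESPACE temp
          DATAFILE '/u01/oradata/DB/temp01.dbf' SIZE 2000M
          TEMPORARY
          DEFAULT STORAGE (INITIAL 1056768 NEXT 1056768 PCTINCREASE 0);

        -- Locally managed TEMP (preferred): no header block needed,
        -- a plain multiple of sort_area_size works
        CREATE TEMPORARY TABLESPACE temp
          TEMPFILE '/u01/oradata/DB/temp01.dbf' SIZE 2000M
          EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;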

  7. #17
    Join Date
    Jun 2000
    Location
    Madrid, Spain
    Posts
    7,447
    Originally posted by BJE_DBA
    And why is that? Allowing the job to use as much RAM as possible seems to make sense to me. Perhaps it cannot use both RAM and disk once the sort has been forced to disk, but I would think it would at least help the process somewhere, or else Oracle needs to improve their sort process.
    Because over 32 MB there is no extra benefit. Benchmarked, tested, and put in production.

  8. #18
    Join Date
    Jan 2003
    Location
    Denver
    Posts
    152
    Originally posted by pando
    Because over 32 MB there is no extra benefit. Benchmarked, tested, and put in production.
    Well, my ETL automation was rolling up all kinds of fact tables and dimensions, building indexes, etc., and it definitely was faster or I would not have put it into production.

  9. #19
    Join Date
    Jun 2000
    Location
    Madrid, Spain
    Posts
    7,447
    How much faster, that's the question.

    If building an index takes 30 minutes with 32 MB and 1 GB takes a couple of minutes less, I am not sure that's worth it.

  10. #20
    Join Date
    Jan 2003
    Location
    Denver
    Posts
    152
    Originally posted by pando
    How much faster, that's the question.

    If building an index takes 30 minutes with 32 MB and 1 GB takes a couple of minutes less, I am not sure that's worth it.
    Don't recall the details, just that it was important and I put it into the automation; there were hundreds of indexes. This was 'like' materialized views in that we had a fact table and dimension tables down to the billing telephone number (massive), then repeated as they were rolled up 14 levels. At some point everything was sorting in memory, but most of those levels, and all the lower ones, were forced to disk due to the size. The app would choose the highest possible level that could answer the question, in order to work on the smallest amount of data.

    It was always 'work through the night, get it loaded as fast as you can'. Could have been frustration or wishful thinking, but I seem to recall it shaved hours off of a 36-hour process.
    Last edited by BJE_DBA; 01-09-2004 at 08:46 AM.
