
Thread: Automatic PGA and Batch Loads

  1. #1
    Join Date: Jun 2007
    Posts: 23

    Automatic PGA and Batch Loads

    Hello folks,
    I'd like to get your recommendations on how to handle large bulk loads/queries that need more memory while workarea_size_policy = auto.

    We typically want our instance set to automatic workarea sizing and set pga_aggregate_target to an appropriate value. I understand how this works and how it balances against the number of sessions (possibly lowering the amount of memory per session, each session getting X% if parallel, etc.).

    Let's say, however, we have a single process that should "break" the rules and set its own manual targets for sort_area_size and hash_area_size. What's the recommendation? Should we just alter session to set workarea_size_policy = manual and set up our sort/hash sizes in that session? Or is there a better way to deal with this scenario?

    Further, let's say the process isn't automated, not in a script, but rather is a hypothetical user query from some BI tool that perhaps can't issue an alter session. Is there a good way to mark a particular query so that it always breaks the auto rule (only for that query, of course; we want everyone else to be balanced)?

    Thanks for any input.

  2. #2
    Join Date: Nov 2005
    Posts: 32

    Should we just alter session to set workarea_size_policy = manual and set up our sort/hash sizes in that session? Or is there a better way to deal with this scenario?
    It depends. Setting workarea_size_policy to manual would be the simplest thing to do. You haven't included your database version, but with 9i the maximum PGA memory allocation per process is 200MB, while the maximum allocation for a single work area is the lesser of 5% of pga_aggregate_target and 100MB. With that in mind, if you want to change those maximum allocation limits you'll have to alter a couple of hidden parameters. With 10g, on the other hand, the maximum allocations are all driven off of the pga_aggregate_target value, so increasing the target should raise both the per-process and per-work-area limits.
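
    For illustration, a minimal sketch of the manual-policy approach inside the load session (the sizes below are placeholders, not recommendations):

    alter session set workarea_size_policy = manual;
    alter session set sort_area_size = 104857600;   -- 100MB per sort work area (placeholder value)
    alter session set hash_area_size = 209715200;   -- 200MB per hash work area (placeholder value)

    -- run the batch load / large query here, then switch back to auto:
    alter session set workarea_size_policy = auto;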

    Further, let's say the process isn't automated, not in a script, but rather is a hypothetical user query from some BI tool that perhaps can't issue an alter session. Is there a good way to mark a particular query so that it always breaks the auto rule (only for that query, of course; we want everyone else to be balanced)?
    You could force all such queries to be run through a specific user that has a logon trigger which sets workarea_size_policy to manual (a rough sketch of such a trigger is included below). An additional option for you with 10g would be to try embedding the opt_param hint, which sets parameter values just for that query:

    select /*+ opt_param('workarea_size_policy','manual') */ ......

    I've tested the opt_param hint with other optimizer parameters, but not specifically with workarea_size_policy, so give it a shot and see if it works.
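
    As for the logon trigger mentioned above, it could look something like this (a minimal sketch; the user name batch_user and the sizes are placeholder assumptions):

    create or replace trigger batch_user_manual_pga
    after logon on batch_user.schema
    begin
      -- switch this user's sessions to manual workarea sizing (placeholder sizes)
      execute immediate 'alter session set workarea_size_policy = manual';
      execute immediate 'alter session set sort_area_size = 104857600';
      execute immediate 'alter session set hash_area_size = 209715200';
    end;
    /

    Sessions for every other user are untouched, so the auto policy and pga_aggregate_target balancing still apply to everyone else.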

    Good luck........

    http://www.dbaxchange.com
