Automatic PGA and Batch Loads
Hello folks,
I'd like to get your recommendations on how to handle large bulk loads/queries that need more memory while workarea_size_policy = auto is in effect.
We typically run the instance with the automatic workload policy and set pga_aggregate_target to some value. I understand how this works and how it balances memory against the number of sessions (possibly lowering the amount of memory each session gets, each session getting X% if running in parallel, etc.).
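For context, this is roughly how the instance is configured today (the 2G figure is just a placeholder, and SCOPE=BOTH assumes we're on an spfile):

    ALTER SYSTEM SET pga_aggregate_target = 2G SCOPE=BOTH;
    ALTER SYSTEM SET workarea_size_policy = AUTO SCOPE=BOTH;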
Let's say, however, we have a single process that should "break" the rules and set its own manual values for sort_area_size and hash_area_size. What's the recommendation? Should we just alter the session to set workarea_size_policy = manual and size the sort/hash areas in that session (something like the sketch below)? Or is there a better way to handle this scenario?
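This is what I have in mind for the batch session; the 512 MB values are just examples, not anything we've settled on:

    ALTER SESSION SET workarea_size_policy = MANUAL;
    ALTER SESSION SET sort_area_size = 536870912;  -- 512 MB, placeholder value
    ALTER SESSION SET hash_area_size = 536870912;  -- 512 MB, placeholder value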
Further, let's say the process isn't automated (not in a script) but rather is a hypothetical user query from some BI tool that maybe can't issue an ALTER SESSION. Is there a good way to mark a particular query so that it always breaks the auto rule (only for that query, of course; we want everyone else to stay balanced)?
Thanks for any input.