Possibly irrelevant background info:
-----------------------------------------
Okay, here's the scenario. I spent several weeks optimizing an OLTP database. I had many restrictions, so the optimizations were done primarily through initialization parameters, indexes and histograms.
Now, the reporting database is a BCV copy of the production database. Therefore, the reporting database will end up inheriting the majority of the changes I proposed for the OLTP database.
In my tests, this has actually turned out not to be much of a problem. The changes actually helped performance quite a bit. These changes, together with rewriting the single worst reporting query, have resulted in logical reads (LRs), CPU and elapsed time all dropping 60-80%.
All of my tests are performed on an isolated server and my tests are sequential. I am not really set up to run my tests in parallel.
Pertinent Info:
----------------
If I turn on and tune the Parallel Query option, I can drop the LRs and time by another 30% and 10% respectively. The CPU actually goes up by 30%. Table scans, needless to say, go through the roof (50x). This is also why the LRs dropped so much.
I tuned the parallel option based on an assumption of 10 concurrent queries running at a time. We can actually control this - it is fed via AppWorx. Further, the production reporting server has 8 processors, whereas my test box has only 4.
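For reference, the kind of settings involved look roughly like this. This is only a sketch of Oracle's parallel-execution knobs - the table name and the degrees shown are illustrative examples, not the values I actually used, and in older releases parallel_max_servers is an init.ora parameter set at startup rather than something you change with ALTER SYSTEM:

```sql
-- Illustrative only. Cap the total slave pool: with 8 CPUs and
-- ~10 concurrent queries, the degree per query has to stay modest.
ALTER SYSTEM SET parallel_max_servers = 16;

-- Give the big reporting tables a default degree of parallelism
-- (sales_fact is a made-up example table)...
ALTER TABLE sales_fact PARALLEL (DEGREE 4);

-- ...or request parallelism per query with a hint instead.
SELECT /*+ PARALLEL(s, 4) */ COUNT(*)
FROM   sales_fact s;
```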
Question:
-----------
So what are people's opinions on this? Which solution should I advocate? Is it worth that extra 10% in time considering the massive increase in Long Table Scans? Considering that my buffer pool is not partitioned, will these table scans kill my cache and end up hurting more than they help? Will I end up losing that 10% when I have 10 queries running simultaneously? What are your thoughts? Inquiring minds want to know.
Thanks,
- Chris
-
If the database is exclusively for reporting, then Parallel Query is the best.
-
Originally posted by chrisrlong
Question:
-----------
So what are people's opinions on this? Which solution should I advocate? Is it worth that extra 10% in time considering the massive increase in Long Table Scans? Considering that my buffer pool is not partitioned, will these table scans kill my cache and end up hurting more than they help? Will I end up losing that 10% when I have 10 queries running simultaneously? What are your thoughts? Inquiring minds want to know.
Depends. Is your table partitioned so that your I/O is maximized? Does a PQ max out one of your physical devices?
Actually, the second PQ may perform better if your buffer cache is large enough. The first PQ would load up the buffer cache and queries 2-10 would use the cached blocks.
This is one of those situations that warrants testing, testing, testing to determine the correct answer. Sorry...
Jeff Hunter
-
Parallel queries are almost always better in the long term. Buffer cache initialization, the number of users, and table structure (are there several extents, so each PQ slave can go after its own extent?) are major factors. Parallel queries work best on cross-table joins and fragmented tables. Whether these apply to your db, I don't know.
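To make the cross-table join case concrete, a parallel join might be requested like this (the table and column names are made up for illustration; the hash-join hint is just one reasonable choice for large scans):

```sql
-- Illustrative: ask for parallel slaves on both sides of a join
-- and a hash join, which suits full scans of large tables.
SELECT /*+ PARALLEL(o, 4) PARALLEL(c, 4) USE_HASH(o c) */
       c.region, SUM(o.amount)
FROM   orders o, customers c
WHERE  o.cust_id = c.cust_id
GROUP  BY c.region;
```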
Joseph R.P. Maloney, CSP,CDP,CCP
'The answer is 42'