Issues with Optimizer_Features_Enable
DBAsupport.com Forums - Powered by vBulletin

Thread: Issues with Optimizer_Features_Enable

  1. #1
    Join Date
    Mar 2009
    Posts
    4

    Issues with Optimizer_Features_Enable

    Hi All,

    We have upgraded our databases from 9.2.0.8 to 10.2.0.3. After the upgrade we set Optimizer_Features_Enable=10.2.0.3 and performance slowed down. As a workaround we set Optimizer_Features_Enable (OFE) back to 9.2.0, and performance returned to the expected level.

    We now plan to set this parameter to 10.2.0.3 again. A few processes that run fine (about 1 hour) with OFE 9.2.0 take 5 hours with OFE 10.2.0.3.
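    For anyone following along, the parameter can be switched at system or session level; a session-level change is a low-risk way to reproduce the regression for a single process. A minimal sketch (assumes ALTER SYSTEM/SESSION privileges; test on a non-production instance first):

    ```sql
    -- Revert the optimizer to 9.2.0 behavior instance-wide:
    ALTER SYSTEM SET optimizer_features_enable = '9.2.0' SCOPE = BOTH;

    -- Or enable 10.2.0.3 behavior for one session only, to isolate the slow SQL:
    ALTER SESSION SET optimizer_features_enable = '10.2.0.3';
    ```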

    Please share your experience if you have come across this situation.

    Regards
    Srini

  2. #2
    Join Date
    Apr 2001
    Location
    Czechia
    Posts
    712
    There's no easy solution to that, I think. You have to find what slows down the processes. It might be just one or two queries. Use AWR, SQL trace ...
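    A sketch of the suggested tracing approach, assuming access to SQL*Plus and the server trace directory; the tracefile identifier is just an illustrative tag:

    ```sql
    -- Enable extended SQL trace (event 10046, level 8 = include wait events)
    -- for the current session, run the slow process, then turn it off.
    ALTER SESSION SET tracefile_identifier = 'ofe_test';
    ALTER SESSION SET events '10046 trace name context forever, level 8';

    -- ... run the slow statements here ...

    ALTER SESSION SET events '10046 trace name context off';

    -- Format the resulting trace file on the server with tkprof.
    -- An AWR report covering the slow window can be generated with:
    -- @?/rdbms/admin/awrrpt.sql
    ```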
    Ales
    The whole difference between a little boy and an adult man is the price of toys

  3. #3
    Join Date
    Mar 2009
    Posts
    4
    Thanks Ales.

    There are two SQL statements (one DELETE and one INSERT) taking 5 hours in one process.

    The DELETE alone takes two and a half hours; it deletes 26 million rows from the table (41 million rows total) via a full table scan.

    The wait event for this query is "db file sequential read" even though it is an FTS.

    ---------------------------------------------------------------------------
    | Id  | Operation          | Name              | Rows | Bytes | Cost (%CPU)|
    ---------------------------------------------------------------------------
    |   0 | DELETE STATEMENT   |                   |  26M | 2141M | 127K   (2) |
    |   1 |  DELETE            | PS_COMBO_DATA_TBL |      |       |            |
    |*  2 |   TABLE ACCESS FULL| PS_COMBO_DATA_TBL |  26M | 2141M | 127K   (2) |
    ---------------------------------------------------------------------------

    Regards
    Srini

  4. #4
    Join Date
    Apr 2001
    Location
    Czechia
    Posts
    712
    FTS + sequential read?
    Are there before/after delete triggers on the table?
    Also check for referential integrity constraints: if there are child tables referencing this table, each deleted parent row has to be checked against them, and unindexed foreign keys make that very slow.
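    Both checks can be run from the dictionary; a sketch (assumes DBA-view access, table name taken from the plan posted above):

    ```sql
    -- Any triggers on the table?
    SELECT trigger_name, triggering_event, status
      FROM dba_triggers
     WHERE table_name = 'PS_COMBO_DATA_TBL';

    -- Any foreign keys in child tables referencing this table?
    SELECT c.owner, c.table_name, c.constraint_name
      FROM dba_constraints c
      JOIN dba_constraints p
        ON p.owner = c.r_owner
       AND p.constraint_name = c.r_constraint_name
     WHERE c.constraint_type = 'R'
       AND p.table_name = 'PS_COMBO_DATA_TBL';
    ```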
    Ales
    The whole difference between a little boy and an adult man is the price of toys

  5. #5
    Join Date
    Nov 2000
    Location
    Pittsburgh, PA
    Posts
    3,967
    Also, if you are constantly deleting rows based on date, you might consider partitioning the table by date. For example, if you keep one year of data, you can partition by month or quarter and then drop the oldest partitions.
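    An illustrative sketch only (table, column, and partition names are hypothetical; this is not the PeopleSoft table):

    ```sql
    -- Range-partition by date so old data can be dropped instead of deleted.
    CREATE TABLE sales_hist (
      sale_date  DATE,
      amount     NUMBER
    )
    PARTITION BY RANGE (sale_date) (
      PARTITION p2009q1 VALUES LESS THAN (TO_DATE('2009-04-01','YYYY-MM-DD')),
      PARTITION p2009q2 VALUES LESS THAN (TO_DATE('2009-07-01','YYYY-MM-DD'))
    );

    -- Dropping the oldest quarter replaces a multi-hour DELETE:
    ALTER TABLE sales_hist DROP PARTITION p2009q1;
    ```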

    But triggers can definitely be an issue, especially when they are trying to do too much.
    this space intentionally left blank

  6. #6
    Join Date
    Mar 2009
    Posts
    4
    Thanks Ales,

    There are no triggers on this table, and no referential integrity constraints either.

    Interestingly, a SELECT on this table accesses the data via "db file scattered read" (multiblock reads).

    "db file sequential read" occurs only for the DELETE query!!! These single-block reads are causing the delay.
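    One way to confirm where the time goes is to look at the session's wait totals while the DELETE runs; single-block reads during a DELETE often come from index maintenance or undo rather than the table scan itself. A sketch (substitute the actual SID of the deleting session):

    ```sql
    SELECT event, total_waits, time_waited
      FROM v$session_event
     WHERE sid = :sid
       AND event LIKE 'db file%'
     ORDER BY time_waited DESC;
    ```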

  7. #7
    Join Date
    Mar 2009
    Posts
    4
    Thanks for the inputs.

    The rows are not deleted based on a date. Moreover, this query is part of a PeopleSoft-delivered process.

  8. #8
    Join Date
    Nov 2000
    Location
    Pittsburgh, PA
    Posts
    3,967
    Quote Originally Posted by srsrinu View Post
    Thanks for the inputs.

    The rows are not deleted based on a date. Moreover, this query is part of a PeopleSoft-delivered process.
    Perhaps you should post an explain plan: issue "set autotrace on" and "set timing on" in SQL*Plus, then run the query and delete statements and post the results.
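    A sketch of that SQL*Plus session (the WHERE clause stands in for whatever predicate the PeopleSoft process actually uses):

    ```sql
    SET AUTOTRACE ON   -- prints the execution plan and run-time statistics
    SET TIMING ON      -- prints elapsed time per statement

    DELETE FROM PS_COMBO_DATA_TBL WHERE ...;  -- actual predicate from the process

    ROLLBACK;          -- so the test run does not change the data
    ```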
    this space intentionally left blank

  9. #9
    Join Date
    Mar 2007
    Location
    Ft. Lauderdale, FL
    Posts
    3,554
    Quote Originally Posted by srsrinu View Post
    We have upgraded our databases from 9.2.0.8 to 10.2.0.3. After the upgrade we set Optimizer_Features_Enable=10.2.0.3 and performance slowed down.
    Oracle introduced a number of changes in Ora10g that you want to take into consideration.

    Ora9i had an I/O-based cost optimizer, while Ora10g uses a CPU-based cost model by default; this means Ora10g may try to use less CPU by doing more FTS. An alternative is to set the _optimizer_cost_model parameter to "io". If you consider touching this hidden parameter, please check with Oracle Support first.
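    For reference, the change would look like the sketch below; as stressed above, _optimizer_cost_model is a hidden (underscore) parameter, so confirm with Oracle Support and test outside production before touching it:

    ```sql
    -- Hidden parameters must be quoted; session scope limits the blast radius.
    ALTER SESSION SET "_optimizer_cost_model" = 'io';
    ```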

    Statistics gathered under Ora9i might not be adequate for the same database after upgrading to Ora10g. You may want to gather fresh stats in Ora10g using dbms_stats. The more histograms you have, the better the chance that Ora10g chooses an efficient execution plan.
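    A sketch of regathering stats for the table from the plan above; the schema name is an assumption (PeopleSoft commonly uses SYSADM) and the options are illustrative:

    ```sql
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'SYSADM',                    -- assumed schema owner
        tabname          => 'PS_COMBO_DATA_TBL',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        method_opt       => 'FOR ALL COLUMNS SIZE AUTO', -- let Oracle pick histograms
        cascade          => TRUE                         -- include the indexes
      );
    END;
    /
    ```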

    Finally, when you set optimizer_features_enable=10.2.0.3, be sure optimizer_mode IS NOT set to "choose"; it should usually be set to first_rows for an OLTP system and all_rows for a DSS system. The Ora10g default is all_rows.
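    Checking and changing the mode is straightforward; a sketch ("choose" is deprecated in 10g, and all_rows is the 10g default):

    ```sql
    -- In SQL*Plus: show the current setting.
    SHOW PARAMETER optimizer_mode

    -- Set it instance-wide if it still carries the old value.
    ALTER SYSTEM SET optimizer_mode = ALL_ROWS SCOPE = BOTH;
    ```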
    Pablo (Paul) Berzukov

    Author of Understanding Database Administration available at amazon and other bookstores.

    Disclaimer: Advice is provided to the best of my knowledge, but no implicit or explicit warranties are given. Since the advisor explicitly encourages testing any and all suggestions in a non-production test environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.
