DBAsupport.com Forums

Thread: Db_file_multiblock_read_count

  1. #1
    Join Date
    Nov 2001
    Location
    Sheffield, England, UK
    Posts
    78

    Db_file_multiblock_read_count

    Just curious on this one:

    I have an application which is routinely choosing full table scans over an index access path that is (by my research) around 100-fold more efficient.

    I have investigated index caching levels, and setting optimizer_index_caching to 20 (a rough estimate of the percentage of index blocks cached at any point in time across the whole application) does not change the choice of path.

    db_file_multiblock_read_count is set at 16, so the maximum multiblock read is 16 blocks, and looking in trace files that maximum IS reached. However... it is reached very, very rarely. The 'average' multiblock read would be nearer 10.

    If I set db_file_multiblock_read_count at the session level to be just '1' less than the current 16, indexes come into more widespread use for almost everything.

    So, what would anyone else do in my position?

    [ I'm basically being hurried to make amendments to settings to try and bring indexes into use - without reverting to HINTS and RULE mode operations. The business is howling for immediate improvement of some processes. ]

    My gut feeling is to reduce multiblock read to something less than the current 16 and uplift optimizer_index_caching from the current default (0) to 20.
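    Roughly what I have in mind is something like the following (a sketch only; I haven't settled on these values, and it assumes we are running from an spfile):

    Code:
    -- illustrative values only, not a recommendation
    ALTER SYSTEM SET db_file_multiblock_read_count = 8 SCOPE=BOTH;
    ALTER SYSTEM SET optimizer_index_caching = 20 SCOPE=BOTH;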

    Does anyone have an opinion on bringing "optimizer_index_cost_adj" into play (it is currently set to the default of 100)?

    I thought I had another point to make from my findings, but I can't remember it now; since I'd rather have feedback sooner than later, I'll end here and fire this off for comments.

    T.

  2. #2
    Join Date
    Sep 2002
    Location
    England
    Posts
    7,334
    have you generated some decent stats on the table / index?
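    Something along these lines, for instance (owner and table names below are just placeholders):

    Code:
    -- placeholder owner/table; computes stats with histograms on indexed columns
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'APP_OWNER',
        tabname          => 'T2',
        cascade          => TRUE,   -- gather the index stats as well
        method_opt       => 'FOR ALL INDEXED COLUMNS SIZE 254',
        estimate_percent => NULL);  -- NULL = compute (100 per cent sample)
    END;
    /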

  3. #3
    Join Date
    Nov 2001
    Location
    Sheffield, England, UK
    Posts
    78
    Stats are on.

    Computed, with histograms for indexed columns. Indexes also coalesced - and, where significant free space had been generated, rebuilt.

    The thing I was going to mention earlier (but forgot): most SELECTs are done via views that have IN-lists within the code. This is an outside vendor's code, which we are stuck with for the present.

    Consequently, I was working on the assumption that we might alleviate the scattered reads by increasing index caching.

    I have taken a look at the table blocks in the buffer cache that were read in as part of a full table scan. For the tables concerned I have traced full table scans, and the (average) multiblock read was 4, 7 and 10 blocks respectively. Only the smallest of tables could actually achieve 16-block reads - presumably because they are available contiguously on disk.
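    One other thing I'm looking at (a sketch only; owner and segment names are placeholders) is the spread of extent sizes, since as I understand it a multiblock read won't span an extent boundary, and blocks already in the buffer cache also shorten the reads:

    Code:
    -- placeholder owner/segment; shows how many extents there are and their sizes in blocks
    SELECT segment_name,
           COUNT(*)    AS extents,
           MIN(blocks) AS min_blocks,
           MAX(blocks) AS max_blocks
    FROM   dba_extents
    WHERE  owner = 'APP_OWNER'
    AND    segment_name = 'T2'
    GROUP  BY segment_name;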

    T.

  4. #4
    Join Date
    May 2000
    Location
    ATLANTA, GA, USA
    Posts
    3,135
    You can experiment with different values for the db_file_multiblock_read_count parameter and choose the optimum value.

    Below is one test:

    Code:
    SQL> col plan_plus_exp format a99
    SQL> set lines 100 pages 55 wrap off
    SQL>  alter session set optimizer_mode = choose;
    Session altered.
    SQL> alter session set db_file_multiblock_read_count= 8 ;
    Session altered.
    
    SQL> alter session set optimizer_index_caching= 50   ;
    Session altered.
    
    SQL> alter session set optimizer_index_cost_adj = 50 ;
    Session altered.
    
    SQL> set autot trace expla
    SQL>  select * from t2
      2   where  object_id between 10  and  109;
    
    Execution Plan
    ---------------------------------------------------------------------------- 
    rows will be truncated
       0 SELECT STATEMENT Optimizer=CHOOSE (Cost=97 Card=192,bytes=18432)                          
       1    0   TABLE ACCESS (BY INDEX ROWID) OF 'T2' (Cost=97 Card=192,Bytes=18432)                    
       2    1     INDEX (RANGE SCAN) OF 'T2_INDEX' (NON-UNIQUE) (Cost=2,Card=192)     
    
    SQL> set autot off
    SQL> alter session set db_file_multiblock_read_count= 16 ;
    
    Session altered.
    
    SQL> set autot trace expla
    SQL> select * from t2
      2  where  object_id between 10  and  109;
    
    Execution Plan
    ----------------------------------------------------------
    rows will be truncated
    
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=63 Card=192,Bytes=18432)
       1    0   TABLE ACCESS (FULL) OF 'T2' (Cost=63 Card=192,Bytes=18432) 
    
    SQL> spool off

    Also, choose the correct values for optimizer_index_caching
    and optimizer_index_cost_adj.

    Tamil
    Last edited by tamilselvan; 02-04-2005 at 11:45 AM.

  5. #5
    Join Date
    Nov 2000
    Location
    greenwich.ct.us
    Posts
    9,092

    Re: Db_file_multiblock_read_count

    Originally posted by AJW_ID01

    So, what would anyone else do in my position?


    Does anyone have an opinion on bringing "optimizer_index_cost_adj" into play (it is currently set to the default of 100)?

    If I were in your position, I would run a quick test at the session level with optimizer_index_cost_adj around 25. I'd run some of the queries that should obviously use the index and see if that helped (make sure they are reparsed). I suspect you will get more bang for the buck with this setting.

    If not, decrease the dbfmbrc to 8. Personally, I like knowing that IF a FTS has to happen, it will happen 16 (or even 32) blocks at a time.
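    For example, something like this at the session level (a rough sketch; the query is just a stand-in for one of your problem statements):

    Code:
    -- session-level test only; nothing changes for other sessions
    alter session set optimizer_index_cost_adj = 25;

    explain plan for
      select * from t2 where object_id between 10 and 109;  -- stand-in query

    select * from table(dbms_xplan.display);  -- 9iR2+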

    Mileage may vary.
    Jeff Hunter

  6. #6
    Join Date
    Nov 2001
    Location
    Sheffield, England, UK
    Posts
    78
    Hmmm,

    Is there some kind of logical/systematic approach we can bring to the usage of optimizer_index_cost_adj?

    I was pretty much thinking that it's a blatant "user_interventionist" approach to forcing index use, one that could potentially steer us away from full table scans altogether by forcing the optimizer away from them.

    I've set it to a very marginal '90' at present, so at least the optimizer is getting a nudge in the right direction for index usage. I'm curious to know how everyone else approaches this parameter though. In my case, the slight intervention PLUS the downgrading of dbfmbrc appears to turn the tide against FTS, but I'm unsure as to whether to take any further steps at this time.

    [I have just bounced the instance during the agreed Sunday morning outage, so I'll get to see overnight tonight whether things are any better, and can re-review performance this week.]

    Is 25 a generically prudent choice?

    If so, why on earth does it default to 100?

    - T.

  7. #7
    Join Date
    Aug 2002
    Location
    Colorado Springs
    Posts
    5,253
    Originally posted by AJW_ID01
    Is 25 a generically prudent choice?

    If so, why on earth does it default to 100?
    Yes, and "I have no idea".

    Actually in 9i there is functionality for collecting system (i.e. hardware) statistics that supersedes this parameter (if they have been collected then this parameter is ignored). It's accessed by running the DBMS_STATS.GATHER_SYSTEM_STATS procedure, which is documented in the Supplied Packages book at http://tahiti.oracle.com
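    A rough sketch of gathering them over a representative workload window (assuming 9i and suitable privileges):

    Code:
    -- start capturing workload statistics, let a typical workload run, then stop
    BEGIN
      DBMS_STATS.GATHER_SYSTEM_STATS(gathering_mode => 'START');
    END;
    /
    -- ... wait for a representative period of activity ...
    BEGIN
      DBMS_STATS.GATHER_SYSTEM_STATS(gathering_mode => 'STOP');
    END;
    /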

    All the same it's good to have a fallback value for that parameter, and 25 is pretty reasonable.
    David Aldridge,
    "The Oracle Sponge"

    Senior Manager, Business Intelligence Development
    XM Satellite Radio
    Washington, DC

    Oracle ACE

  8. #8
    Join Date
    Nov 2000
    Location
    greenwich.ct.us
    Posts
    9,092
    Originally posted by AJW_ID01

    Is there some kind of logical/systematic approach we can bring to the usage of optimizer_index_cost_adj?

    You have to know your data and how it is used. This parameter is well documented on tahiti.oracle.com and asktom.oracle.com.
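    For what it's worth, one rule of thumb that gets discussed is to base it on the ratio of single-block to multiblock read times seen on your own system (a sketch; assumes timed_statistics is on):

    Code:
    -- average wait times (centiseconds) since instance startup
    select event, average_wait
    from   v$system_event
    where  event in ('db file sequential read',  -- single-block (index) reads
                     'db file scattered read');  -- multiblock (full scan) reads
    -- starting point: optimizer_index_cost_adj ~= 100 * (sequential avg / scattered avg)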

    I was pretty much thinking that it's a blatant "user_interventionist" approach to forcing index use, one that could potentially steer us away from full table scans altogether by forcing the optimizer away from them.

    Force is such a harsh word. You're just giving the optimizer more information about your data which it can use to adjust its choices.


    I've set it to a very marginal '90' at present, so at least the optimizer is getting a nudge in the right direction for index usage. I'm curious to know how everyone else approaches this parameter though. In my case, the slight intervention PLUS the downgrading of dbfmbrc appears to turn the tide against FTS, but I'm unsure as to whether to take any further steps at this time.

    Not sure I would have downgraded dbfmbrc...



    Is 25 a generically prudent choice?

    For an OLTP system, yes. For a DW, probably not.


    If so, why on earth does it default to 100?
    Because the default settings are usually set for a "general purpose" database.
    Jeff Hunter

  9. #9
    Join Date
    May 2000
    Location
    ATLANTA, GA, USA
    Posts
    3,135
    ========
    Stats are on.

    Computed, with histograms for indexed columns. Indexes also coalesced - and, where significant free space had been generated, rebuilt.

    The thing I was going to mention earlier (but forgot): most SELECTs are done via views that have IN-lists within the code. This is an outside vendor's code, which we are stuck with for the present.

    =======

    Use a 10053 event to trace the SQL that has the IN-lists. You will see why the optimizer chooses the FTS. Or post the trace here and we will examine it.
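    Roughly like this (the level and trace file location depend on your setup):

    Code:
    -- turn on the CBO trace for this session, run (hard parse) the problem SQL, then turn it off
    alter session set events '10053 trace name context forever, level 1';

    -- run or re-parse the problem statement here

    alter session set events '10053 trace name context off';
    -- the trace file is written to user_dump_dest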

    Tamil
