
Thread: db file sequential read for table access

  1. #1
    Join Date
    Aug 2002
    Posts
    56

    db file sequential read for table access

    Hi,
    I had a user complain about a query that was hanging (they had to cancel it after 8 hours). The database is 10g R2 on a Solaris box with 32 GB of memory and 8 CPUs. The query uses one view, which I figured was taking up most of the time.

    Below is the query for the view:
    Code:
    SELECT   peop.person_id, ass.assignment_id, pee.element_entry_id,
                MAX (peop.full_name) AS full_name,
                MAX (DECODE (piv.NAME, 'Rank', peev.screen_entry_value, NULL)
                    ) AS RANK,
                MAX (DECODE (piv.NAME, 'Vessel', peev.screen_entry_value, NULL)
                    ) AS vessel,
                MAX (DECODE (piv.NAME, 'Days', peev.screen_entry_value, NULL)
                    ) AS days_worked,
                MAX (DECODE (piv.NAME,
                             'Date', TO_DATE (peev.screen_entry_value,
                                              'YYYY/MM/DD HH24:MI:SS'
                                             ),
                             NULL
                            )
                    ) AS date_worked
           FROM pay_element_entries_f pee,
                pay_element_entry_values_f peev,
                pay_input_values_f_tl piv,
                per_all_assignments_f ass,
                (SELECT *
                   FROM per_all_people_f
                  WHERE effective_end_date =
                                            TO_DATE ('31-DEC-4712', 'DD-MON-YYYY')) peop
          WHERE pee.element_entry_id = peev.element_entry_id
            AND peev.input_value_id = piv.input_value_id
            AND piv.input_value_id IN (3220, 3221, 3222, 3223)
         --  AND peev.screen_entry_value IS NOT NULL
            AND pee.assignment_id = ass.assignment_id
            AND ass.person_id = peop.person_id
            AND ass.effective_end_date =
                   (SELECT MAX (sass.effective_end_date)
                      FROM hr.per_all_assignments_f sass
                     WHERE sass.person_id = peop.person_id
                       AND sass.assignment_id = ass.assignment_id)
       GROUP BY peop.person_id, ass.assignment_id, pee.element_entry_id;
    And here is the explain plan:

    Code:
    --------------------------------------------------------------------------------------------------------
    | Id  | Operation                              | Name                          | Rows  | Bytes | Cost  |
    --------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT                       |                               |     1 |   212 |   713 |
    |   1 |  HASH GROUP BY                         |                               |     1 |   212 |   713 |
    |   2 |   VIEW                                 |                               |     1 |   212 |   713 |
    |   3 |    FILTER                              |                               |       |       |       |
    |   4 |     HASH GROUP BY                      |                               |     1 |   223 |   713 |
    |   5 |      TABLE ACCESS BY INDEX ROWID       | PAY_ELEMENT_ENTRIES_F         |     1 |    23 |     3 |
    |   6 |       NESTED LOOPS                     |                               |     1 |   223 |   712 |
    |   7 |        NESTED LOOPS                    |                               |     7 |  1400 |   691 |
    |   8 |         MERGE JOIN CARTESIAN           |                               |     1 |   172 |   618 |
    |   9 |          TABLE ACCESS BY INDEX ROWID   | PER_ALL_ASSIGNMENTS_F         |     1 |    30 |     1 |
    |  10 |           NESTED LOOPS                 |                               |     1 |   142 |   615 |
    |  11 |            NESTED LOOPS                |                               |   135 | 15120 |   479 |
    |  12 |             TABLE ACCESS FULL          | PER_ALL_PEOPLE_F              |    28 |  2352 |   346 |
    |  13 |             TABLE ACCESS BY INDEX ROWID| PER_ALL_ASSIGNMENTS_F         |     5 |   140 |     6 |
    |  14 |              INDEX RANGE SCAN          | PER_ASSIGNMENTS_F_N12         |     5 |       |     1 |
    |  15 |            INDEX RANGE SCAN            | PER_ASSIGNMENTS_F_N12         |     5 |       |     1 |
    |  16 |          BUFFER SORT                   |                               |     4 |   120 |   617 |
    |  17 |           INLIST ITERATOR              |                               |       |       |       |
    |  18 |            TABLE ACCESS BY INDEX ROWID | PAY_INPUT_VALUES_F_TL         |     4 |   120 |     4 |
    |  19 |             INDEX RANGE SCAN           | PAY_INPUT_VALUES_F_TL_PK      |     4 |       |     3 |
    |  20 |         TABLE ACCESS BY INDEX ROWID    | PAY_ELEMENT_ENTRY_VALUES_F    |    29 |   812 |    73 |
    |  21 |          INDEX RANGE SCAN              | PAY_ELEMENT_ENTRY_VALUES_F_N1 |    40 |       |    62 |
    |  22 |        INDEX RANGE SCAN                | PAY_ELEMENT_ENTRIES_F_PK      |     1 |       |     2 |
    --------------------------------------------------------------------------------------------------------
    I figured that creating an index on the PER_ALL_PEOPLE_F table would help. Here is the explain plan after the index creation:

    Code:
    --------------------------------------------------------------------------------------------------------
    | Id  | Operation                              | Name                          | Rows  | Bytes | Cost  |
    --------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT                       |                               |     1 |   212 |   377 |
    |   1 |  HASH GROUP BY                         |                               |     1 |   212 |   377 |
    |   2 |   VIEW                                 |                               |     1 |   212 |   377 |
    |   3 |    FILTER                              |                               |       |       |       |
    |   4 |     HASH GROUP BY                      |                               |     1 |   223 |   377 |
    |   5 |      TABLE ACCESS BY INDEX ROWID       | PAY_ELEMENT_ENTRIES_F         |     1 |    23 |     3 |
    |   6 |       NESTED LOOPS                     |                               |     1 |   223 |   376 |
    |   7 |        NESTED LOOPS                    |                               |     7 |  1400 |   355 |
    |   8 |         MERGE JOIN CARTESIAN           |                               |     1 |   172 |   282 |
    |   9 |          TABLE ACCESS BY INDEX ROWID   | PER_ALL_ASSIGNMENTS_F         |     1 |    30 |     1 |
    |  10 |           NESTED LOOPS                 |                               |     1 |   142 |   278 |
    |  11 |            NESTED LOOPS                |                               |   135 | 15120 |   142 |
    |  12 |             TABLE ACCESS BY INDEX ROWID| PER_ALL_PEOPLE_F              |    28 |  2352 |     9 |
    |  13 |              INDEX RANGE SCAN          | TEST                          |    28 |       |     1 |
    |  14 |             TABLE ACCESS BY INDEX ROWID| PER_ALL_ASSIGNMENTS_F         |     5 |   140 |     6 |
    |  15 |              INDEX RANGE SCAN          | PER_ASSIGNMENTS_F_N12         |     5 |       |     1 |
    |  16 |            INDEX RANGE SCAN            | PER_ASSIGNMENTS_F_N12         |     5 |       |     1 |
    |  17 |          BUFFER SORT                   |                               |     4 |   120 |   281 |
    |  18 |           INLIST ITERATOR              |                               |       |       |       |
    |  19 |            TABLE ACCESS BY INDEX ROWID | PAY_INPUT_VALUES_F_TL         |     4 |   120 |     4 |
    |  20 |             INDEX RANGE SCAN           | PAY_INPUT_VALUES_F_TL_PK      |     4 |       |     3 |
    |  21 |         TABLE ACCESS BY INDEX ROWID    | PAY_ELEMENT_ENTRY_VALUES_F    |    29 |   812 |    73 |
    |  22 |          INDEX RANGE SCAN              | PAY_ELEMENT_ENTRY_VALUES_F_N1 |    40 |       |    62 |
    |  23 |        INDEX RANGE SCAN                | PAY_ELEMENT_ENTRIES_F_PK      |     1 |       |     2 |
    --------------------------------------------------------------------------------------------------------
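    For reference, the TEST index shown in the plan is a single-column index on the date column the inline view filters on; the DDL below is inferred from the view's WHERE clause rather than copied from the actual definition:

    Code:
    -- sketch: exact index name/column inferred, not taken from the real DDL
    CREATE INDEX test
        ON hr.per_all_people_f (effective_end_date);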
    But there was something else I noticed when I ran the level 12 trace for the query. There were a lot of db file sequential read events:

    Event waited on              Times Waited  Max. Wait  Total Waited
    ---------------------------  ------------  ---------  ------------
    db file sequential read             90213       0.18         92.71


    On checking the raw trace file, the obj# on the wait lines pointed to the following objects, including tables:


    OWNER  OBJECT_ID  OBJECT_NAME                    OBJECT_TYPE
    -----  ---------  -----------------------------  -----------
    HR         42003  PAY_ELEMENT_ENTRIES_F          TABLE
    HR         42061  PAY_ELEMENT_ENTRIES_F_PK       INDEX
    HR         42063  PAY_ELEMENT_ENTRY_VALUES_F     TABLE
    HR         42081  PAY_ELEMENT_ENTRY_VALUES_F_N1  INDEX
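    The obj#-to-name mapping above came from a lookup along these lines against DBA_OBJECTS (a sketch; the object IDs are the obj# values reported on the wait lines in the trace):

    Code:
    SELECT owner, object_id, object_name, object_type
      FROM dba_objects
     WHERE object_id IN (42003, 42061, 42063, 42081);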


    Can someone explain why there is a db file sequential read event for table access, and what else can be done to improve the query performance?

    Any pointers are appreciated.

    Thanks
    Last edited by gandolf989; 12-12-2006 at 02:26 PM.
    When in doubt ...go to the basics!!

  2. #2
    Join Date
    May 2000
    Location
    ATLANTA, GA, USA
    Posts
    3,135
    When did you last analyze the tables and indexes?
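    You can check that with something like this (a sketch; adjust the owner and table list for your schema):

    Code:
    SELECT table_name, last_analyzed, num_rows
      FROM dba_tables
     WHERE owner = 'HR'
       AND table_name IN ('PAY_ELEMENT_ENTRIES_F', 'PAY_ELEMENT_ENTRY_VALUES_F',
                          'PER_ALL_ASSIGNMENTS_F', 'PER_ALL_PEOPLE_F');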

    db file sequential read is a single-block read; it typically means table rows are being retrieved by ROWID through an index lookup.
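    If you want to confirm which segments those single-block reads are hitting, a query along these lines against V$SEGMENT_STATISTICS is one way (just a sketch, assuming you can query the V$ views):

    Code:
    SELECT owner, object_name, object_type, value AS physical_reads
      FROM v$segment_statistics
     WHERE statistic_name = 'physical reads'
       AND owner = 'HR'
     ORDER BY value DESC;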

    I see many single-column indexes in the explain plan, which may not be good in this case. Also note the MERGE JOIN CARTESIAN.

    Try this:

    (SELECT ROWNUM rn, a.*
       FROM per_all_people_f a
      WHERE effective_end_date =
               TO_DATE ('31-DEC-4712', 'DD-MON-YYYY'))

    With the ROWNUM, Oracle will materialize the inline view and will probably come up with a different plan.

    Tamil
    www.beaconinfotechcorp.com

  3. #3
    Join Date
    Aug 2002
    Posts
    56
    I think Oracle goes for the MERGE JOIN CARTESIAN because it sees low cardinality, which is probably not the best option in this case. This is the explain plan when I implement the above suggestion; it seems to get worse:


    Code:
    -----------------------------------------------------------------------------------------------------
    | Id  | Operation                           | Name                          | Rows  | Bytes | Cost  |
    -----------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT                    |                               |     1 |   190 |  1374 |
    |   1 |  HASH GROUP BY                      |                               |     1 |   190 |  1374 |
    |   2 |   FILTER                            |                               |       |       |       |
    |   3 |    HASH JOIN                        |                               |     1 |   190 |  1368 |
    |   4 |     VIEW                            |                               |    28 |  3556 |   346 |
    |   5 |      COUNT                          |                               |       |       |       |
    |   6 |       TABLE ACCESS FULL             | PER_ALL_PEOPLE_F              |    28 |   896 |   346 |
    |   7 |     HASH JOIN                       |                               |   537 | 33831 |  1022 |
    |   8 |      TABLE ACCESS BY INDEX ROWID    | PAY_ELEMENT_ENTRIES_F         |     1 |    11 |     3 |
    |   9 |       NESTED LOOPS                  |                               |   114 |  5130 |   607 |
    |  10 |        NESTED LOOPS                 |                               |   114 |  3876 |   264 |
    |  11 |         INLIST ITERATOR             |                               |       |       |       |
    |  12 |          TABLE ACCESS BY INDEX ROWID| PAY_INPUT_VALUES_F_TL         |     4 |    72 |     5 |
    |  13 |           INDEX RANGE SCAN          | PAY_INPUT_VALUES_F_TL_PK      |     4 |       |     4 |
    |  14 |         TABLE ACCESS BY INDEX ROWID | PAY_ELEMENT_ENTRY_VALUES_F    |    29 |   464 |    73 |
    |  15 |          INDEX RANGE SCAN           | PAY_ELEMENT_ENTRY_VALUES_F_N1 |    40 |       |    62 |
    |  16 |        INDEX RANGE SCAN             | PAY_ELEMENT_ENTRIES_F_PK      |     1 |       |     2 |
    |  17 |      TABLE ACCESS FULL              | PER_ALL_ASSIGNMENTS_F         | 51713 |   909K|   414 |
    |  18 |    SORT AGGREGATE                   |                               |     1 |    18 |       |
    |  19 |     VIEW                            | index$_join$_007              |     1 |    18 |     5 |
    |  20 |      HASH JOIN                      |                               |       |       |       |
    |  21 |       INDEX RANGE SCAN              | PER_ASSIGNMENTS_F_PK          |     1 |    18 |     3 |
    |  22 |       INDEX RANGE SCAN              | PER_ASSIGNMENTS_F_N12         |     1 |    18 |     3 |
    -----------------------------------------------------------------------------------------------------
    Last edited by gandolf989; 12-12-2006 at 02:24 PM.
    When in doubt ...go to the basics!!

  4. #4
    Join Date
    Aug 2002
    Posts
    56
    Hi All,
    Please update. Any help will be appreciated.

    Thanks
    When in doubt ...go to the basics!!

  5. #5
    Join Date
    Oct 2006
    Location
    Mumbai
    Posts
    184
    Dear Bhatia,

    Try creating indexes on the following column:

    PER_ALL_ASSIGNMENTS_F

    Reply back with the explain plan and the performance.

  6. #6
    Join Date
    Feb 2003
    Location
    INDIA
    Posts
    96
    Hi,

    Analyze all the tables involved with DBMS_STATS, using CASCADE => TRUE so that all indexes on those tables are analyzed as well. The CBO will then take care of choosing the best execution path.
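    For example, per table, something along these lines (a sketch; owner, table name and options are only illustrative):

    Code:
    BEGIN
       -- CASCADE => TRUE gathers stats on the table's indexes as well
       DBMS_STATS.GATHER_TABLE_STATS (
          ownname          => 'HR',
          tabname          => 'PAY_ELEMENT_ENTRIES_F',
          estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
          method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
          cascade          => TRUE);
    END;
    /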

    Dilipkumar Patel.

  7. #7
    Join Date
    May 2000
    Location
    ATLANTA, GA, USA
    Posts
    3,135
    Quote Originally Posted by Dilippatel
    Hi,

    Analyze all the tables involved with DBMS_STATS, using CASCADE => TRUE so that all indexes on those tables are analyzed as well. The CBO will then take care of choosing the best execution path.

    Dilipkumar Patel.
    Best is not always "optimal".

    Tamil

  8. #8
    Join Date
    Aug 2002
    Posts
    56
    Paresh,
    You have not mentioned the column name in your post.

    Dilip,
    This is a production system and the stats are gathered on a very regular basis through the FND_STATS concurrent program. To give you an idea, look at the output below.

    OWNER  TABLE_NAME              LAST_ANALYZED    NUM_ROWS  AVG_ROW_LEN
    -----  ----------------------  -------------  ----------  -----------
    HR     PAY_ELEMENT_ENTRIES_F   16-DEC-06         3216263          102
    HR     PAY_INPUT_VALUES_F_TL   16-DEC-06           10675           52
    HR     PER_ALL_ASSIGNMENTS_F   16-DEC-06           52227          158
    HR     PER_ALL_PEOPLE_F        16-DEC-06           27313          316
    When in doubt ...go to the basics!!

  9. #9
    Join Date
    Mar 2000
    Location
    Atlanta, GA,USA
    Posts
    155
    Show us the result of the following query:
    SELECT count(*)
    FROM per_all_people_f
    WHERE effective_end_date =
    TO_DATE ('31-DEC-4712', 'DD-MON-YYYY');

    Is this an OLTP or a DSS database?

  10. #10
    Join Date
    Aug 2002
    Posts
    56
    This is an OLTP system using Oracle Applications 11i
    SQL> SELECT count(*)
           FROM apps.per_all_people_f
          WHERE effective_end_date =
                   TO_DATE ('31-DEC-4712', 'DD-MON-YYYY');

      COUNT(*)
    ----------
         18020
    When in doubt ...go to the basics!!
