DBAsupport.com Forums - Powered by vBulletin

Thread: sql stats

  1. #1
    Join Date
    Jun 2001
    Charlotte, NC
Is there any combination of columns from the V$SQL or V$SQLAREA views that can give you an idea of whether SQL is performing badly, or must you use SQL_TRACE and then run TKPROF to get an accurate picture of SQL that consumes a high amount of resources?

  2. #2
    Join Date
    May 2000
    fremont ca
I also have a 24/7 database and want to find out which SQL was running and performing badly, and at what time.

  3. #3
    Join Date
    Aug 2001
    Waterloo, On
You can get general statistics about your database performance from V$SYSSTAT, waits from V$SYSTEM_EVENT, latch contention, etc., but if you want to investigate the performance of specific SQL statements, you have to use trace and timed statistics.
Session-level statistics can be gathered from V$SESSTAT, V$SESSION_EVENT, etc., and from them you can find out which sessions are performing poorly. Once you know that, you can set trace on that specific session to further pinpoint faulty SQL, if any, instead of tracing the entire database.
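A rough sketch of that approach (column names per the Oracle 8i/9i data dictionary; the 100,000-get threshold is just an arbitrary example cutoff, and <sid> / <serial#> are placeholders you fill in from the first query):

```sql
-- Find connected sessions doing the most logical I/O
select s.sid, s.serial#, s.username, st.value buffer_gets
from v$session s, v$sesstat st, v$statname n
where st.sid = s.sid
and st.statistic# = n.statistic#
and n.name = 'session logical reads'
and s.username is not null
order by st.value desc;

-- Then switch on SQL trace for just that one session
-- (requires execute privilege on DBMS_SYSTEM):
exec dbms_system.set_sql_trace_in_session(<sid>, <serial#>, true);
```

Once the trace file is cut, run TKPROF on it as usual, and remember to switch the trace back off with a final call passing false.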

    Raminder Singh

    Oracle Certified DBA: Oracle 8i, 9i

    Mail me at raminderahluwalia@rediffmail.com.

  4. #4
    Join Date
    Jun 2001
    Helsinki. Finland
    Originally posted by Kumud
    I also have a 24/7 database and want to find out which SQL was running and performing badly, and at what time.

    Here you are:

    select b.username,
           a.buffer_gets reads,
           a.executions exec,
           a.buffer_gets / decode(a.executions, 0, 1, a.executions) rds_exec_ratio,
           a.command_type,
           a.sql_text statement
      from v$sqlarea a, dba_users b
     where a.parsing_user_id = b.user_id
       and a.buffer_gets > 100000
     order by a.buffer_gets desc;

  5. #5
    Join Date
    Aug 2001
You could also run Statspack with an appropriate sample period, e.g. hourly.

You could also write a little scheduled sampler and run the SQL from Julian once per hour or half hour.

    Both can be controlled from within the database via Job scheduling.
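For example, scheduling an hourly Statspack snapshot with DBMS_JOB looks roughly like this (a sketch of the same pattern the spauto.sql script supplied with Statspack uses; run it as the PERFSTAT owner):

```sql
variable jobno number;
begin
  -- Take a statspack snapshot every hour, on the hour
  dbms_job.submit(:jobno,
                  'statspack.snap;',
                  sysdate,
                  'trunc(sysdate + 1/24, ''HH'')');
  commit;
end;
/
```

A scheduled sampler running Julian's V$SQLAREA query could be submitted the same way, with the query wrapped in a procedure that inserts its results into a history table.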

    Statspack is probably the way to go, as over time (a couple of weeks) you will have extensive historical data to query. Dump the data (as CSV) into Excel or similar and bingo! You can visually see the bad times, the low times and the in-between times.
    You will also have some historical info on the worst waits (which are the most important thing after bad SQL).

    Have Fun

    Performance... Push the envelope!
