DBAsupport.com Forums - Powered by vBulletin
Results 1 to 10 of 38
  1. #1
    Join Date
    Jan 2002
    2 questions:

    1. Does statscre come with the Oracle software? Why does the documentation ask us to run @%ORACLE_HOME%\rdbms\admin\statscre if it doesn't even come with the software?

    2. I followed the documentation and it's not working

    Please advise

  2. #2
    Join Date
    Oct 2000
    Saskatoon, SK, Canada
    What is your db version? It could be spcreate.sql


    Life is a journey, not a destination!

  3. #3
    Join Date
    Nov 2000

    In 8.1.6, it was statscre.sql. In 8.1.7, it's spcreate.
    Jeff Hunter
    Get Firefox!
    "I pledge to stop eating sharks fin soup and will not do so under any circumstances."

  4. #4
    Join Date
    Jan 2002
    Thanks Jeff as always

    Thanks Sam, it's been a while since the last time you were here

    I found the SPCREATE scripts; however, I still have a problem.

    I connected as INTERNAL/MANAGER using SQL*Plus. The script ran and then automatically exited from SQL*Plus. I checked in the database and the user PERFSTAT is not there.

    Could you please tell me what happened?

    [Edited by ashley75 on 03-12-2002 at 04:14 PM]

  5. #5
    Join Date
    Jan 2002
    Jeff or Sam,

    Do you have any idea of what happened ?

  6. #6
    Join Date
    Nov 2000
    The statspack scripts are notorious for having a WHENEVER SQLERROR EXIT type clause in them which misbehaves. It probably tried to drop a table which wasn't there, saw the error, and exited without finishing the script. I would suggest you look at the spcreate script, understand what it is doing, and tie that back to the error you received.

  7. #7
    Join Date
    Oct 2000
    Saskatoon, SK, Canada

    1.1.  Database Space Requirements
      The amount of database space required by the package will vary considerably
      based on the frequency of snapshots, the size of the database and instance,
      and the amount of data collected (which is configurable).
      It is therefore difficult to provide general storage clauses and space
      utilization predictions which will be accurate at each site.
      Note:  The default initial and next extent size is 1MB or 5MB for all
             tables and indexes which contain changeable data.  The minimum
             default tablespace requirement is approximately 45MB.
      Dictionary Managed Tablespaces
      If you install the package in a dictionary-managed tablespace, Oracle
      suggests you monitor the space used by the objects created, and adjust
      the storage clauses of the segments, if required.
      Locally Managed Tablespaces
      If you install the package in a locally-managed tablespace, storage
      clauses are not required, as the storage characteristics are
      automatically managed.
    1.2 Installing the Tool
    Step 1.
        This step creates the PERFSTAT user, which will own all PL/SQL code and
        database objects created (including the STATSPACK tables, constraints
        and the STATSPACK package).
        During the installation you will be prompted for the PERFSTAT
        user's default and temporary tablespaces.
        The default tablespace will be used to create all Statspack
        objects (such as tables and indexes).  The temporary tablespace
        will be used for sort-type activities (for more information on
        temporary tablespaces, see the Oracle Concepts Documentation).
          o  Oracle does not recommend using the SYSTEM tablespace to store
             statistics data; a more appropriate tablespace is TOOLS.
             Similarly, do not use the SYSTEM tablespace as the Statspack
             user's TEMPORARY tablespace.
          o  During the installation, the dbms_shared_pool and dbms_job
             PL/SQL packages are created.  dbms_shared_pool is used to
             pin the Statspack package in the shared pool; dbms_job
             is created on the assumption the DBA will want to schedule
             periodic snapshots automatically using dbms_job.
        To install the package, either change directory to the ORACLE_HOME
        rdbms/admin directory, or fully specify the ORACLE_HOME/rdbms/admin
        directory when calling the installation script, spcreate.
        To run the installation script, you must use SQL*Plus and connect as
        a user with SYSDBA privilege.  Do not use Server Manager (svrmgrl)
        to install Statspack, as the installation will fail.
        e.g.  Start SQL*Plus, then:
          on Unix:
            SQL>  connect / as sysdba
            SQL>  @?/rdbms/admin/spcreate
          on NT:
            SQL>  connect / as sysdba
            SQL>  @%ORACLE_HOME%\rdbms\admin\spcreate
        The spcreate install script runs 3 other scripts - you do not need to
        run these - the scripts are called automatically:
          1.  spcusr  ->  creates the user and grants privileges
          2.  spctab  ->  creates the tables
          3.  spcpkg  ->  creates the package
        Check each of the three output files produced (spcusr.lis,
        spctab.lis, spcpkg.lis) by the installation to ensure no
        errors were encountered, before continuing on to the next step.
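    A quick way to scan the three .lis files is a small shell loop. This is only a sketch, not part of the Oracle documentation: it assumes the files sit in the current directory and that failures show up as ORA- codes or the word "error" in the logs.

    ```shell
    # check_lis: scan Statspack install logs for likely errors.
    # Assumption: problems appear as "ORA-" codes or the word "error".
    check_lis() {
      for f in "$@"; do
        if [ ! -f "$f" ]; then
          echo "$f: not found"
        elif grep -i -E 'ORA-|error' "$f" >/dev/null; then
          echo "$f: possible errors - review this file"
        else
          echo "$f: clean"
        fi
      done
    }

    check_lis spcusr.lis spctab.lis spcpkg.lis
    ```

    Any file flagged "possible errors" should still be read by hand; the pattern match is deliberately broad.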
        Note that there are two ways to install Statspack - interactively (as
        shown above), or in "batch" mode; batch mode is useful when you do
        not wish to be prompted for the PERFSTAT user's default and
        temporary tablespaces.
          Batch mode installation
          To install in batch mode, you must assign values to the SQL*Plus
          variables which specify the default and temporary tablespaces before
          running spcreate.
          The variables are:
            default_tablespace   -> for the default tablespace
            temporary_tablespace -> for the temporary tablespace
            on Unix:
              SQL>  connect / as sysdba
              SQL>  define default_tablespace='tools'
              SQL>  define temporary_tablespace='temp'
              SQL>  @?/rdbms/admin/spcreate
          spcreate will no longer prompt for the above information.
    Step 2.
        The setup phase is now complete.
        If you wish, you may change the password of the PERFSTAT user
        for security purposes.
    1.3 Errors during installation
        A common error made during Statspack installation is running the install
        script from Server Manager (svrmgrl) rather than from SQL*Plus.  If you
        use svrmgrl, the installation will fail.  To correctly install Statspack
        after such errors, first run the de-install script, then the install
        script.  Both scripts must be run from SQL*Plus.
        e.g.  Start SQL*Plus, connect as a user with SYSDBA privilege, then:
           SQL> @spdrop
           SQL> @spcreate
    I'd try running the scripts one by one, starting from spcreate.sql. I'd also agree with Jeff's point.



  8. #8
    Join Date
    Jan 2002
    In 8.1.6, we have statspack.sql for taking snapshots. What is the name of that file in 8.1.7?

    Can someone give me the documentation for statspack in 8.1.7, please? I only have the doc for 8.1.6 and it's so different compared with 8.1.7.


  9. #9
    Join Date
    Oct 2000
    Saskatoon, SK, Canada
    As many of you know, I'm an opponent of posting long text on the forums, but having no choice, I'm doing it now!

    2.   Gathering data - taking a snapshot
    The simplest interactive way to take a snapshot is to login to SQL*Plus
    as the PERFSTAT user, and execute the procedure statspack.snap:
          SQL>  connect perfstat/perfstat
          SQL>  execute statspack.snap;
      Note:  In an OPS environment, you must connect to the instance
             you wish to collect data for.
    This will store the current values for the performance statistics
    in the STATSPACK tables, and can be used as a baseline snapshot
    for comparison with another snapshot taken at a later time.
    For better performance analysis, set the init.ora parameter timed_statistics
    to true;  this way, Statspack data collected will include important timing
    information.  The timed_statistics parameter is also dynamically changeable
    using the 'alter system' command.  Timing data is important and is usually
    required by Oracle support to diagnose performance problems.
    2.1 Automating statistics gathering
      To be able to make comparisons of performance from one day, week or
      year to the next, there must be multiple snapshots taken over a period
      of time.
      The best method to gather snapshots is to automate the collection on
      a regular time interval.  It is possible to do this:
        - within the database, using the Oracle dbms_job procedure to
          schedule the snapshots
        - using Operating System utilities (such as 'cron' on Unix or 'at' on
          NT) to schedule the snapshot
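      The OS-utility route above can be sketched as a small wrapper script that cron calls. This is an illustration, not a supplied Statspack script: the ORACLE_HOME and ORACLE_SID values and the perfstat/perfstat credentials are placeholder assumptions to adjust for your site.

      ```shell
      # take_snap.sh - sketch of an OS-scheduled Statspack snapshot.
      # ORACLE_HOME, ORACLE_SID and the PERFSTAT password below are
      # placeholders (assumptions), not values from the documentation.
      ORACLE_HOME=${ORACLE_HOME:-/u01/app/oracle/product/8.1.7}
      ORACLE_SID=${ORACLE_SID:-prd1}
      export ORACLE_HOME ORACLE_SID

      # The SQL*Plus commands the job feeds in: take a snapshot, then exit.
      snap_sql() {
        echo "execute statspack.snap;"
        echo "exit"
      }

      if [ -x "$ORACLE_HOME/bin/sqlplus" ]; then
        snap_sql | "$ORACLE_HOME/bin/sqlplus" -s perfstat/perfstat
      else
        echo "sqlplus not found under $ORACLE_HOME - dry run; would execute:" >&2
        snap_sql
      fi
      ```

      A crontab entry such as `0 * * * * /u01/app/oracle/take_snap.sh` (hypothetical path) would then take a snapshot every hour, on the hour, much as spauto.sql does with dbms_job.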
    2.2.  Using dbms_job
      To use an Oracle-automated method for collecting statistics, you can use
      dbms_job.  A sample script on how to do this is supplied in spauto.sql,
      which schedules a snapshot every hour, on the hour.
      You may wish to schedule snapshots at regular times each day to reflect your
      system's OLTP and/or batch peak loads. For example take snapshots at 9am,
      10am, 11am, 12 midday and 6pm for the OLTP load, then a snapshot at
      12 midnight and  another at 6am for the batch window.
      In order to use dbms_job to schedule snapshots, the job_queue_processes
      initialization parameter must be set to greater than 0 in the init.ora
      file for the job to be run automatically.
      Example of an init.ora entry:
        #  Set to enable the job queue process to start.  This allows dbms_job
        #  to schedule automatic statistics collection using STATSPACK
        job_queue_processes=1
      If using spauto.sql in an OPS environment, the spauto.sql script must
      be run once on each instance in the cluster.  Similarly, the
      job_queue_processes parameter must also be set for each instance.
      Changing the interval of statistics collection
      To change the interval of statistics collection, use the
      dbms_job.interval procedure, e.g.:
        execute dbms_job.interval(1,'SYSDATE+(1/48)');
      Where 'SYSDATE+(1/48)' will result in the statistics being gathered
      every 1/48 of a day (i.e. every half hour).
      To force the job to run immediately,
        execute dbms_job.run(<job number>);
      To remove the autocollect job,
        execute dbms_job.remove(<job number>);
      For more information on dbms_job, see the Supplied Packages Reference
      manual.
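      Because SYSDATE arithmetic works in days, a fraction like 1/48 is a fraction of a day, not of an hour. A quick shell check of some common denominators (illustrative values only):

      ```shell
      # dbms_job interval expressions such as 'SYSDATE+(1/48)' add a fraction
      # of a day.  Convert common denominators to minutes (1440 min per day).
      for d in 24 48 144; do
        echo "SYSDATE+(1/$d) -> every $((1440 / d)) minutes"
      done
      ```

      So 1/24 is hourly, 1/48 half-hourly, and 1/144 every ten minutes.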
    3.   Running a Performance report
    Once snapshots are taken, it is possible to generate a performance report.
    The SQL script which generates the report prompts for the two snapshot id's
    to be processed.
    The first will be the beginning snapshot id, the second will be the
    ending snapshot id.  The report will then calculate and print ratios,
    increases etc. for all statistics between the two snapshot periods, in
    a similar way to the BSTAT/ESTAT report.
      Note:  It is not correct to specify begin and end snapshots where the
             begin snapshot and end snapshot were taken from different
             instance startups.  In other words, the instance must not have
             been shutdown between the times that the begin and end snapshots
             were taken.
             The reason for this requirement is the database's dynamic
             performance tables which Statspack queries to gather the data
             are memory resident, hence shutting down the database will
             reset the values in the performance tables to 0.  As Statspack
             subtracts the begin-snapshot statistics from the end-snapshot
             statistics, the resulting output will be invalid.
             If begin and end snapshots which were taken between shutdowns
             are specified in the report, the report shows an appropriate error
             to indicate this.
    Separating the phase of data gathering from producing a report allows the
    flexibility of basing a report on any data points selected.  For example,
    it may be reasonable for the DBA to use the supplied automation script to
    automate data collection every hour on the hour; if at some later point
    a performance issue arose which may be better investigated by looking
    at a three-hour data window rather than an hour's worth of data, the
    only thing the DBA need do is specify the required start point and end
    point when running the report.
    3.1  Running the report
      To examine the change in statistics between two time periods, the
      spreport.sql file is executed while being connected to the PERFSTAT
      user.  The spreport.sql command file is located in the rdbms/admin
      directory of the Oracle Home.
      Note:  In an OPS environment you must connect to the instance you
             wish to report on.
      You will be prompted for:
        1. The beginning snapshot Id
        2. The ending    snapshot Id
        3. The name of the report text file to be created
        e.g. on Unix
          SQL>  connect perfstat/perfstat
          SQL>  @?/rdbms/admin/spreport
        e.g. on NT
          SQL>  connect perfstat/perfstat
          SQL>  @%ORACLE_HOME%\rdbms\admin\spreport
        Example output:
        SQL>  connect perfstat/perfstat
        SQL>  @spreport
       DB Id    DB Name      Inst Num Instance
    ----------- ------------ -------- ------------
     2618106428 PRD1                1 prd1
    Completed Snapshots
                               Snap                    Snap
    Instance     DB Name         Id   Snap Started    Level Comment
    ------------ ------------ ----- ----------------- ----- ----------------------
    prd1         PRD1             1 11 May 2000 12:07     5
                                  2 11 May 2000 12:08     5
    Specify the Begin and End Snapshot Ids
    Enter value for begin_snap: 1
    Begin Snapshot Id specified: 1
    Enter value for end_snap:
    End   Snapshot Id specified: 2
    Specify the Report Name
     The default report file name is sp_1_2.  To use this name,
     press <return> to continue, otherwise enter an alternative.
    Enter value for report_name: 
    Using the report name sp_1_2
      The report will now scroll past, and also be written to the file
      specified (e.g. sp_1_2.lis).
          Batch mode report generation
          To run a report without being prompted, assign values to the
          SQL*Plus variables which specify the begin snap id, the end snap id
          and the report name before running spreport.
          The variables are:
            begin_snap   -> specifies the begin Snapshot Id
            end_snap     -> specifies the end   Snapshot Id
            report_name  -> specifies the Report output name
            on Unix:
              SQL>  connect perfstat/perfstat
              SQL>  define begin_snap=1
              SQL>  define end_snap=2
              SQL>  define report_name=batch_run
              SQL>  @?/rdbms/admin/spreport
          spreport will no longer prompt for the above information.
    3.2. Gathering Optimizer statistics on the PERFSTAT schema
      For best performance when running spreport, collect optimizer statistics
      for tables and indexes owned by the PERFSTAT user.  This should be done
      whenever there is a significant change in data volumes in PERFSTAT's tables.
      The easiest way to do this, is either to use dbms_utility, or dbms_stats,
      and specify the PERFSTAT user:
          execute dbms_utility.analyze_schema('PERFSTAT','COMPUTE');
          execute dbms_stats.gather_schema_stats('PERFSTAT');
    4.  Configuring the amount of data captured
    4.1. Snapshot Level
      It is possible to change the amount of information gathered by the package,
      by specifying a different snapshot 'level'.  In other words, the level
      chosen (or defaulted) will decide the amount of data collected.
      The higher the snapshot level, the more data is gathered.  The default
      level set by the installation is level 5.  The various levels are
      explained in detail in section 4.4 below.
    4.2. Snapshot SQL thresholds
      There are other parameters which can be configured in addition to the
      snapshot level.
      These parameters are used as thresholds when collecting data on SQL
      statements; data will be captured on any SQL statements that breach
      the specified thresholds.
      Snapshot level and threshold information used by the package is stored
      in the stats$statspack_parameter table.
    4.3. Changing the default values for Snapshot Level and SQL Thresholds
      If you wish to, you can change the default parameters used for taking
      snapshots, so that they are tailored to the instance's workload.
      You can do this either by:
      o  Taking a snapshot, and specifying the new defaults to be saved to the
         database (using statspack.snap, and using the i_modify_parameter
         input variable).
         SQL>  execute statspack.snap -
               (i_snap_level=>10, i_modify_parameter=>'true');
         Setting the i_modify_parameter value to true will save the new
         thresholds in the stats$statspack_parameter table; these thresholds
         will be used for all subsequent snapshots.
         If the i_modify_parameter was set to false or if it were omitted, the
         new parameter values would not be saved.  Only the snapshot taken at
         that point will use the specified values, any subsequent snapshots will
         use the preexisting values in the stats$statspack_parameter table.
         The full list of parameters which can be passed into snap
         are listed in 4.6. below
      o  Changing the defaults immediately without taking a snapshot, using the
         statspack.modify_statspack_parameter procedure.  For example to change
         the snapshot level to 10, and the SQL thresholds for buffer_gets and
         disk_reads, the following statement can be issued:
         SQL>  execute statspack.modify_statspack_parameter -
                (i_snap_level=>10, i_buffer_gets_th=>10000, i_disk_reads_th=>1000);
         This procedure changes the values permanently, but does not
         take a snapshot.
         The full list of parameters which can be passed into the
         modify_statspack_parameter procedure are the same as those for
         the snap procedure.  These are listed in 4.6. below.
    4.4 Snapshot Levels - details
     Levels  >= 0   General performance statistics
        Statistics gathered:
        This level and any level greater than 0 collects general
        performance statistics, such as: wait statistics, system events,
        system statistics, rollback segment data, row cache, SGA,
        background events, session events, lock statistics,
        buffer pool statistics, parent latch statistics.
     Levels  >= 5  Additional data:  SQL Statements
        This level includes all statistics gathered in the lower level(s),
        and additionally gathers the performance data on high resource
        usage SQL statements.
        In a level 5 snapshot, note that the time required for the snapshot
         to complete is dependent on the shared_pool_size and on the number of
        SQL statements in the shared pool at the time the snapshot is taken:
        the larger the shared pool, the longer the time taken to complete
        the snapshot.
        SQL 'Thresholds'
           The SQL statements gathered by Statspack are those which exceed any
           one of six predefined threshold parameters:
            - number of executions of the SQL statement            (default 100)
            - number of disk reads performed by the SQL statement  (default 1,000)
            - number of parse calls performed by the SQL statement (default 1,000)
            - number of buffer gets performed by the SQL statement (default 10,000)
            - size of sharable memory used by the SQL statement    (default 1m)
            - version count for the SQL statement                  (default 20)
           The values of each of these threshold parameters are used when
           deciding which SQL statements to collect - if a SQL statement's
           resource usage exceeds any one of the above threshold values, it
           is captured during the snapshot.
           The SQL threshold levels used are either those stored in the table
            stats$statspack_parameter, or those specified when
            the snapshot is taken.
     Levels  >= 10 Additional statistics:  Parent and Child latches
        This level includes all statistics gathered in the lower levels, and
        additionally gathers Parent and Child Latch information.  Data gathered
        at this level can sometimes cause the snapshot to take longer to complete,
        i.e. this level can be resource intensive, and should only be used
        when advised by Oracle personnel.
    4.5.  Specifying a Session Id
      If you would like to gather session statistics and wait events for a
      particular session (in addition to the instance statistics and wait events),
      it is possible to specify the session id in the call to Statspack.  The
      statistics gathered for the session will include session statistics,
      session events and lock activity.  The default behaviour is not to
      gather session level statistics.
        SQL>  execute statspack.snap(i_session_id=>3);
    4.6.  Input Parameters for the SNAP and MODIFY_STATSPACK_PARAMETERS procedures
       Parameters able to be passed in to the statspack.snap and
       statspack.modify_statspack_parameter procedures are as follows:
                        Range of      Default
    Parameter Name      Valid Values  Value    Meaning
    ------------------  ------------  -------  -----------------------------------
    i_snap_level        0, 5, 10      5        Snapshot Level
    i_ucomment          Text          Blank    Comment to be stored with Snapshot
    i_executions_th     Integer >=0   100      SQL Threshold: number of times
                                               the statement was executed
    i_disk_reads_th     Integer >=0   1,000    SQL Threshold: number of disk reads
                                               the statement made
    i_parse_calls_th    Integer >=0   1,000    SQL Threshold: number of parse
                                               calls the statement made
    i_buffer_gets_th    Integer >=0   10,000   SQL Threshold: number of buffer
                                               gets the statement made
     i_sharable_mem_th   Integer >=0   1048576  SQL Threshold: amount of sharable
                                                memory used by the statement
    i_version_count_th  Integer >=0   20       SQL Threshold: number of versions
                                               of a SQL statement
    i_session_id        Valid sid     0 (no    Session Id of the Oracle Session
                        from          session) to capture session granular
                        v$session              statistics for
    i_modify_parameter  True, False   False    Save the parameters specified for
                                               future snapshots?
    5.  Time Units used for Wait events
    Although Oracle records wait time in hundredths of a second (i.e.
    centiseconds), some timings in the report (especially IO times) are
    converted to milliseconds to allow easier comparison with Operating
    System monitoring utilities which often report timings in milliseconds.
    For clarity, the time units used are specified in the column headings of
    each timed column.  The convention used is:
          (cs) - a centisecond - which is 1/100th of a second
          (ms) - a millisecond - which is 1/1000th of a second
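    As a quick check of the conversion between the two units (the 250 cs figure below is just an illustrative value, not from the documentation):

    ```shell
    # Oracle records wait times in centiseconds (cs); OS monitoring tools
    # usually report milliseconds (ms).  1 cs = 10 ms: multiply by ten.
    cs=250            # example wait time in centiseconds (illustrative)
    ms=$((cs * 10))
    echo "$cs cs = $ms ms"
    ```

    So a 250 cs wait in a Statspack report corresponds to 2500 ms (2.5 s) in an OS tool.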
    6.  Event Timings
    If timings are available, the Statspack report will order wait events by time
    (in the Top-5 and background and foreground Wait Events sections).
    If timed_statistics is false for the instance, but a subset of users or
    programs has set timed_statistics to true dynamically, the Statspack report
    output may look inconsistent, where some events have timings (those which the
    individual programs/users waited for), and the remaining events do not.
    The Top-5 section will also look unusual in this situation.
    Optimally, timed_statistics should be set to true at the instance level for
    ease of diagnosing performance problems.
    7.  Managing and Sharing performance data
    7.1. Sharing data via export
      If you wish to share data with other sites (for example if Oracle
      Support requires the raw statistics), it is possible to export
      the PERFSTAT user.
      An export parameter file (spuexp.par) has been supplied for this
      purpose.  To use this file, supply the export command with the
      userid parameter, along with the export parameter file name.
        exp userid=perfstat/perfstat parfile=spuexp.par
      This will create a file called spuexp.dmp and the log file spuexp.log.
      If you wish to load the data into another database, use the import
      command.  For more information on using export and import, please
      see the Oracle Utilities manual.
    7.2. Purging/removing unnecessary data
      It is possible to purge unnecessary data from the PERFSTAT schema using
      sppurge.sql.   This script deletes snapshots which fall between
      the begin and end range of Snapshot Id's specified.
        It is recommended you export the schema as a backup before running this
        script, either using your own export parameters, or those provided in
        spuexp.par.
        Purging may require the use of a large rollback segment, as all data
        relating to each Snapshot Id to be purged will be deleted.
        To avoid rollback segment extension errors, explicitly use a large
        rollback segment.  This can be done by executing the 'set transaction
        use rollback segment..' command before running the sppurge.sql script
        (for more information on the set transaction command see the SQL reference
        manual).  Alternatively, to avoid rollback segment extension errors
        specify a smaller range of Snapshot Id's to purge.
      When sppurge is run, the instance currently connected to, along with
      the available snapshots are displayed.  The DBA is then prompted for the
      low Snap Id and high Snap Id.  All snapshots which fall within this
      range will be purged.
      e.g. Purging data - connect to PERFSTAT using SQL*Plus, then run the
           script - an example output appears below.
       SQL>  connect perfstat/perfstat
       SQL>  set transaction use rollback segment rbig;
       SQL>  @sppurge
       Database Instance currently connected to
          DB Id    DB Name    Inst Num Name
       ----------- ---------- -------- ----------
         720559826 PERF              1 perf
       Snapshots for this database instance
          Snap Id Level Snapshot Started      Host            Comment
       ---------- ----- --------------------- --------------- -------------------
                1     5  30 Feb 2000 10:00:01 perfhost
                2     5  30 Feb 2000 12:00:06 perfhost
                3     5  01 Mar 2000 02:00:01 perfhost
                4     5  01 Mar 2000 06:00:01 perfhost
       sppurge.sql deletes all snapshots ranging between the lower and
       upper bound Snapshot Id's specified, for the database instance
       connected to.
       You may wish to export this data before continuing.
       Specify the Lo Snap Id and Hi Snap Id range to purge
       Enter value for losnapid: 1
       Using 1 for lower bound.
       Enter value for hisnapid: 2
       Using 2 for upper bound.
       Deleting snapshots 1 - 2
       Purge of specified Snapshot range complete.  If you wish to ROLLBACK
       the purge, it is still possible to do so.  Exiting from SQL*Plus will
       automatically commit the purge.
       SQL> -- end of example output
          Batch mode purging
          To purge in batch mode, you must assign values to the SQL*Plus
          variables which specify the low and high snapshot Ids to purge.
          The variables are:
            losnapid   -> Begin Snapshot Id
            hisnapid   -> End Snapshot Id
              SQL>  connect perfstat/perfstat
              SQL>  define losnapid=1
              SQL>  define hisnapid=2
              SQL>  @sppurge
          sppurge will no longer prompt for the above information.
    7.3. Truncating all data
      If you wish to truncate all performance data indiscriminately, it is
      possible to do this using sptrunc.sql.  This script truncates all
      statistics data gathered.
      It is recommended you export the schema as a backup before running this
      script, either using your own export parameters, or those provided in
      spuexp.par.
      e.g. Truncating all data - connect to PERFSTAT using SQL*Plus, and run
           the script - an example is below
        SQL>  connect perfstat/perfstat
        SQL>  @sptrunc
        About to Truncate Statspack Tables
        Running sptrunc.sql removes ALL data from Statspack tables
        You may wish to export the data before continuing
        If you would like to continue, enter any string, followed by <return>
        Enter value for anystring:
        <string> entered - starting truncate operation
        Table truncated.
        Truncate operation complete
    8.  New and Changed Features
    New Statistics on the Summary page
      o  connections at the begin snapshot and connections at the end snapshot
      Load Profile
      o  executes per transaction and per second
      o  logons per transaction and per second
      Instance Efficiency
      o  % Non-Parse CPU: which is the parse time CPU / CPU used by this session
      o  Parse CPU to Parse Elapsd %: which is the parse time CPU / parse
         time elapsed
      o  Execute to Parse %:  The ratio of executions to parses
      Instance Efficiency - Shared Pool Statistics are shown for the begin and
      end snapshots.
      o  Memory Usage %:  The percentage of the shared pool which is used.
      o  % SQL with executions>1:  The percentage of reused SQL (i.e. the
         percentage of SQL statements with more than one execution).
      o  % Memory for SQL w/exec>1:  The percentage of memory used for SQL
         statements with more than one execution.
      This data is newly gathered by the 8.1.7 Statspack for level 5 snapshots
      and above, and so will not be evident if the report is run against older
      data captured using the 8.1.6 Statspack.
    Tablespace and File IO
      o  Tempfile statistics are now captured.  The statistics for tempfiles are
         shown in the same sections as statistics for datafiles and tablespaces.
      o  The tablespace and File IO reports have been modified to include reads/s
         and writes/s.
      o  The report has been modified to include parent and child latch
         sections, which only appears in the report when a level 10 snapshot
         is taken.
    New Scripts
      o  sppurge.sql - Purges a range of Snapshot Ids
      o  sptrunc.sql - Deletes all data
      o  spup816.sql - Upgrades an 8.1.6 Statspack to the 8.1.7 schema
    Batch Mode execution
      o  The installation, reporting and purge scripts (spcreate.sql, spreport.sql
         and sppurge.sql) have been modified so they can be run in batch mode, if
         the appropriate SQL*Plus variables are defined before the scripts are run.
      o  Two new SQL thresholds (and sections in the report) have been added:
         sharable_mem and version_count
      o  The SQL report, which was previously ordered by rows processed, is
         now ordered by executions.
      o  The full text of a SQL statement is now captured (previously only the
         first 1000 bytes of the text was captured); the text is captured
         only once.  Previously, Statspack gathered all SQL related
         information, including all of the SQL text, for each snapshot.  The
         new strategy results in less space usage.
      o  The first 5 lines of a SQL statement are shown in each SQL report
         (rather than the first line)
    File Rename
      o  The Statspack files have been renamed, with all files now beginning
         with the prefix sp.
         The new and old file names are given below.  For more information on
         the purpose of each file, please see the Supplied Scripts Overview
         section.
         New Name       Old Name
         ------------   -------------
         spdoc.txt      statspack.doc
         spcreate.sql   statscre.sql
         spreport.sql   statsrep.sql
         spauto.sql     statsauto.sql
         spuexp.par     statsuexp.par
         sppurge.sql    - new file -
         sptrunc.sql    - new file -
         spup816.sql    - new file -
         spdrop.sql     statsdrp.sql
         spcpkg.sql     statspack.sql
         spctab.sql     statsctab.sql
         spcusr.sql     statscusr.sql
         spdtab.sql     statsdtab.sql
         spdusr.sql     statsdusr.sql
      o  The default Statspack report output file name prefix has been modified
         to sp_ (was st_) to be consistent with the new script names.
    9.  Upgrading from earlier releases
    A script is provided which converts performance data in an existing 8.1.6
    Statspack schema to the 8.1.7 schema format.  Although data conversion
    is not a supported activity, this script has been provided as a
    convenient way of keeping previously captured Statspack data.  Due to
    the difference in schema layout, minor irregularities may appear in
    statistics captured before the conversion.
        There is no downgrade script as the Statspack shipped with 8.1.7 is
        fully compatible with Oracle release 8.1.6.
        If you would like the option of being able to regress to using
        the 8.1.6 Statspack, you must export the PERFSTAT schema before
        running the upgrade, as re-importing the exported schema is the only
        downgrade path.
    Please note that you can use the 8.1.7 Statspack in an 8.1.6 database.
    If you need to downgrade the database to 8.1.6 after an 8.1.7 upgrade,
    it is not necessary to downgrade to the 8.1.6 Statspack, as Statspack
    shipped with 8.1.7 is fully compatible with an 8.1.6 instance.  If you
    wish to continue using the Statspack shipped with 8.1.7, you should use
    the 8.1.7 Statspack scripts (e.g. spreport.sql) from the 8.1.7 executable.
    Before running the upgrade script, export the statspack schema, then disable
    any scripts which use Statspack, as these will interfere with the upgrade.
    For example, if you use a dbms_job to gather statistics, disable this job
    for the duration of the upgrade.
    If there is a large volume of data in the Statspack schema (i.e. a large
    number of snapshots with a large number of long SQL statements in
    stats$sql_summary), then to avoid a long upgrade time or an
    unsuccessful upgrade, ensure:
     - you specify a large rollback segment when prompted
     - you specify a large (e.g. 1048576) sort_area_size when prompted
     - there is enough free space in PERFSTAT's default tablespace
       To loosely estimate the required free space, run the following SQL
       statement while connected as PERFSTAT in SQL*Plus:
         select 1.3*sum(bytes)/1024/1024 est_space_mb
           from dba_segments
          where segment_name in ('STATS$SQL_SUMMARY','STATS$SQL_SUMMARY_PK');
       The est_space_mb column gives a rough estimate of the required
       free space, in megabytes.
       The larger the SQL statements in the sql_summary table, the more space will
       be released after the upgrade is complete.
    The upgrade script will prompt you for the rollback segment and the
    sort_area_size to use.
      e.g.  To upgrade, first disable any programs which use Statspack, then
            connect as a user with SYSDBA privilege:
        SQL>  connect / as sysdba
        SQL>  @spup816
    Once the upgrade script completes, check the log file (spup816.lis) for
    errors.  If no errors are evident, re-enable any Statspack data collection or
    reporting scripts which were previously disabled.
    10. OPS specific considerations
    The unique identifier for a database instance used by Statspack is the
    dbid and the instance_number.  When using OPS it is possible the
    instance_number may change between startups (either because the
    instance_number parameter is set in the init.ora file, or because the
    instances are started in a different order).
    In this case, as Statspack uses the instance_number and the dbid to
    identify the instance's snapshot preferences, it is important to note that
    this may inadvertently result in a different set of levels or thresholds
    being used when snapshotting an instance.
    There are three conditions which must be met for this to occur:
     - the instance numbers must have switched between startups
     - the DBA must have modified the default Statspack parameters used for
       at least one of the instances
     - the parameters used (e.g. thresholds and snapshot level) must not be
       the same on all instances
    Note that the only way the parameters will differ is if the parameters
    have been explicitly modified by the DBA after installation, either by
    saving the specified values or by using the modify_statspack_parameter
    procedure.
    It is easy to check whether any of the Statspack snapshot parameters are
    different for the instances by querying the STATS$STATSPACK_PARAMETER table.
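      For example, a sketch of such a check (column names shown are per the
      8.1.7 PERFSTAT schema):
        SQL>  select instance_number, snap_level, executions_th, buffer_gets_th
                from stats$statspack_parameter
               order by instance_number;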
      If you have changed the default Statspack parameters, you may wish to
      avoid this problem by hard-coding the instance_number in the init.ora
      parameter file for each of the instances in the OPS database.
      For recommendations and issues with setting the instance_number init.ora
      parameter, please see the Oracle Parallel Server documentation.
    11.  Conflicts and differences compared to UTLBSTAT/UTLESTAT
    11.1. Running BSTAT/ESTAT in conjunction with Statspack
    If you choose to run BSTAT/ESTAT in conjunction with Statspack, do not
    run both as the same user, as there is a table name conflict - the
    conflicting table is stats$waitstat.
    11.2. Differences between Statspack and BSTAT/ESTAT
    Statspack considers a transaction to either finish with a commit or a
    rollback, and so calculates the number of transactions thus:
      'user commits' + 'user rollbacks'
    BSTAT/ESTAT considers a transaction to complete with a commit only, and
    so assumes that transactions = 'user commits'.
    For this reason, comparing per transaction statistics between Statspack and
    BSTAT/ESTAT may result in significantly different per transaction ratios.
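    For example, the transaction count as Statspack computes it can be
    sketched from the cumulative v$sysstat counters (Statspack itself uses
    the delta between the two snapshots, not the cumulative values):
      SQL>  select sum(value) transactions
              from v$sysstat
             where name in ('user commits', 'user rollbacks');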
    12.  Removing the package
    To deinstall the package, connect as a user with SYSDBA privilege and run
    the following script from SQL*Plus:  spdrop
          SQL>  connect / as sysdba
          SQL>  @spdrop
    This script actually calls two other scripts:
          1.  spdtab  ->  Drops tables and public synonyms
          2.  spdusr  ->  Drops the user
    Check each of the two output files produced (spdtab.lis, spdusr.lis)
    to ensure the package was completely deinstalled.
    13.  Supplied Scripts Overview
    Installation
      Must be run as a user with SYSDBA privilege
            spcreate.sql    ->  Creates entire Statspack environment (calls
                                spcusr.sql, spctab.sql, spcpkg.sql)
            spdrop.sql      ->  Drops entire Statspack environment (calls
                                spdtab.sql, spdusr.sql)
      Are run as a user with SYSDBA privilege by the calling scripts (above)
            spdtab.sql      ->  Drops Statspack tables
            spdusr.sql      ->  Drops the Statspack user (PERFSTAT)
      Are run as PERFSTAT by the calling scripts (above)
            spcusr.sql      ->  Creates the Statspack user (PERFSTAT)
            spctab.sql      ->  Creates Statspack tables
            spcpkg.sql      ->  Creates the Statspack package
    Reporting and Automation
      Must be run as PERFSTAT
            spreport.sql    ->  Generates a Statspack report
            spauto.sql      ->  Automates Statspack statistics collection
                                (using dbms_job)
    Upgrading
      Must be run as PERFSTAT
           spup816.sql      ->  Converts data from the 8.1.6 schema to the
                                newer (8.1.7) schema.  Back up the existing
                                schema before running the upgrade.
    Performance Data Maintenance
      Must be run as PERFSTAT
            sppurge.sql     ->  Purges a limited range of Snapshot Ids for
                                a given database instance.
            sptrunc.sql     ->  Truncates all Performance data in Statspack tables
                                WARNING - Do not use unless you wish to remove
                                          all data in the schema you are using.
                                          You may choose to export the data
                                          as a backup before using this script.
            spuexp.par      ->  An export parameter file supplied for exporting
                                the whole PERFSTAT user.
    Documentation
      Should be read by the DBA running the scripts
            spdoc.txt       ->  This file contains instructions and
                                documentation on the STATSPACK package.
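    As a usage sketch tying these scripts together - take two snapshots as
    PERFSTAT, separated by a representative workload interval, then run the
    report against that pair of Snapshot Ids (the PERFSTAT password is a
    placeholder):
      SQL>  connect perfstat/perfstat_password
      SQL>  execute statspack.snap
      ( ... allow a workload interval to pass ... )
      SQL>  execute statspack.snap
      SQL>  @spreport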
    14.  Limitations and Modifications
    14.1.  Limitations
      Statspack does not work with releases earlier than 8.1.6, as the data
      collected includes data from views only available in 8.1.6.
      Storing data from multiple databases in one PERFSTAT user account is
      currently not supported.  These and other features may be provided in
      future releases.
      As this is the first production Statspack release, the schema may change;
      backward compatibility is not guaranteed.
    14.2.  Modifications
      All Statspack code is Oracle proprietary and must not be modified.  Any
      modifications made to Statspack software will render the code and
      data captured thereafter unsupported; unsupported changes may result in
      errors in data capture or reporting.  Instead, please request
      enhancements against Statspack.

    Life is a journey, not a destination!

  10. #10
    Join Date
    Jan 2002

    thanks so much for the doc, but where do you get it from ???
