
Thread: Advice on Massive batch job

  1. #1
    Join Date
    Nov 2000
    Posts
    416
    In our production database we have a massive job (consisting of a couple of stored procedures with lots of cursors and DML applied to tables in the millions-of-rows range) which must be run every month.

    Tomorrow is the day we run it! As the DBA, I was asked to monitor the process, give some recommendations to improve the processing time, and report bottlenecks. The problem is I don't have a very good idea of what I have to do!

    For this type of assignment, can you explain what you would usually monitor and think about?

    Here is a list of some of the things I think are important to look at during this process:

    - Review the stored procedures to see how frequently commits happen, run EXPLAIN PLAN on the cursors, and evaluate index utilization and full table scans (FTS)

    - Look at v$sql to see which SQL statements run most of the time during the process and focus on them (see the query sketch below)

    - Look at all the hit ratios (maybe focusing more on the sort and DB buffer cache ratios)

    - Look at the machine (we have Oracle 8.0.5 running on NT 4.0 SP5) and see how the hard drives and CPU are doing

    Please verify whether or not these are the important things, tell me what else I am missing, and, bottom line, which actions return the most gain.
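
    For the v$sql point above, a minimal sketch of the kind of query you could rerun periodically during the batch window (ordering by buffer gets is an assumption about what "runs most of the time" should mean, since this release has no elapsed-time columns):

        -- Statements ranked by logical I/O; rerun every few minutes
        -- during the job and watch which statements climb the list.
        SELECT sql_text, executions, buffer_gets, disk_reads
          FROM v$sql
         ORDER BY buffer_gets DESC;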


    An ounce of prevention is worth a pound of cure

  2. #2
    Join Date
    Aug 2001
    Location
    chennai,bangalore
    Posts
    840
    Since you're using 8.0.5, make sure you have a large rollback segment. This can be done before you run your batch job. Use SET TRANSACTION USE ROLLBACK SEGMENT RBS1 (see the sketch below).
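
    A minimal sketch of that approach (the segment name, tablespace, and storage sizes are placeholders, not values from this thread):

        -- Create one large rollback segment ahead of the run:
        CREATE ROLLBACK SEGMENT rbs1
          TABLESPACE rbs
          STORAGE (INITIAL 100M NEXT 100M MINEXTENTS 2);
        ALTER ROLLBACK SEGMENT rbs1 ONLINE;

        -- In the batch session, bind the transaction to it. Note that
        -- SET TRANSACTION must be the first statement of a transaction,
        -- so it has to be reissued after every COMMIT.
        SET TRANSACTION USE ROLLBACK SEGMENT rbs1;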

    The points you listed should also be checked.
    anandkl

  3. #3
    Join Date
    Nov 2000
    Location
    greenwich.ct.us
    Posts
    9,092
    Since you're on 8.0.5, I would run utlbstat before the job runs and utlestat after it runs. This should help you gauge the health of the instance.
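
    A sketch of that bstat/estat run from SQL*Plus, connected as a DBA user (the ? shorthand expands to ORACLE_HOME; make sure timed_statistics is enabled in init.ora first so the report includes timings):

        -- Snapshot the instance statistics just before the job starts:
        @?/rdbms/admin/utlbstat

        -- ... the batch job runs ...

        -- Snapshot again and produce the delta report (written to
        -- report.txt in the current directory):
        @?/rdbms/admin/utlestat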

    In addition, just after the job starts, I would start a trace on it so I could spend most of my time analyzing the queries that take the most time to execute.
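
    A sketch of tracing the batch session from another session (DBMS_SYSTEM ships with 8.0 but is undocumented; the SID and SERIAL# values are placeholders you would look up first):

        -- Find the batch session:
        SELECT sid, serial#, username, program FROM v$session;

        -- Turn SQL trace on for it (42/1234 are placeholders):
        EXECUTE sys.dbms_system.set_sql_trace_in_session(42, 1234, TRUE);

        -- The trace file lands in USER_DUMP_DEST; format it with tkprof,
        -- sorted so the most expensive statements come first:
        --   tkprof <tracefile>.trc report.txt sort=exeela,fchela
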
    Jeff Hunter

  4. #4
    Join Date
    Mar 2002
    Location
    Mesa, Arizona
    Posts
    1,204
    Looks like you've got the obvious...

    Increase your sort_area_size temporarily.
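
    For example (the 64 MB figure is arbitrary, for illustration only):

        -- Raise the sort area for the batch session only (value in bytes):
        ALTER SESSION SET sort_area_size = 67108864;
        ALTER SESSION SET sort_area_retained_size = 67108864;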

    Here's a long shot... are you creating datafiles or autogrowing them in 8.0.x? Those operations took forever with the UTF8 character set.

    If you're spitting the results out to a log file, you might want to watch the file on your screen with a cool little utility for Windows called tail (http://rhazes.com). It works like "tail -f" in Unix: as lines are added to the output file, they're also displayed on your console.

    Best of Luck,

    -Ken
    "I do not fear computers. I fear the lack of them." Isaac Asimov
    Oracle Scripts DBAs need

  5. #5
    Join Date
    Mar 2002
    Location
    Mesa, Arizona
    Posts
    1,204
    Can you anticipate any auto-extending and allocate the growth before you kick off the process?
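
    A sketch of pre-allocating that growth (the datafile path, table name, and sizes are placeholders):

        -- Grow the datafile up front instead of letting it autoextend mid-job:
        ALTER DATABASE DATAFILE 'D:\ORADATA\PROD\DATA01.DBF' RESIZE 2000M;

        -- Pre-allocate space to a heavily inserted table:
        ALTER TABLE big_table ALLOCATE EXTENT (SIZE 100M);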

    Maybe that'll help.
    "I do not fear computers. I fear the lack of them." Isaac Asimov
    Oracle Scripts DBAs need

  6. #6
    Join Date
    Nov 2000
    Location
    greenwich.ct.us
    Posts
    9,092
    If this were me, I would concentrate my efforts on the queries involved.
    Jeff Hunter

  7. #7
    Join Date
    Sep 2000
    Posts
    384
    If possible, run the database in NOARCHIVELOG mode.
    Ensure that you do not see the message "checkpoint not complete" in the alert log.
    Add two more redo log files to the database (see the sketch below).
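
    A sketch of both changes, run from Server Manager (svrmgrl) on this release; group numbers, file paths, and sizes are placeholders:

        -- Switch to NOARCHIVELOG (needs a clean shutdown and mount):
        SHUTDOWN IMMEDIATE
        STARTUP MOUNT
        ALTER DATABASE NOARCHIVELOG;
        ALTER DATABASE OPEN;

        -- Add redo log groups; extra groups give checkpoints time to
        -- complete before a log file must be reused, which is exactly
        -- what "checkpoint not complete" complains about:
        ALTER DATABASE ADD LOGFILE GROUP 4 ('D:\ORADATA\PROD\REDO04.LOG') SIZE 200M;
        ALTER DATABASE ADD LOGFILE GROUP 5 ('D:\ORADATA\PROD\REDO05.LOG') SIZE 200M;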

    Radhakrishnan.M

  8. #8
    Join Date
    May 2000
    Location
    fremont ca
    Posts
    182
    Create a job_log table, something like:

    job_seq_no      - sequence number
    job_ts          - SYSDATE
    job_name        - procedure name
    job_start_time
    job_end_time
    tot_rec_process - number of records processed

    Then update the stored procedure: at the start, insert the job start time and procedure name into job_log, and when the job ends, update that row with the end time and the number of records processed. This will give you a fair idea of how much time it takes for how many records, and then you can work on improving performance.

    You can also create an error_log table, capture all errors in it, and let the job continue; this is also done within the stored procedure. It should also send an email notification when an error occurs.
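
    A minimal sketch of this logging scheme in PL/SQL (the columns follow the post; the sequence name and the procedure body are illustrative):

        CREATE SEQUENCE job_log_seq;

        CREATE TABLE job_log (
          job_seq_no      NUMBER,
          job_ts          DATE,
          job_name        VARCHAR2(30),
          job_start_time  DATE,
          job_end_time    DATE,
          tot_rec_process NUMBER
        );

        CREATE OR REPLACE PROCEDURE monthly_batch IS
          v_log_id NUMBER;
          v_rows   NUMBER := 0;
        BEGIN
          -- Record the start of the run.
          SELECT job_log_seq.NEXTVAL INTO v_log_id FROM dual;
          INSERT INTO job_log (job_seq_no, job_ts, job_name, job_start_time)
          VALUES (v_log_id, SYSDATE, 'MONTHLY_BATCH', SYSDATE);
          COMMIT;

          -- ... the real batch work goes here, accumulating v_rows ...

          -- Record the end of the run.
          UPDATE job_log
             SET job_end_time = SYSDATE,
                 tot_rec_process = v_rows
           WHERE job_seq_no = v_log_id;
          COMMIT;
        END;
        /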

  9. #9
    Join Date
    May 2000
    Location
    ATLANTA, GA, USA
    Posts
    3,135
    You don't have much time to tune the batch job, so I will write about that in detail another time.

    However, you can follow certain important points:
    1. Maintain two init.ora files: one for OLTP and another for batch job processing.

    2. Increase SORT_AREA_SIZE to 128 MB.

    3. Create the temporary segments per the formula given below:
    INITIAL = NEXT = (N * SORT_AREA_SIZE) + DB_BLOCK_SIZE
    Set PCTINCREASE to 0.
    Set the temp tablespace to NOLOGGING.

    4. Create two big rollback segments with huge INITIAL and NEXT sizes; take all other rollback segments offline.

    5. If possible, run the database in NOARCHIVELOG mode.

    6. Ensure that you have a minimum of six redo log files, each not less than 200 MB.

    7. If your tablespaces are not on raw devices, disable DISK_ASYNCH_IO and configure multiple DB_WRITER_PROCESSES (at least as many as the number of CPUs).

    8. Set LOG_BUFFER to 2 MB.

    9. Ensure a checkpoint occurs only when the redo log file switches (see the init.ora sketch after this list).

    10. Pre-create all extents for the tables that take heavy inserts.

    11. Drop unwanted indexes.

    12. Kill all inactive sessions before the batch job is started.
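
    Points 8 and 9 as an init.ora sketch for the BATCH parameter file (values illustrative; LOG_CHECKPOINT_INTERVAL is in OS blocks in this release, so any value larger than the redo log file size confines checkpoints to log switches):

        # BATCH init.ora fragment
        log_buffer = 2097152                  # point 8: 2 MB redo log buffer
        log_checkpoint_interval = 100000000   # point 9: bigger than any log file
        log_checkpoint_timeout = 0            # point 9: no time-based checkpoints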

    There are plenty of points I can write, but I have to go now.

    In fact, on Mar 22, I am going to watch and tune a 72-hour batch job.

    Good Luck.
    Tamilselvan
