DBAsupport.com Forums

Thread: High table INSERT performance

  1. #1
    Join Date
    Apr 2001
    Location
    Brisbane, Queensland, Australia
    Posts
    1,203
    I have a serious performance issue and would like feedback from any DBAs who have had similar experiences with Oracle.

    We are running:

    Sun E450, 4 CPUs, 2 GB RAM, 250 GB of hard drive space.
    Oradata: 212 GB of disk, RAID 0 striped.

    The database has a 16 KB block size.

    I have 1 major table. It is 6 columns wide, and each column HAS to have an index for search capabilities. The table accepts 52 million records a day. Funnily enough, Oracle is having trouble accepting this volume of records. Commits happen every 100,000 rows. After 5 days (to keep it simple) and 260 million records, I then have to remove the OLDEST 52 million records. So I am manipulating 104 million records per day.
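
    For reference, here is a rough sketch of the kind of table I mean (column names and tablespace names are made up for illustration):

        CREATE TABLE big_table (
            col1  NUMBER,
            col2  NUMBER,
            col3  VARCHAR2(20),
            col4  VARCHAR2(20),
            col5  DATE,
            col6  NUMBER
        )
        TABLESPACE data_ts NOLOGGING;

        -- every column has to be individually indexed for searching
        CREATE INDEX big_table_i1 ON big_table (col1) TABLESPACE index_ts NOLOGGING;
        CREATE INDEX big_table_i2 ON big_table (col2) TABLESPACE index_ts NOLOGGING;
        -- ...and the same for col3 through col6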

    I have put the table and indexes in two different locally managed tablespaces, with NOLOGGING on both the tablespaces and the table (not that it makes any difference to normal DML... when will Oracle realise that NOLOGGING should really mean NOLOGGING?). My database is in NOARCHIVELOG mode as well.
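
    Just to illustrate that point: NOLOGGING only suppresses redo for direct-path operations, so a conventional insert is still fully logged. A minimal sketch (the staging table is hypothetical):

        -- conventional insert: redo is generated regardless of NOLOGGING
        INSERT INTO big_table VALUES (1, 2, 'A', 'B', SYSDATE, 3);

        -- direct-path insert: with the table NOLOGGING (and the DB in NOARCHIVELOG),
        -- redo for the table data is largely skipped, though index maintenance is still logged
        INSERT /*+ APPEND */ INTO big_table
        SELECT * FROM big_table_staging;
        COMMIT;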

    Now, both tablespaces are on the ONE mount point, but remember this single mount point is RAID 0 striped and spreads the I/O evenly.

    Can somebody offer any suggestions on how to increase insert performance on this ONE table?

    INIT parameters of interest:

    open_cursors = 300
    db_block_buffers = 35000

    shared_pool_size = 250000000

    large_pool_size = 15000000
    java_pool_size = 30000000

    log_buffer = 4194304

    db_block_size = 16384

    db_block_lru_latches = 4

    sort_area_size = 131072
    sort_area_retained_size = 65536

    Suggestions PLEASE anyone.

  2. #2
    Join Date
    May 2001
    Location
    Cape Town, South Africa
    Posts
    52
    Not really enough info.
    What are PCTFREE and PCTUSED set to on your table? Try increasing PCTFREE.
    What other contention are you experiencing, i.e. I/O or latches?
    How large are your log files?
    What version of database are you running?
    Have you tried partitioning?

  3. #3
    Join Date
    Apr 2001
    Location
    Brisbane, Queensland, Australia
    Posts
    1,203

    What are PCTFREE and PCTUSED set to on your table? Try increasing PCTFREE.

    PCTFREE is 5 and PCTUSED is 70.

    Increasing PCTFREE would only help on UPDATEs, and the data in this table isn't updated. FREELISTS is also set to 2, because there is only one process inserting the data.

    What other contention are you experiencing, i.e. I/O or latches?
    How large are your log files?

    Redo log files are 250 MB.

    What version of database are you running?

    8.1.7.1.B

    Have you tried partitioning?

    I've thought about it over and over, but I believe that once a partition is dropped, the indexes on the table need to be rebuilt.

  4. #4
    Join Date
    Jun 2000
    Location
    Madrid, Spain
    Posts
    7,447
    The indexes don't necessarily need to be rebuilt if they are also partitioned; it also depends on whether they are global or local indexes.
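
    For example, a rough sketch of a date-partitioned table with a LOCAL index (names and dates are hypothetical). Dropping the oldest partition leaves the remaining local index partitions usable, whereas a GLOBAL index would be marked unusable and need rebuilding:

        -- hypothetical range-partitioned table, one partition per day
        CREATE TABLE big_table_part (
            col1  NUMBER,
            col5  DATE,
            col6  NUMBER
        )
        PARTITION BY RANGE (col5) (
            PARTITION p20011001 VALUES LESS THAN (TO_DATE('2001-10-02','YYYY-MM-DD')),
            PARTITION p20011002 VALUES LESS THAN (TO_DATE('2001-10-03','YYYY-MM-DD'))
        );

        -- LOCAL index: one index segment per table partition
        CREATE INDEX big_table_part_i1 ON big_table_part (col5) LOCAL;

        -- rolling window: add the next day's partition, drop the oldest one
        ALTER TABLE big_table_part ADD PARTITION p20011003
            VALUES LESS THAN (TO_DATE('2001-10-04','YYYY-MM-DD'));
        ALTER TABLE big_table_part DROP PARTITION p20011001;
        -- the dropped partition's local index segment goes with it;
        -- the other partitions' index segments stay valid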

  5. #5
    Join Date
    Nov 2000
    Location
    greenwich.ct.us
    Posts
    9,092
    Where's the bottleneck? CPU/IO/Memory/Swap?
    Jeff Hunter

  6. #6
    Join Date
    May 2001
    Location
    Cape Town, South Africa
    Posts
    52
    Have you investigated using a raw file system as opposed to a cooked one? This would speed up access.

  7. #7
    Join Date
    Apr 2001
    Location
    Brisbane, Queensland, Australia
    Posts
    1,203
    Hey guys.. thanks for your help.


    pando - yeah, I've been noticing that about global/local. Obviously, I'd be looking to go LOCAL, but determining whether this is feasible is something else.

    truma1: Have you investigated using a raw file system as opposed to a cooked one? This would speed up access.

    I haven't had much experience with the old raw file system, and I've done a bit of reading that suggests the performance difference between raw and cooked is decreasing all the time; plus, raw files can't be manipulated via normal UNIX commands.

    marist89 Where's the bottleneck? CPU/IO/Memory/Swap?

    Jeff, I'm still trying to determine this issue. And will run some more stats tomorrow at work. At present I'm concentrating on the application side, but was looking for any other suggestions to handle this volume.

  8. #8
    Join Date
    Apr 2001
    Posts
    219
    I have a DB similar to yours. My DB has 10-15 million inserts a day, and I did the following to get performance:

    1. Parallel inserts (increases the use of multiple CPUs) - see the sketch below
    2. Set LOG_CHECKPOINT_INTERVAL to a huge value, effectively turning it off
    3. Set LOG_CHECKPOINT_TIMEOUT to 0, effectively turning it off
    4. PCTUSED set to 0 (so full blocks never go back on the freelist)
    5. Redo logs on RAID 0 mount points

    I would greatly increase your DB_Block_Buffers and Log_buffers. Watch out for redo latch contention and redo log I/O contention.
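
    A rough sketch of what points 1-3 could look like (values and table names are placeholders, not tested recommendations):

        # init.ora: effectively disable interval- and time-based checkpoints,
        # so checkpoints happen only at log switches
        log_checkpoint_interval = 1000000000
        log_checkpoint_timeout  = 0

        -- parallel direct-path insert from a hypothetical staging table
        ALTER SESSION ENABLE PARALLEL DML;
        INSERT /*+ APPEND PARALLEL(t, 4) */ INTO big_table t
        SELECT * FROM big_table_staging;
        COMMIT;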

    I like to use OEM, "iostat -xnc 3", "vmstat 3" and top to diagnose my problems.
    ______________________
    Applications come and go,
    but the data remains!

  9. #9
    Join Date
    May 2000
    Location
    ATLANTA, GA, USA
    Posts
    3,135
    How are the rows inserted into table?
    Is it through import utility or directly from the application?

  10. #10
    Join Date
    May 2000
    Location
    ATLANTA, GA, USA
    Posts
    3,135
    Heavy Inserts in to a table:

    Following are my suggestions; you may not need to follow all of them. Study them and implement them carefully with your system admin.

    1. Use parallel query wherever possible. Remember that parallel query is effective only when the box has more than one CPU and the table's data files are spread over more than one disk.
    2. If there is more than one CPU in the box and the inserts come from multiple sources, set FREELISTS for the table equal to the number of CPUs (see the sketch after this list).
    3. Use RAID 0 (stripe the data files).
    4. Pre-allocate extents for the table, so Oracle does not have to work out space as and when it is needed (see the sketch after this list).
    5. Never put the redo log files on a RAID disk array. This slows things down a lot, because Oracle writes the redo log serially, not randomly. Put these files onto non-striped, non-mirrored disks.
    6. Do not multiplex redo log files at both the HW and SW levels.
    7. Give as much memory as possible to the data buffer cache.
    8. Keep the TEMP tablespace on a separate disk. It is advisable to keep its data file on non-striped, non-mirrored disks.
    9. Separate the INDEX and DATA tablespaces' data files under different controllers. Even if you use RAID 0, identify the controllers and place these data files carefully.
    10. Identify each disk's throughput. The sum of the disks' throughput under one controller should not exceed the controller's throughput. If too many disks are configured under one controller, the system will experience an I/O bottleneck.
    11. If you are using import to load data into the big table, use the DIRECT=TRUE and PARALLEL options, and also set the table to NOLOGGING mode.
    12. Partition the table and index. Of course, in your case the partition key is the date. In 8i you can add a new partition and drop an existing partition very easily.
    13. Set LOG_CHECKPOINT_INTERVAL = 1000000000 and LOG_CHECKPOINT_TIMEOUT = 0. Remember the unit of LOG_CHECKPOINT_INTERVAL is OS blocks, not Oracle blocks; in Unix an OS block is normally 512 bytes.
    14. Do not delete the old rows while other sessions are inserting data into the table; do it at night.
    15. Check with your system admin that Unix is configured for async I/O.
    16. Always maintain 4 or 5 disks (non-striped and non-mirrored) for storing the export files / flat files, and load the data into the table from there.
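
    As a rough illustration of points 2 and 4 (segment names and sizes are hypothetical):

        -- point 4: pre-allocate space so the segments don't have to extend mid-load
        ALTER TABLE big_table ALLOCATE EXTENT (SIZE 500M);
        ALTER INDEX big_table_i1 ALLOCATE EXTENT (SIZE 100M);

        -- point 2: FREELISTS is normally set in the table's STORAGE clause when it is
        -- created, e.g. CREATE TABLE ... STORAGE (FREELISTS 4), roughly one per
        -- concurrent inserting process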

    Closed mouths gather no feet.

    Tamilselvan


