I have a serious performance issue and would like some feedback from any DBAs who have had similar experiences with Oracle.
We are running:
Sun E450, 4 CPUs, 2 GB RAM, 250 GB of disk space.
Oradata: 212 GB, RAID 0 (striped).
The database has a 16 KB block size.
I have one major table, six columns wide, and each column HAS to have an index for search capability. The table accepts 52 million records a day, and funny enough, Oracle is having trouble keeping up with that volume. Commits happen every 100,000 rows. After five days (to keep it simple) and 260 million records, I then have to remove the OLDEST 52 million records. So I'm manipulating 104 million records per day.
I have set the tables and indexes on two different locally managed tablespaces, with NOLOGGING on both the tablespaces and the tables (not that it makes any difference for normal DML; when will Oracle realise that NOLOGGING should really mean NOLOGGING?). My database is in NOARCHIVELOG mode as well.
Now both tablespaces are on the ONE mount point, but remember that this single mount point is RAID 0 striped, so the I/O is spread evenly.
Can somebody offer any suggestions on how to increase insert performance on this ONE table?
Not really enough info.
What are PCTFREE and PCTUSED set to on your table? Try increasing PCTFREE.
What other contention are you experiencing, e.g. I/O or latches?
How large are your redo log files?
What version of database are you running?
Have you tried partitioning?
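If partitioning is an option (8i Enterprise Edition), the rolling five-day window maps naturally onto one range partition per day. A rough sketch, with made-up table and column names:

```sql
-- One range partition per day; LOCAL indexes so each day's index
-- entries live in their own segments as well.
CREATE TABLE call_detail (
    call_time  DATE NOT NULL,
    col2       NUMBER,
    col3       VARCHAR2(30)
    -- remaining searchable columns here
)
NOLOGGING
PARTITION BY RANGE (call_time) (
    PARTITION p_day1 VALUES LESS THAN (TO_DATE('2002-01-02','YYYY-MM-DD')),
    PARTITION p_day2 VALUES LESS THAN (TO_DATE('2002-01-03','YYYY-MM-DD'))
);

CREATE INDEX call_detail_i1 ON call_detail (col2) LOCAL NOLOGGING;

-- Purging the oldest day then becomes a data dictionary operation
-- instead of a 52-million-row DELETE:
ALTER TABLE call_detail DROP PARTITION p_day1;
```

With LOCAL indexes the DROP PARTITION does not leave any index needing a rebuild, which is the main attraction for a rolling window like this.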
pando - yeah, I've been noticing that about the global/local. Obviously I'd be looking to go LOCAL, but determining whether that is feasible is something else.
truma1 Have you investigated using a RAW file system as opposed to a cooked one? That would speed up access.
I haven't had much experience with the old RAW file system, and I've done a bit of reading that suggests the performance difference between RAW and COOKED is shrinking all the time; plus, raw files can't be manipulated via normal UNIX commands.
marist89 Where's the bottleneck? CPU/IO/Memory/Swap?
Jeff, I'm still trying to determine this issue. And will run some more stats tomorrow at work. At present I'm concentrating on the application side, but was looking for any other suggestions to handle this volume.
I have a DB similar to what you have. My DB has 10-15 million inserts a day and I did the following to get performance.
1. Parallel inserts (Increases the use of multiple CPUs)
2. Set LOG_CHECKPOINT_INTERVAL to a huge value, effectively turning it off
3. Set LOG_CHECKPOINT_TIMEOUT to 0, thus turning it off
4. PCTUSED set to 0 (so filled blocks never return to the freelist)
5. Redo Logs on RAID 0 mount points
I would greatly increase your DB_BLOCK_BUFFERS and LOG_BUFFER. Watch out for redo latch contention and redo log I/O contention.
I like to use OEM, "iostat -xnc 3", "vmstat 3" and top to diagnose my problems.
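For reference, the checkpoint and buffer settings above might look like this in the init.ora (the values are illustrative only, not recommendations; size them to your own box):

```
# init.ora fragment (Oracle 8i parameter names)
log_checkpoint_interval = 1000000000   # interval so large it never fires
log_checkpoint_timeout  = 0            # 0 disables the timeout checkpoint
db_block_buffers        = 60000        # ~1 GB of cache at a 16 KB block size
log_buffer              = 1048576      # 1 MB redo log buffer
```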
______________________ Applications come and go,
but the data remains!
Following are my suggestions; you may not need to follow all of them. Study them and implement them carefully with your system admin.
1 Use parallel query wherever possible. Remember that parallel query is effective only when the box has more than one CPU and the data files on which the table resides span more than one disk.
2 If the box has more than one CPU and the inserts are coming from multiple sources, set FREELISTS on the table to the number of CPUs.
3 Use RAID 0 (striping) for the data files.
4 Pre-allocate extents for the table, so that Oracle does not have to work out space allocation on the fly.
5 Never put the redo log files on a RAID disk array. This slows things down considerably, because Oracle writes redo serially, not randomly. Put these files on non-striped, non-mirrored disks.
6 Do not multiplex the redo log files, at either the hardware or the software level.
7 Give as much memory as possible to the data buffer cache.
8 Keep the TEMP tablespace on a separate disk. It is advisable to keep its data files on non-striped, non-mirrored disks.
9 Separate the INDEX and DATA tablespaces' data files under different controllers. Even if you use RAID 0, identify the controllers and place these data files carefully.
10 Identify each disk's throughput. The sum of the throughputs of the disks under one controller should not exceed the controller's throughput; if too many disks are configured under one controller, the system will hit an I/O bottleneck.
11 If you are using a load utility to get data into the big table, use DIRECT=TRUE and the PARALLEL option, and set the table to NOLOGGING mode.
12 Partition the table and index. In your case the partition key is obviously the date. In 8i you can add a new partition and drop an existing partition very easily.
13 Set LOG_CHECKPOINT_INTERVAL = 1000000000 and LOG_CHECKPOINT_TIMEOUT = 0. Remember that the unit of LOG_CHECKPOINT_INTERVAL is OS blocks, not Oracle blocks; on Unix that is normally 512 bytes.
14 Do not delete the old rows while other sessions are inserting data into the table. Do it at night.
15 Check with your system admin that Unix is configured for async I/O.
16 Always keep 4 or 5 disks (non-striped, non-mirrored) for storing the export files / flat files, and load the data into the table from there.
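To tie items 1 and 11 together, a direct-path parallel insert from a staging table might look like the following (the table names are invented for illustration; APPEND bypasses the buffer cache and, on a NOLOGGING table in a NOARCHIVELOG database, generates minimal redo):

```sql
ALTER SESSION ENABLE PARALLEL DML;

-- Direct-path (APPEND) parallel insert from a hypothetical staging table.
INSERT /*+ APPEND PARALLEL(call_detail, 4) */ INTO call_detail
SELECT /*+ PARALLEL(s, 4) */ *
FROM   stage_call_detail s;

-- A direct-path insert must be committed before the session
-- can read the table again.
COMMIT;
```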