We have a table which grows by about 100 million records per year. We have put some indexes on this table to improve the performance of the queries (SELECT), but the table is write-intensive (INSERT, UPDATE) as well. The two processes (reading and writing) run simultaneously and we cannot separate them (OLTP and data warehouse on the same database). Thousands of INSERT statements are executed against the database, and after each insert Oracle has to rebalance the B-Tree indexes, causing overhead.
Partitioning is not a good option for a highly OLTP database.
You need to identify the top resource-consuming SQL statements and see whether they can benefit from partitioning the corresponding tables. The types of queries that benefit from table partitioning include full table scans, joins between several large tables, and index range scans that return a relatively large percentage of rows. On the other hand, when an index scan returns only a handful of rows, table partitioning will not bring significant benefit, because those few rows are accessed by rowid. The latter situation can still be a candidate for index partitioning, even when the table itself is not partitioned. The main performance gain is I/O reduction while traversing the B*Tree indexes; only indexes on very large tables can produce sizable gains, because the I/O reduction occurs only when the height of the index is reduced.
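As a sketch of the kind of partitioning described above (the table, column, and partition names are hypothetical, not from the original post), a range-partitioned table with a local index might look like this:

```sql
-- Hypothetical example: range-partition a large, insert-heavy table by date,
-- so full scans and range scans touch only the relevant partitions.
CREATE TABLE orders_hist (
    order_id    NUMBER       NOT NULL,
    order_date  DATE         NOT NULL,
    customer_id NUMBER,
    amount      NUMBER(12,2)
)
PARTITION BY RANGE (order_date) (
    PARTITION p2009 VALUES LESS THAN (TO_DATE('2010-01-01','YYYY-MM-DD')),
    PARTITION p2010 VALUES LESS THAN (TO_DATE('2011-01-01','YYYY-MM-DD')),
    PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);

-- A LOCAL index is partitioned along with the table, so each index tree
-- stays smaller (lower height) than a single global index over all rows.
CREATE INDEX orders_hist_date_ix ON orders_hist (order_date) LOCAL;
```

A query with a predicate on order_date can then be pruned to one partition, and each local index tree is shorter to traverse, which is where the I/O reduction comes from.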
This is a perfect example of a problematic design, i.e. both OLTP and DW applications sitting on the same database. If you can't separate the applications, consider the following options:
1 Based on your input, the number of rows inserted into the table per second is about 3.17 (call it 4). If you know how many end users are inserting into this table, increase the INITRANS parameter accordingly.
2 I do not think partitioning will help in this case unless the application inserts different values into the partitioning column; I suspect the value is serial (continuous), for example a sequence used for the PK column. If that is the case, you can use a reverse key index, which spreads the inserts across leaf blocks instead of hammering the right-hand edge of the B-Tree.
3 Increase FREELISTS for the table.
4 If the block size is 8K, consider changing it to 32K.
5 Pre-allocate the extents for the data and index segments.
6 Ensure that the data tablespaces (data files) are under one disk controller (or several controllers) and the index tablespaces (data files) are under different disk controllers. This will eliminate I/O bottlenecks at the controller level.
7 Increase the redo log file size to 64MB.
8 Ensure that checkpoints occur only every 30 to 45 minutes.
9 Increase the data buffer cache, if possible.
10 Increase the sort area size.
11 Tune your SQL (DW). Add hints wherever possible; do not let the CBO choose the access paths on its own.
12 Rebuild the indexes every Sunday (or once every 15 days).
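Several of the points above can be sketched in DDL. All object names, file paths, and sizes below are illustrative assumptions, not taken from the original post, and FREELISTS only applies to manually managed segments (ASSM tablespaces handle it automatically):

```sql
-- Point 1: raise INITRANS so concurrent inserts get ITL slots without waiting.
ALTER TABLE orders_hist INITRANS 10;

-- Point 2: a reverse key index scatters sequence-generated keys across
-- leaf blocks, avoiding an insertion hot spot on the rightmost block.
CREATE INDEX orders_hist_pk_ix ON orders_hist (order_id) REVERSE;

-- Point 3: more freelists for concurrent inserts (manual segment space
-- management only).
ALTER TABLE orders_hist STORAGE (FREELISTS 8);

-- Point 6: data and index tablespaces on data files served by different
-- controllers (hypothetical mount points).
CREATE TABLESPACE data_ts  DATAFILE '/u01/oradata/data01.dbf'  SIZE 10G;
CREATE TABLESPACE index_ts DATAFILE '/u02/oradata/index01.dbf' SIZE 10G;

-- Point 7: add a 64 MB redo log group.
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u03/oradata/redo04.log') SIZE 64M;
```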
I have created functions that search for a new record among millions of existing records, but searching and displaying the result takes a long time, even though I have created indexes.
We also applied the concept of jobs, but jobs run sequentially on a single server, so the search time is almost the same.
What should I do to improve performance?