I have a couple of queries and would be grateful for all your inputs, as this is a bit new to me and I only joined a week ago.
We have a data warehousing database and use a third-party tool for ETL; our web server logs are imported into the database on a daily basis, I believe using SQL*Loader.
But these imports seem to take a lot of time, up to 5-7 hours. We need to reduce this time. The database size right now is about 28 GB.
Then, a couple of tables are really large, about 60 million rows, so I'm thinking about enabling partitioning. But does partitioning play any role during loading?
And if I try to analyze one of these tables, after a very long time it fails with an "unable to extend temp tablespace ....." error, even though there should really be enough space. SREDDY did mention defragmenting the tablespace in his mail, but this is a temp tablespace, and even if I run
ALTER TABLESPACE temp COALESCE;
it doesn't improve the situation.
I saw that sort_area_size is set to 2087152. Do you think this is enough, or should I increase this value?
Could all of you out there help me find a solution to this ?
Thanks a lot
Run a utlbstat/utlestat report to find out the top wait events. Based on the output, make modifications, which may include:
1. increasing db buffers
2. increasing redo buffer size
3. increasing redo log file size
4. increasing sort_area_size
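In case it helps, this is roughly how the report is run from SQL*Plus (the paths are the standard ones under ORACLE_HOME; run it while a typical load is going on):

```sql
-- Connect as a DBA user in SQL*Plus.
@?/rdbms/admin/utlbstat.sql   -- take the starting statistics snapshot
-- ... let a representative load run for a while ...
@?/rdbms/admin/utlestat.sql   -- take the ending snapshot; writes report.txt
```

report.txt then lists the wait events and statistics you'd use to decide which of the parameters above to change.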
1. I think your sort_area_size is too small. The objective is to perform as much sorting in memory as possible; otherwise sorting will spill to the temp tablespace. Remember that allocating extents is a CPU-intensive job, so increase the extent sizes of the temp tablespace, and set AUTOEXTEND ON for its datafiles OR set a MAXSIZE limitation.
2. Make sure the rollback segments are large enough, with large extent sizes, and use AUTOEXTEND or a MAXSIZE limitation.
3. If the server has multiple CPUs, make use of parallel DML/query.
4. Enable multiple database writer processes or I/O slaves.
5. You can set the table to NOLOGGING before loading the data. This will not generate redo and will help speed up the import process. Once the data has been loaded, change the table back to LOGGING. Keep in mind that if you do this, since no redo was generated, the loaded data cannot be recovered in the case of recovery.
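A minimal sketch of the NOLOGGING suggestion (the table name is made up, and note that only direct-path operations skip redo; conventional inserts still log):

```sql
ALTER TABLE web_logs NOLOGGING;   -- direct-path loads now generate minimal redo

-- ... run the direct-path load here ...

ALTER TABLE web_logs LOGGING;     -- restore normal redo generation
-- Back up the affected tablespace now: the loaded rows were never
-- written to redo, so media recovery cannot recreate them.
```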
Hope this helps
I haven't run the utlbstat/utlestat report yet. But the other parameters set for the database are:
db_block_size = 8192
db_writer_processes = 1
dbwr_io_slaves = 10
shared_pool_size = 9000000
log_buffer = 163840
log_files = 255
Thanks a lot for your input, Halo.
Oh, you can only use either multiple db_writer_processes or dbwr_io_slaves, not both. If you specify both, it'll only use the parameter db_writer_processes.
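So in init.ora you'd pick one or the other, something like this (values are just illustrative):

```
# choose ONE of these mechanisms, not both
db_writer_processes = 4    # multiple real DBWR processes (multi-CPU box)
# dbwr_io_slaves = 10      # or I/O slaves for a single DBWR
```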
If you are working with 8i, you can reduce the import time by at least 4 hours.
Please explain how?
Halo, both parameters are set. Is it that Oracle will read only db_writer_processes? I'm not very sure how this works. What would you recommend?
And omegamark, we're using 8.0.6. I used to use 8i at my previous company, so right now I'm going backward. But I'd really like to know how that's done in 8i; maybe I can use it in future projects.
For the big and very big tables, create a new tablespace and move them into it.
Make that tablespace read only.
Make it a transportable tablespace.
Take a copy of its datafiles as a backup.
After that, drop the tablespace you transported.
Take a user-level export.
Plug the dropped tablespace back in using the transportable option in both databases.
Import the other small tables into the other database.
This will do your job in no time, saving at least 4 hours.
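For anyone trying this, the round trip looks roughly like the following on 8i (tablespace name and file paths are made up; exp/imp run from the OS prompt, shown here as comments):

```sql
-- 1. Make the tablespace read only
ALTER TABLESPACE big_logs READ ONLY;

-- 2. Export just its metadata from the OS prompt:
--    exp userid=system/manager transport_tablespace=y tablespaces=big_logs file=big_logs.dmp

-- 3. OS-copy the tablespace's datafiles to the target host.

-- 4. Plug it into the target database:
--    imp userid=system/manager transport_tablespace=y file=big_logs.dmp
--        datafiles='/u02/oradata/big_logs01.dbf'

-- 5. If you need to write to it again:
ALTER TABLESPACE big_logs READ WRITE;
```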
Thanks, omegamark. We are using a third-party tool to do the import (SQL*Loader) into the database, and we have no control over the import process since it runs as an executable. Maybe once I get the hang of all this and understand exactly what it's doing, I could write my own script.
Thanks a lot anyway.
It's important to know exactly what method is used for the import (you mentioned SQL*Loader, but you are probably not sure).
If it's some application (or SQL*Plus code), then the program logic may determine the speed of the import.
The import will also depend on a couple more issues, like:
-- Network bottleneck (is the load file on the same server as the database?)
-- Are there any active indexes on the tables being loaded by the import?
SQL*Loader is not a third-party tool, by the way.
It seems to me that your shared pool is extremely small, only 9 MB?
If you use SQL*Loader, consider using direct path loads.
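With SQL*Loader that's just a flag on the command line (the user and control file names here are made up):

```
sqlldr userid=loader/pw control=weblogs.ctl direct=true
```

Direct path writes formatted blocks above the table's high-water mark instead of going through normal SQL inserts, which is usually much faster for bulk loads.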
Check v$system_event for wait events
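For example (the NOT IN list just filters out some common idle events):

```sql
SELECT event, total_waits, time_waited
  FROM v$system_event
 WHERE event NOT IN ('SQL*Net message from client',
                     'rdbms ipc message',
                     'pmon timer', 'smon timer')
 ORDER BY time_waited DESC;
```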