DBAsupport.com Forums

Thread: Help/suggestions wanted from all of you

  1. #1
    Join Date
    Sep 2000
    Posts
    103
    Hi guys,

    I have a couple of queries and would be grateful for all your input, as this is a bit new to me and I only joined a week ago.
    We have a data warehousing database and use a third-party tool for ETL, where our web server logs are imported into the database on a daily basis, I believe using SQL*Loader. But these imports seem to take a lot of time, up to 5-7 hours, and we need to reduce this. The database size right now is about 28GB.
    Also, a couple of tables are really large, about 60 million rows, so I'm thinking about enabling partitioning. But does this play any role when the loading is done?
    And when I try to analyze one of these tables, it runs for a very long time and then fails with an "unable to extend temp tablespace ....." error, when really there should be enough space. SREDDY, in his mail, did mention defragmenting the tablespace, but this is a temp tablespace, and even if I run
    ALTER TABLESPACE temp COALESCE;
    it doesn't improve the situation.
    I saw that sort_area_size is set to 2087152. Do you think this is enough, or should I increase this value?

    Could all of you out there help me find a solution to this ?

    Thanks a lot

    pst

  2. #2
    Join Date
    Apr 2000
    Location
    Baltimore, MD
    Posts
    759
    Run a utlbstat/utlestat report to find out the top wait events (see the sketch after this list), then make modifications accordingly, which may include:
    1. increasing db buffers
    2. increasing redo buffer size
    3. increasing redo log file size
    4. increasing sort_area_size
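    For reference, a bstat/estat run looks roughly like this (a sketch; the script paths assume a standard $ORACLE_HOME layout, and the two snapshots should bracket a representative load window):

        SVRMGR> connect internal
        SVRMGR> @?/rdbms/admin/utlbstat.sql
        -- ... let the nightly load run ...
        SVRMGR> @?/rdbms/admin/utlestat.sql

    The statistics are written to report.txt in the current directory; the wait event section is the place to start.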

    Other tips:
    1. I think your sort_area_size is too small. The objective is to perform as much sorting in memory as possible; otherwise the sorting will take place in the temp tablespace. Remember that allocating extents is a CPU-intensive job, so increase the extent sizes of the temp tablespace. Set autoextend ON for the temp tablespace, OR set a maxsize limitation.
    2. Make sure the rollback segments are large enough, with large extent sizes, and use autoextend or a maxsize limitation.
    3. If the server has multiple CPUs, make use of Parallel DML/Query.
    4. Enable multiple DB writer processes/IO slaves.
    5. You can set the table to NOLOGGING before loading the data (see the sketch below). This will not generate any redo and will help speed up the import process. Once the data has been loaded, change the table back to LOGGING. Keep in mind that if you do this, since no redo was generated, the loaded data cannot be recovered in the event of a recovery.
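    A minimal sketch of point 5 (web_logs is just a placeholder table name):

        -- hypothetical table name: suppress redo generation for the load
        ALTER TABLE web_logs NOLOGGING;
        -- ... run the load ...
        ALTER TABLE web_logs LOGGING;

    Note that NOLOGGING only skips redo for direct-path operations (such as a SQL*Loader direct load); conventional-path inserts still generate redo. Take a fresh backup after the load, since the loaded rows cannot be recovered from the redo stream.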
    -
    Hope this helps

  3. #3
    Join Date
    Sep 2000
    Posts
    103
    Hi,

    I haven't yet run the utlbstat/utlestat report, but the other parameters set for the database are:

    db_block_buffers= 12800
    db_block_size = 8192
    db_writer_processes = 1
    dbwr_io_slaves = 10

    shared_pool_size = 9000000

    log_buffer = 163840
    log_files = 255

    Thanks a lot for your input, Halo.

  4. #4
    Join Date
    Apr 2000
    Location
    Baltimore, MD
    Posts
    759
    Oh, you can only use either multiple db_writer_processes or dbwr_io_slaves, not both. If you specify both, it'll only use the db_writer_processes parameter.
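    In init.ora terms, pick one mechanism or the other (the values here are purely illustrative):

        # either multiple real DB writer processes ...
        db_writer_processes = 4
        # ... or I/O slaves under a single DB writer, not both
        # dbwr_io_slaves = 10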

  5. #5
    Join Date
    Sep 2000
    Posts
    384
    If you are working with 8i, you can reduce the import time by at least 4 hours.

    Please let me know which version you are on?
    Radhakrishnan.M

  6. #6
    Join Date
    Sep 2000
    Posts
    103
    Halo, both of the parameters are set. Is it that only db_writer_processes will be read by Oracle? I'm not very sure how this works. What would you recommend?
    Thanks, Halo

    And omegamark, we're using 8.0.6. I used to use 8i at my previous company, but right now I'm going backward. I really would like to know how that's done in 8i, though; maybe I can use it in future projects.

    Thanks again.

  7. #7
    Join Date
    Sep 2000
    Posts
    384
    For the big and very big tables, create a new tablespace and move them all into it.

    Make the tablespace read only.

    Make it a transportable tablespace.

    Make a copy of its datafiles, i.e. a backup.

    After doing that, drop the tablespace you have transported.

    Take the user export.

    Import the dropped tablespace using the transportable option in both databases (see the sketch below).

    Import the other small tables into the other database.

    This will do your job in no time, so it will save 4 hours minimum.
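    A rough sketch of the 8i transportable tablespace mechanics (the tablespace name, password and file paths are placeholders, and both databases must be 8i on the same platform with the same block size):

        -- on the source database: make the tablespace read only
        ALTER TABLESPACE big_ts READ ONLY;

        -- export just the tablespace metadata (run as SYS from the OS prompt)
        exp sys/password transport_tablespace=y tablespaces=big_ts file=big_ts.dmp

        -- copy the datafiles and big_ts.dmp to the target, then plug the tablespace in:
        imp sys/password transport_tablespace=y file=big_ts.dmp datafiles='/u01/oradata/big_ts01.dbf'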


    Radhakrishnan.M

  8. #8
    Join Date
    Sep 2000
    Posts
    103
    Thanks omegamark. We are using a third-party tool to do the importing (SQL*Loader) into the database, and we have no control over the import process, as it runs as an executable. Maybe once I get the hang of all this and of what it is exactly doing, I could write my own script.
    Thanks a lot anyway.

  9. #9
    Join Date
    Oct 2000
    Location
    Dallas:TX:USA
    Posts
    407
    It's important to know exactly what method is used for the import (you mentioned SQL*Loader, but you are probably not sure).
    If it's some application (or SQL*Plus code), then the program logic may determine the speed of the import.

    The import will also depend on a couple of other issues, such as:

    -- Network bottleneck (is the load file on the same server as the database?)
    -- Are there any active indexes on the tables being loaded by the import? (see the query below)
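    To check the second point, something along these lines (WEB_LOGS is a placeholder table name):

        -- list the indexes that will be maintained during the load
        SELECT index_name, status
          FROM user_indexes
         WHERE table_name = 'WEB_LOGS';

    Dropping or disabling the indexes before the load and rebuilding them afterwards is often much faster than maintaining them row by row.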

    - Rajeev


  10. #10
    Join Date
    Jun 2000
    Location
    Madrid, Spain
    Posts
    7,447
    SQL*Loader is not a third-party tool, btw.
    It seems to me that your shared pool is extremely small; only 9MB?

    If you use SQL*Loader, consider using direct loads.
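    For example (a sketch; the credentials and file names are placeholders):

        sqlldr userid=scott/tiger control=weblogs.ctl log=weblogs.log direct=true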

    Check v$system_event for wait events
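    For instance, to see the top waits since instance startup:

        -- events with the most accumulated wait time come first
        SELECT event, total_waits, time_waited
          FROM v$system_event
         ORDER BY time_waited DESC;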
