
Thread: Performance degrades with big blocksize tablespace

  1. #1
    Join Date: May 2009
    Posts: 1

    Performance degrades with big blocksize tablespace

    Hi, bangalibor here.

    I have a huge table, roughly 50GB in size, with five columns and no primary key. Because performance is going down (as expected, since most of the queries end up doing a full table scan), we need to tune it.

    The considerations are:
    a. We will not go for a partitioned table. (Why? Please don't ask.)
    b. There are no joins with other tables in the queries that I could tune. Most of the queries are simple, like:
    select * from ... where a1=...;

    I have tried an index, but the optimizer considers the cost of using the index too high, so Oracle suppresses it (see the sketch just after this list).

    c. So the only way out for me was to make the table scan faster.
    That was my starting point.
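
    (For reference, whether the optimizer picks the index can be checked with something like this; the table and column names are placeholders, not the real ones:)

        -- BIG_TAB and A1 are assumed placeholder names for the 50GB table and its filter column
        EXPLAIN PLAN FOR
          SELECT * FROM big_tab WHERE a1 = :val;

        -- Show the plan chosen (TABLE ACCESS FULL vs. INDEX RANGE SCAN) and its cost
        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);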

    As for the test machine: I took half of the data to a test machine, where I had two copies of the table in tablespaces with different block sizes to compare.

    Table 1:

    8K blocksize, db_cache_size=150M, db_file_multiblock_read_count=16

    Table 2:
    32K blocksize, db_32k_cache_size=500M, db_file_multiblock_read_count=32
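
    (Roughly how the 32K side of the test was set up; the tablespace, datafile, and table names below are placeholders, not the exact commands:)

        -- A non-default block size needs its own buffer cache before the tablespace can be created
        ALTER SYSTEM SET db_32k_cache_size = 500M;

        -- 32K-block tablespace holding the second copy of the table
        CREATE TABLESPACE ts_32k
          DATAFILE '/u01/oradata/TEST/ts_32k_01.dbf' SIZE 30G
          BLOCKSIZE 32K;

        ALTER TABLE big_tab_32k MOVE TABLESPACE ts_32k;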

    I achieved about 40% lower CPU cost with 32K compared to 8K, but the elapsed time was only about 5% better with 32K.

    (My first question to you is: why does this happen?)

    I figured this 5% was the best I could get and planned to go ahead with the implementation, and then everything fell apart:

    on production, the query in the 32K tablespace was taking three times longer than on the test machine...

    so, please help me...

    Production has:

    db_32k_cache_size=3G, compared to a db_cache_size of 1.8G


    There are some more differences between the test and production machines:

    In test: pga_aggregate_target=8G, shared_pool_size=1.5G

    In production: pga_aggregate_target=2G, shared_pool_size=1G

    One more thing: the CPU cost with 32K is still 40% less.
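
    (For reference, the relevant settings on both sides can be compared with a query along these lines; the parameter list is just a guess at what matters here:)

        -- Run on both test and production and compare the output
        SELECT name, value
          FROM v$parameter
         WHERE name IN ('db_cache_size',
                        'db_32k_cache_size',
                        'db_file_multiblock_read_count',
                        'pga_aggregate_target',
                        'shared_pool_size');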

    So, I think I have yorked myself here... please help.

  2. #2
    Join Date: Mar 2007
    Location: Ft. Lauderdale, FL
    Posts: 3,555
    I agree that a 50 GB table is a large one, but I wouldn't call it "huge".

    In short, I think the poster is spending tuning effort in the wrong direction.

    The solution is pretty simple: fix the indexing strategy. There are just two reasons why Oracle wouldn't like the indexes: a) the indexes have no or wrong statistics, or b) the indexes are not in line with the query predicates, rendering the queries non-sargable.
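
    For example, something along these lines is usually the first step; the schema, table, index, and column names are placeholders for the real ones:

        -- Refresh optimizer statistics on the table and its indexes
        BEGIN
          DBMS_STATS.GATHER_TABLE_STATS(
            ownname => 'APP_OWNER',
            tabname => 'BIG_TAB',
            cascade => TRUE);
        END;
        /

        -- Index the column actually used in the query predicate
        CREATE INDEX big_tab_a1_ix ON big_tab (a1);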

    I also wouldn't take partitioning off the table.

    Last but not least, I hope the query example "select * from ... where a1=...;" is not a real query; no "select *" query should be run against a production environment.
    Pablo (Paul) Berzukov

    Author of Understanding Database Administration available at amazon and other bookstores.

    Disclaimer: Advice is provided to the best of my knowledge but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions on a non-production test environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.
