Hash Partitioning
DBAsupport.com Forums - Powered by vBulletin

Thread: Hash Partitioning

  1. #1
    Join Date
    Mar 2004
    Location
    India
    Posts
    72

Hash Partitioning

    Hi,

We are planning to use HASH partitioning for one of our large-volume tables. (There is no date field, so we plan to hash on the ID column.)

The expected inflow for that table is 16,000,000 records per year, and the data retention is 7 years. So by the end of the 7th year the record count would be 112,000,000.

    Questions:
    1. We are planning to have 256 partitions, but that looks too high. Any suggestions?
    2. If I create 128 for now and then add partitions later, will Oracle distribute the records properly, or will the initial partitions be overloaded?

    Thanks in advance for your help

  2. #2
    Join Date
    Mar 2007
    Location
    Ft. Lauderdale, FL
    Posts
    3,554
    Any particular reason to do hash partitioning?

    What's the specific problem expected to be solved by this partitioning strategy?

    - Hash partitioning doesn't help during archiving/purging.
    - Hash partitioning doesn't help queries doing scattered read.

    The suggested number of partitions will create partitions of around 450K rows each, a little too small in my personal view. Depending on application needs and partitioning strategy, we can take advantage of much larger partitions, from 5M rows up.

    In general I favor range-partitioning for large tables in reporting/dwh environments.

    Having said that, I have implemented hash partitioning with great success on high-volatility tables that are part of OLTP systems - tables hit by a very high rate of inserts/deletes, to the point of virtually replacing the content of the whole table every few hours.
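    For illustration, a hash-partitioned table keyed on a scattered ID might be declared like this. This is just a sketch - the table, columns, and partition count are made-up examples, and note that Oracle recommends a power-of-two number of hash partitions so rows spread evenly:

    ```sql
    -- Sketch only: names and partition count are placeholders.
    -- A power-of-two partition count lets the hash function
    -- distribute rows evenly across partitions.
    CREATE TABLE orders
    ( order_id    NUMBER        NOT NULL
    , customer_id NUMBER
    , amount      NUMBER(12,2)
    , CONSTRAINT orders_pk PRIMARY KEY (order_id)
    )
    PARTITION BY HASH (order_id)
    PARTITIONS 128;
    ```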
    Last edited by PAVB; 07-02-2010 at 08:57 AM. Reason: typo
    Pablo (Paul) Berzukov

    Author of Understanding Database Administration available at amazon and other bookstores.

    Disclaimer: Advice is provided to the best of my knowledge but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions on a test non-production environment advisor should not held liable or responsible for any actions taken based on the given advice.

  3. #3
    Join Date
    Mar 2004
    Location
    India
    Posts
    72
    Thanks PAVB.

    The reason to choose HASH is that the ID (primary key) values are scattered, not sequential. So, to get an even distribution, we have decided to use HASH partitioning.

    We are also planning to use the data compression option. We hope that will reduce I/O and improve performance. Updates to this particular table will be very minimal.

  4. #4
    Join Date
    Mar 2004
    Location
    India
    Posts
    72
    What is the best approach to convert a non-partitioned table to a hash-partitioned one?

    So far I can see:
    1. Create a new table and do export/import, drop the original, rename the new table, and recreate any constraints.
    2. Create a new table and do an insert, drop the original, rename the new table, and recreate any constraints.

    The current data volume in production will be big, I believe. Thanks in advance.

  5. #5
    Join Date
    Mar 2007
    Location
    Ft. Lauderdale, FL
    Posts
    3,554
    How much data is there?

    A straightforward solution would be...
    1- Rename the current table as table_OLD
    2- CTAS the new partitioned table as SELECT * FROM table_OLD with a handsome degree of parallelism.
    3- Rebuild indexes
    4- Be sure RI (referential integrity) is as expected
    5- Gather fresh stats
    Once you are confident everything went as planned, you can drop table_OLD.
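    The steps above can be sketched as follows - table, column, and index names are placeholders, and as always, try it on a test environment first:

    ```sql
    -- 1- Park the existing table under a new name
    ALTER TABLE mytable RENAME TO mytable_old;

    -- 2- CTAS the new hash-partitioned table in parallel
    CREATE TABLE mytable
    PARTITION BY HASH (id) PARTITIONS 128
    PARALLEL 8
    AS SELECT * FROM mytable_old;

    -- 3- Rebuild indexes and re-create the primary key
    CREATE UNIQUE INDEX mytable_pk ON mytable (id) PARALLEL 8;
    ALTER TABLE mytable ADD CONSTRAINT mytable_pk
      PRIMARY KEY (id) USING INDEX mytable_pk;

    -- 4- Re-point any foreign keys (RI) at the new table as needed

    -- 5- Gather fresh stats
    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'MYTABLE', cascade => TRUE);

    -- Once everything checks out:
    -- DROP TABLE mytable_old PURGE;
    ```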
    Pablo (Paul) Berzukov

  6. #6
    Join Date
    Mar 2004
    Location
    India
    Posts
    72
    Thanks a lot PAVB.

    Since it is the initial phase, the data volume is small, and I followed the solution you provided. Thanks.

    Do you think data compression will further improve performance in a 10gR2 database with hash partitioning? Data will be loaded into these tables by single-row inserts, not in bulk. Since the compression option only works for bulk loads (with the APPEND hint or SQL*Loader direct path), I believe that even if we enable compression, it will have no effect for my requirement.
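    To illustrate what I mean - a sketch with made-up names, based on my understanding that 10g basic compression only applies to direct-path operations:

    ```sql
    -- Made-up example table; COMPRESS here is 10g basic compression.
    CREATE TABLE t_comp
    ( id  NUMBER
    , val VARCHAR2(100)
    )
    COMPRESS
    PARTITION BY HASH (id) PARTITIONS 16;

    -- Conventional single-row insert: stored uncompressed in 10gR2
    INSERT INTO t_comp VALUES (1, 'row by row - not compressed');

    -- Direct-path (bulk) insert: rows are compressed
    INSERT /*+ APPEND */ INTO t_comp SELECT id, val FROM staging_tab;
    COMMIT;
    ```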

    Please let me know if you have different thoughts.
