
Thread: Keeping rows together in a partition

  1. #1
    Join Date
    May 2000
    Location
    Alex, Egypt
    Posts
    41

    Keeping rows together in a partition

    I have the following situation:

    A huge table, about one terabyte in size, is split into 50 partitions of roughly 20 GB each. The table holds customer transactions, and the partition key is the date of transaction.
    Most selects on the table are done by customer_id (which is indexed) and by the transaction date (which is the partition key).

    The miss ratio on this table is around 30%, which is very high and is causing a bottleneck in disk access.

    I would like to increase the hit ratio on this table by keeping rows that have the same customer_id in the same physical block.
    This was possible in Oracle 8i using clusters and partitioned views, but partitioned views are no longer supported in Oracle 9i, so we cannot use a cluster.

    Do you think an index-organized table is a good approach for such a huge table? (Please note that the record size is 800 bytes!)

    Is there some way of using two levels of partitioning, such as one level on the date and the other a hash partition on customer_id? (There are one million customers.)
    Amir Magdy

  2. #2
    Join Date
    Dec 2002
    Location
    Bangalore ( India )
    Posts
    2,434

    Re: Keeping rows together in a partition

    Originally posted by amir_magdy
    I would like to increase the hit ratio on this table by keeping rows that have the same customer_id in the same physical block.
    Consider ordering the data by customer_id prior to the inserts.. this of course means batch loads..

    Do you think an index-organized table is a good approach for such a huge table? (Please note that the record size is 800 bytes!)
    Not really; in fact it may make things worse.

    Is there some way of using two levels of partitioning, such as one level on the date and the other a hash partition on customer_id? (There are one million customers.)
    Composite partitioning.. range-hash. A rough sketch follows.
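    For illustration only, a range-hash composite partitioned version of the table might look something like this (table name, column names, subpartition count and range boundaries are all assumptions based on your description):
    Code:
    -- Hypothetical sketch: range partition by transaction date,
    -- hash subpartition by customer_id within each date range.
    CREATE TABLE customer_transactions
    ( cust_id    NUMBER         NOT NULL,
      tran_date  DATE           NOT NULL,
      tran_data  VARCHAR2(800)
    )
    PARTITION BY RANGE (tran_date)
    SUBPARTITION BY HASH (cust_id) SUBPARTITIONS 8
    (
      PARTITION p_2003_q1 VALUES LESS THAN (TO_DATE('01/04/2003','DD/MM/YYYY')),
      PARTITION p_2003_q2 VALUES LESS THAN (TO_DATE('01/07/2003','DD/MM/YYYY')),
      PARTITION p_2003_q3 VALUES LESS THAN (TO_DATE('01/10/2003','DD/MM/YYYY')),
      PARTITION p_2003_q4 VALUES LESS THAN (TO_DATE('01/01/2004','DD/MM/YYYY'))
    );
    Bear in mind this only narrows which subpartition a customer's rows land in; it does not by itself pack those rows into the same blocks.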


    Abhay.
    funky...

    "I Dont Want To Follow A Path, I would Rather Go Where There Is No Path And Leave A Trail."

    "Ego is the worst thing many have, try to overcome it & you will be the best, if not good, person on this earth"

  3. #3
    Join Date
    Aug 2002
    Location
    Colorado Springs
    Posts
    5,253
    I disagree on the composite partitioning -- maybe you are thinking of hash clusters, which would group the values.
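    For reference, a hash cluster stores rows with the same key value in the same block(s), which is the effect being asked for -- but a clustered table cannot also be partitioned, which is the constraint in this case. A minimal sketch, with invented names and sizing:
    Code:
    -- Hypothetical hash cluster: rows sharing a cust_id hash to the
    -- same blocks.  SIZE is the expected bytes stored per key value;
    -- HASHKEYS is roughly one hash key per customer.
    CREATE CLUSTER cust_txn_clu (cust_id NUMBER)
      SIZE 8192
      HASHKEYS 1000000;

    CREATE TABLE customer_transactions_clu
    ( cust_id    NUMBER,
      tran_date  DATE,
      tran_data  VARCHAR2(800)
    )
    CLUSTER cust_txn_clu (cust_id);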

    I think the best approach would be to take the partitions into which no more inserts are being made (which I suppose exist, since you are partitioning on date) and re-order them.

    Create a new non-partitioned table with the same columns, indexes, and foreign keys as the partitioned table, and insert into it all records from a single partition, ordering by customer_id (then optionally by date of transaction). Analyze it and perform a partition exchange with the original partition.

    If this data is completely static then you might also consider applying compression to the table. Data is only compressed on direct-path inserts, so regular inserts to the partitioned table are not affected, but when you insert into the non-partitioned table prior to the partition exchange you can use an insert append to get the data compressed. A rough sketch of the whole sequence follows.
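    Something along these lines, with the object names invented for illustration (assumes a local index on cust_id and a partition p_200301 that no longer receives inserts):
    Code:
    -- Compressed staging table matching the partitioned table's structure.
    CREATE TABLE txn_stage COMPRESS
    AS SELECT * FROM customer_transactions WHERE 1 = 0;

    -- Direct-path (append) insert, ordered so each customer's rows
    -- land in adjacent blocks; APPEND also triggers the compression.
    INSERT /*+ APPEND */ INTO txn_stage
    SELECT * FROM customer_transactions PARTITION (p_200301)
    ORDER  BY cust_id, tran_date;
    COMMIT;

    -- Matching index and statistics, then swap the segments.
    CREATE INDEX txn_stage_cust_i ON txn_stage (cust_id);
    ANALYZE TABLE txn_stage COMPUTE STATISTICS;

    ALTER TABLE customer_transactions
      EXCHANGE PARTITION p_200301 WITH TABLE txn_stage
      INCLUDING INDEXES WITHOUT VALIDATION;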
    David Aldridge,
    "The Oracle Sponge"

    Senior Manager, Business Intelligence Development
    XM Satellite Radio
    Washington, DC

    Oracle ACE

  4. #4
    Join Date
    Jun 2000
    Location
    Madrid, Spain
    Posts
    7,447
    Is the storage a SAN? RAID?

  5. #5
    Join Date
    May 2000
    Location
    Alex, Egypt
    Posts
    41
    Thanks for the advice,

    For the storage it's RAID.

    Inserting ordered by customer_id is not feasible. I will try composite partitioning with range-hash and check the performance.
    Amir Magdy

  6. #6
    Join Date
    Aug 2002
    Location
    Colorado Springs
    Posts
    5,253
    Originally posted by amir_magdy
    Inserting ordered by customer_id is not feasible. I will try composite partitioning with range-hash and check the performance.
    I don't think that's going to help you.
    David Aldridge,
    "The Oracle Sponge"

    Senior Manager, Business Intelligence Development
    XM Satellite Radio
    Washington, DC

    Oracle ACE

  7. #7
    Join Date
    May 2000
    Location
    ATLANTA, GA, USA
    Posts
    3,135
    =====
    A huge table, about one terabyte in size, is split into 50 partitions of roughly 20 GB each. The table holds customer transactions, and the partition key is the date of transaction. Most selects on the table are done by customer_id (which is indexed) and by the transaction date (which is the partition key).
    =====

    Isn't it a good idea to have the customer_id in the partition key, since the rows are always selected by customer_id and transaction date?

    Example:
    Code:
    CREATE TABLE sales
    ( cust_id        NUMBER,
      tran_date      DATE,
      other_columns  VARCHAR2(30))
    PARTITION BY RANGE (tran_date, cust_id)
    (
      PARTITION sales_p1 VALUES LESS THAN (to_date('01/01/1997','DD/MM/YYYY'), 10000)
        TABLESPACE users,
      PARTITION sales_p2 VALUES LESS THAN (to_date('01/01/1997','DD/MM/YYYY'), 20000)
        TABLESPACE users,
      PARTITION sales_p3 VALUES LESS THAN (to_date('01/01/1998','DD/MM/YYYY'), 10000)
        TABLESPACE users,
      PARTITION sales_p4 VALUES LESS THAN (to_date('01/01/1998','DD/MM/YYYY'), 20000)
        TABLESPACE users
    )
    ENABLE ROW MOVEMENT;

    Did you load the sorted data into the table as suggested by Slimdave?
    Tamil

  8. #8
    Join Date
    Aug 2002
    Location
    Colorado Springs
    Posts
    5,253
    Originally posted by tamilselvan
    Isn't it a good idea to have the customer_id in the partition key, since the rows are always selected by customer_id and transaction date?
    I wouldn't ... it adds complexity, and the performance would only stand a chance of improving if you were actually bulk-joining from another table that is also partitioned on customer_id. It's an unnecessary complication.
    David Aldridge,
    "The Oracle Sponge"

    Senior Manager, Business Intelligence Development
    XM Satellite Radio
    Washington, DC

    Oracle ACE

  9. #9
    Join Date
    Jun 2000
    Location
    Madrid, Spain
    Posts
    7,447
    If it's RAID, how can you ensure that the rows are physically packed into the same disk block, even when from Oracle's view they are in the same block?

  10. #10
    Join Date
    Aug 2002
    Location
    Colorado Springs
    Posts
    5,253
    They don't necessarily have to be -- the point is to get them clustered. If on average you have 30 records per customer per partition, you might fit them into one or two Oracle blocks instead of 30 Oracle blocks. Even if each Oracle block were split over two file-system blocks, you would still have reduced the I/Os against the table (on average) by a factor of somewhere between 10 and 30. A rough way to gauge how well clustered the rows are is shown below.
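    As a rough check (the table name here is an assumption), the clustering factor of the customer_id index indicates how scattered each customer's rows are: a value close to the number of table blocks means the rows are well clustered, while a value close to the number of rows means roughly one block read per row.
    Code:
    -- Hypothetical check: compare CLUSTERING_FACTOR with the table's
    -- block and row counts after gathering statistics.
    SELECT i.index_name,
           i.clustering_factor,
           t.blocks,
           t.num_rows
    FROM   user_indexes i
           JOIN user_tables t ON t.table_name = i.table_name
    WHERE  i.table_name = 'CUSTOMER_TRANSACTIONS';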
    David Aldridge,
    "The Oracle Sponge"

    Senior Manager, Business Intelligence Development
    XM Satellite Radio
    Washington, DC

    Oracle ACE
