DBAsupport.com Forums

Thread: Non-partition table to partition

  1. #1
    Join Date
    Mar 2006
    Location
    Charlotte, NC
    Posts
    865

    Non-partition table to partition

    Hi,

Could you please share your views on converting a non-partitioned table, about 600 GB in size, to a partitioned table?

    Option 1: use Data Pump
    Option 2: pre-create the partitioned table and insert with the APPEND hint
    Option 3: use the DBMS_REDEFINITION package

    As the table size is huge, I prefer option 1.
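    For what it is worth, here is a rough sketch of what I have in mind for option 1. The directory, schema, and table names are only placeholders, and the partitioned copy would be pre-created before the import:

        # export the existing table
        expdp system tables=APP.BIG_TAB directory=DP_DIR dumpfile=big_tab_%U.dmp logfile=big_tab_exp.log parallel=8

        # pre-create APP.BIG_TAB_PART as the new partitioned table, then append the data into it
        impdp system tables=APP.BIG_TAB directory=DP_DIR dumpfile=big_tab_%U.dmp logfile=big_tab_imp.log remap_table=APP.BIG_TAB:BIG_TAB_PART table_exists_action=append content=data_only parallel=8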

    Thanks in advance.

    Thanks,
    Vijay Tummala

    Try hard to get what you like OR you will be forced to like what you get.

  2. #2
    Join Date
    Mar 2007
    Location
    Ft. Lauderdale, FL
    Posts
    3,555
    Quote Originally Posted by vnktummala
    Could you please share your views on converting a non-partitioned table, about 600 GB in size, to a partitioned table?

    Option 1: use Data Pump
    Option 2: pre-create the partitioned table and insert with the APPEND hint
    Option 3: use the DBMS_REDEFINITION package
    We recently used option #2 for a two-billion-row table; we chose #2 because it involves less physical I/O than option #1 and also allows more "control" over the whole process.

    1- Pre-create the partitioned table; in our case range-partitioned by date per YYYYMM, with one tablespace per "year".
    2- Start multiple insert streams; in our case one job per year, so each job loaded twelve YYYYMM partitions using the APPEND hint.
    3- Build the indexes; in our case all of them locally partitioned.
    4- Gather fresh optimizer statistics.

    It worked like a charm.
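    Stripped down, the skeleton looked roughly like this; the table, column, tablespace, and index names below are made up for illustration, not our real ones:

        -- 1) pre-create the partitioned table, one YYYYMM range partition per month
        CREATE TABLE sales_part (
          sale_id   NUMBER,
          sale_date DATE,
          amount    NUMBER
        )
        PARTITION BY RANGE (sale_date) (
          PARTITION p200901 VALUES LESS THAN (TO_DATE('2009-02-01','YYYY-MM-DD')) TABLESPACE ts_2009,
          PARTITION p200902 VALUES LESS THAN (TO_DATE('2009-03-01','YYYY-MM-DD')) TABLESPACE ts_2009,
          -- ... one partition per YYYYMM, one tablespace per year ...
          PARTITION p201012 VALUES LESS THAN (TO_DATE('2011-01-01','YYYY-MM-DD')) TABLESPACE ts_2010
        );

        -- 2) one insert stream per year, each run from its own session/job
        INSERT /*+ APPEND */ INTO sales_part
        SELECT * FROM sales_old
        WHERE sale_date >= TO_DATE('2009-01-01','YYYY-MM-DD')
          AND sale_date <  TO_DATE('2010-01-01','YYYY-MM-DD');
        COMMIT;

        -- 3) and 4) once all streams finish: local indexes, then fresh stats
        CREATE INDEX sales_part_ix1 ON sales_part (sale_id) LOCAL;
        EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'SALES_PART', cascade => TRUE);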
    Pablo (Paul) Berzukov

    Author of Understanding Database Administration, available at Amazon and other bookstores.

    Disclaimer: Advice is provided to the best of my knowledge but no implicit or explicit warranties are given. Since the advisor explicitly encourages testing any and all suggestions in a non-production test environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.

  3. #3
    Join Date
    Mar 2006
    Location
    Charlotte, NC
    Posts
    865
    Thanks for your input, Paul.

    My case is similar: I have 1.6 billion rows in that table, and the new partition key has the same format as yours. But I don't have any index on the existing table.

    Choosing option #2 is costlier for me because every insert stream I start has to do a full table scan (FTS) of the existing table. So I still believe option #1 is more feasible for me.

    Appreciate your input.

    Thanks,
    Vijay Tummala

    Try hard to get what you like OR you will be forced to like what you get.

  4. #4
    Join Date
    Mar 2007
    Location
    Ft. Lauderdale, FL
    Posts
    3,555
    Quote Originally Posted by vnktummala
    Choosing option #2 is costlier for me because every insert stream I start has to do a full table scan (FTS) of the existing table.
    I do agree all streams will be doing an FTS on the source table, but since the idea is to run all of them at the same time, only one of them will be doing actual physical reads; all the others will be getting blocks that are already cached.
    Pablo (Paul) Berzukov

    Author of Understanding Database Administration, available at Amazon and other bookstores.

    Disclaimer: Advice is provided to the best of my knowledge but no implicit or explicit warranties are given. Since the advisor explicitly encourages testing any and all suggestions in a non-production test environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.

  5. #5
    Join Date
    May 2002
    Posts
    2,645
    Quote Originally Posted by vnktummala
    Thanks for your input, Paul.

    My case is similar: I have 1.6 billion rows in that table, and the new partition key has the same format as yours. But I don't have any index on the existing table.

    Choosing option #2 is costlier for me because every insert stream I start has to do a full table scan (FTS) of the existing table. So I still believe option #1 is more feasible for me.

    Appreciate your input.

    Thanks,
    insert into new_partitioned_table select * from old_table;

    Why can't you parallelize that operation (or a CTAS statement)?
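    Something along these lines, for instance; the degree of parallelism is just an example:

        -- enable parallel DML in the session, then run a direct-path parallel insert
        ALTER SESSION ENABLE PARALLEL DML;

        INSERT /*+ APPEND PARALLEL(t, 8) */ INTO new_partitioned_table t
        SELECT /*+ PARALLEL(s, 8) */ * FROM old_table s;

        COMMIT;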

  6. #6
    Join Date
    Mar 2006
    Location
    Charlotte, NC
    Posts
    865
    Thanks, Paul. I have done this using Data Pump: the export took 2 hours for 1.6 billion rows and the import took 5 hours. However, I have another very large table, with 9 billion rows, for which I need to change the partition column; I will go with your suggestion for that table.

    Thanks, Stecal.

    insert into new_partitioned_table select * from old_table;
    It's a big table with 1.6 billion rows, so I don't want to risk inserting everything in a single transaction without any commit. Instead I can use insert with the APPEND hint and start multiple insert streams, as Paul suggested.

    I don't think CTAS is applicable here, because the existing table is non-partitioned and the new table is partitioned.

    Thanks,
    Vijay Tummala

    Try hard to get what you like OR you will be forced to like what you get.

  7. #7
    Join Date
    May 2002
    Posts
    2,645
    Nothing is stopping you from doing a CTAS straight into a partitioned table.
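    Sketching it out with a made-up date column and partition bounds:

        -- CTAS straight into a range-partitioned table, direct-path and in parallel
        CREATE TABLE new_partitioned_table
        NOLOGGING
        PARTITION BY RANGE (txn_date) (
          PARTITION p201001 VALUES LESS THAN (TO_DATE('2010-02-01','YYYY-MM-DD')),
          PARTITION p201002 VALUES LESS THAN (TO_DATE('2010-03-01','YYYY-MM-DD')),
          -- ... remaining monthly partitions ...
          PARTITION pmax VALUES LESS THAN (MAXVALUE)
        )
        PARALLEL 8
        AS
        SELECT * FROM old_table;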

  8. #8
    Join Date
    Mar 2006
    Location
    Charlotte, NC
    Posts
    865
    Thanks, Stecal. I will test this option too.

    Thanks,
    Vijay Tummala

    Try hard to get what you like OR you will be forced to like what you get.
