DBAsupport.com Forums - Powered by vBulletin

Thread: Move cluster to a new tablespace - purging issue

  1. #1
    Join Date
    Dec 2001
    Posts
    203

    Move cluster to a new tablespace - purging issue

    Hi Guys,

    I have a cluster table which is growing unreasonably large. Even when I delete a significant number of records from it, the space is not freed up in the datafile. The only solution so far is copying the data to a temp table and truncating the original, but that takes a very long time.

    So I tried to move my CLUSTER table to a new tablespace, but Oracle does not allow the ALTER TABLE ... MOVE command on a clustered table.

    Can you suggest any other way to do this?
    sumit

  2. #2
    Join Date
    May 2000
    Location
    ATLANTA, GA, USA
    Posts
    3,135
    Avoid using CLUSTERs.

    No big database in the world uses Oracle CLUSTERs.

    Tamil

  3. #3
    Join Date
    Dec 2001
    Posts
    203


    Good one.

    But I am helpless. I cannot change my application's DB design overnight, so please help me.

    OK, can you please suggest the best purging procedure for a very large table, say one whose tablespace is around 560 GB?


    Regards
    sumit

  4. #4
    Join Date
    Aug 2002
    Location
    Colorado Springs
    Posts
    5,253
    When you copy the data to a temp table, are you using "create table ... nologging as select ..."?
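
    For reference, a minimal sketch of that approach (the table name, predicate and retention window here are placeholders, not your actual objects):

        -- copy only the rows you want to keep; NOLOGGING skips most redo for the CTAS itself
        CREATE TABLE keep_rows NOLOGGING AS
            SELECT *
            FROM   clustered_tab
            WHERE  load_date > SYSDATE - 90;
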
    David Aldridge,
    "The Oracle Sponge"

    Senior Manager, Business Intelligence Development
    XM Satellite Radio
    Washington, DC

    Oracle ACE

  5. #5
    Join Date
    May 2000
    Location
    ATLANTA, GA, USA
    Posts
    3,135
    Do you use an INDEX CLUSTER or a HASH CLUSTER?

    I think the SIZE parameter value is not set correctly. That can cause unexpected space growth in the cluster.

    How many tables are in the cluster?

    What is your main objective?
    1. Arrest space growth?
    2. Purge old data?
    3. Reclaim deleted space?

    Do you know what your CLUSTER KEY is?
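
    If it helps, you can check the cluster type, SIZE and key columns from the data dictionary (a sketch; the owner name is a placeholder):

        SELECT cluster_name, cluster_type, key_size, pct_free, pct_used
        FROM   dba_clusters
        WHERE  owner = 'APP_OWNER';

        -- columns that make up the cluster key
        SELECT cluster_name, table_name, tab_column_name, clu_column_name
        FROM   dba_clu_columns
        WHERE  owner = 'APP_OWNER';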

    Tamil

  6. #6
    Join Date
    Dec 2001
    Posts
    203
    Hi Tamilselvan

    1. I am using an index cluster.
    2. Yes, you might be right: the problem might be with the SIZE. I am pasting the storage parameters here, please advise. The current size of the table is around half a terabyte, and daily inserts are around 4-5 GB.
    (PCTFREE: 1
    PCTUSED: 98
    INITIAL extent: 1M
    NEXT extent: 1M
    MINEXTENTS: 1
    MAXEXTENTS: 2147483645
    PCTINCREASE: 0
    FREELISTS: 97
    )

    3. Only one table in the cluster.

    Most important:
    4. I cannot reclaim the deleted space. (I tried ALTER CLUSTER ... DEALLOCATE UNUSED, and that is not working either.)

    It is a production problem, so I am anxiously looking forward to your reply.
    sumit

  7. #7
    Join Date
    Dec 2001
    Posts
    203
    Hi,

    New update!

    I tried to deallocate unused space in the cluster.

    What I did (staging server)
    ---------------------------

    1. I noted down the total tablespace size and the actual segment size in that tablespace. (That tablespace has only one segment, i.e. the cluster.)

    2. The total number of records in the cluster table was 16 million.

    3. The actual size of the segment was 18 GB.

    4. I deleted half of the records (almost 9 million).

    5. Queried the actual size of the segment again (it was the same).

    6. ALTER CLUSTER <> DEALLOCATE UNUSED; was my next command.

    7. Inserted the same number of records (9 million) back into the same segment.

    8. Queried the actual size of the segment again, and this time it was only 2 MB more.
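
    Roughly, in SQL terms the test looked like this (a sketch only; <owner>, <cluster> and the delete predicate are placeholders):

        -- measure the segment before and after each step
        SELECT segment_name, bytes/1024/1024 AS mb
        FROM   dba_segments
        WHERE  owner = '<OWNER>'
        AND    segment_type = 'CLUSTER';

        DELETE FROM cluster_tab WHERE <old-data predicate>;   -- roughly 9 million rows
        COMMIT;

        -- releases space above the high-water mark only
        ALTER CLUSTER <cluster> DEALLOCATE UNUSED;

        -- re-insert about 9 million rows, then re-run the dba_segments query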


    My questions
    ------------

    1. Am I on the right track?

    2. Why is it 2 MB more when I am inserting the same number of records? Is there some overhead?

    3. Is there any other way to verify the same thing (that the deleted space is being reused)?



    Thanks and Regards
    sumit

  8. #8
    Join Date
    May 2000
    Location
    ATLANTA, GA, USA
    Posts
    3,135
    FREELISTS 97....You must be kidding.

    First of all, I would not use a cluster for a single table, even though it is permitted (see the SINGLE TABLE clause in the cluster definition).

    Second, a cluster is not suitable for transaction-type tables to which you keep adding rows daily. It is best suited for static tables.

    It is hard to determine the SIZE value in the cluster unless you know the total number of rows.

    All you have to do is create a normal table from the cluster. Your application will not fail, because it does not know whether you are using a cluster or just a normal table.
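
    For example, a rough sketch of that conversion (the names are placeholders, and indexes, grants and constraints would have to be re-created as well):

        CREATE TABLE tab_heap NOLOGGING AS
            SELECT * FROM clustered_tab;

        -- drop the clustered copy and put the heap table in its place
        DROP TABLE clustered_tab;
        RENAME tab_heap TO clustered_tab;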

    If you still want to use a cluster, then create a temp table sorted by the cluster key, then re-insert the rows into the cluster again with a suitable SIZE value.
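
    A sketch of that route, assuming an index cluster keyed on one column (all names and the SIZE figure are placeholders):

        -- 1. stage the data sorted by the cluster key
        CREATE TABLE stage NOLOGGING AS
            SELECT * FROM clustered_tab ORDER BY cluster_key_col;

        -- 2. drop and re-create the cluster with a SIZE close to the bytes stored per key value
        DROP CLUSTER app_cluster INCLUDING TABLES;
        CREATE CLUSTER app_cluster (cluster_key_col NUMBER) SIZE 2048;
        CREATE INDEX app_cluster_idx ON CLUSTER app_cluster;

        -- 3. re-create the table in the cluster and reload in key order
        CREATE TABLE clustered_tab (... original column list ...)
            CLUSTER app_cluster (cluster_key_col);
        INSERT INTO clustered_tab SELECT * FROM stage;
        COMMIT;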

    Please post the cluster definition.

    Tamil

  9. #9
    Join Date
    Dec 2001
    Posts
    203
    FREELISTS 97....You must be kidding.
    No, it really is 97. Is it good or does it have a negative impact? If negative, why?

    It is hard to determine the SIZE value in the cluster unless you know the total number of rows.
    Approximately 2,900 million records, which is almost 560 GB of actual segment size.

    If you still want to use a cluster, then create a temp table sorted by the cluster key, then re-insert the rows into the cluster again with a suitable SIZE value.
    Presently, there is a backlog of old data. First I would like to purge that. The issues are:
    1. Export is hanging. (Any solution?)
    2. Truncate is taking an unreasonably long time. (Any suggestion to make it faster?)
    3. I suspect fragmentation as well.

    I will post the cluster definition soon.

    Regards
    sumit

  10. #10
    Join Date
    May 2000
    Location
    ATLANTA, GA, USA
    Posts
    3,135
    One time you said 16M rows, now you say 2,900M rows.
    Which is true?

    By setting FREELISTS that high, you are wasting space, which hurts performance.

    For a 560 GB table, you need to set proper INITIAL and NEXT extent sizes (400 MB is a good starting point).
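
    For example, the storage clause on a rebuilt cluster might look something like this (a sketch only; the exact figures depend on your data volume and growth):

        CREATE CLUSTER app_cluster (cluster_key_col NUMBER)
            SIZE 2048
            STORAGE (INITIAL 400M NEXT 400M PCTINCREASE 0);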

    Tamil
