
Thread: Speed if file is Compressed

  1. #11
    Join Date: Nov 2002 | Location: Geneva Switzerland | Posts: 3,142

    Re: Re: Speed if file is Compressed

    Originally posted by abhaysk
    Whichever compression algorithm is used, it's obvious that the decompression process will take its time compared with retrieval of uncompressed data.
    I think the point is that compressed data may occupy fewer blocks, so there could be less physical I/O - the price paid will be more CPU time. For most DBs that's a good trade-off.

    The doc talks about using it for DWH - bulk loads and no further changes - and only says there is no penalty for QUERIES. My reading is that updates & inserts will be penalised - due to an increased chance of row migration?
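
    A rough sketch of why the doc ties compression to bulk loads (assuming 9i/10g-style basic compression; the table names below are made up): only direct-path operations actually store the rows compressed, so a DWH load pattern fits and conventional DML doesn't.
    Code:
    -- Illustration only - basic COMPRESS takes effect for direct-path loads,
    -- not for conventional DML (table names are hypothetical).
    create table sales_hist compress
    as select * from sales_stage;          -- CTAS is direct-path: stored compressed

    insert /*+ APPEND */ into sales_hist
    select * from sales_stage;             -- direct-path insert: also compressed

    insert into sales_hist
    select * from sales_stage;             -- conventional insert: rows go in uncompressed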

  2. #12
    Join Date: Dec 1999 | Location: Purgatory | Posts: 346
    The other disadvantage is that once your data is compressed, there's no going back. You can't uncompress it.

  3. #13
    Join Date: Dec 2002 | Location: Bangalore (India) | Posts: 2,434

    Re: Re: Re: Speed if file is Compressed

    Originally posted by DaPi
    I think the point is that compressed data may occupy fewer blocks, so there could be less physical I/O - the price paid will be more CPU time. For most DBs that's a good trade-off.
    But you will have to pay for it, as your gets will shoot up (OS gets + Oracle gets) - and of course, as you said, CPU is the resource most consumed by any kind of operation (select/update/delete/insert).
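
    If you want to see the gets and CPU trade-off on your own data, a quick SQL*Plus check along these lines works (the two table names are made up - one compressed copy and one uncompressed copy of the same data):
    Code:
    -- Illustration only: compare logical I/O and elapsed time for the same
    -- query against a compressed and an uncompressed copy of the data.
    set autotrace traceonly statistics
    set timing on

    select count(*) from my_table_comp;     -- compressed copy
    select count(*) from my_table_nocomp;   -- uncompressed copy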

    Abhay.
    funky...

    "I Dont Want To Follow A Path, I would Rather Go Where There Is No Path And Leave A Trail."

    "Ego is the worst thing many have, try to overcome it & you will be the best, if not good, person on this earth"

  4. #14
    Join Date: Aug 2002 | Location: Colorado Springs | Posts: 5,253

    Re: Re: Re: Speed if file is Compressed

    Originally posted by DaPi
    The doc talks about using it for DWH - bulk loads and no further changes - and only says there is no penalty for QUERIES. My reading is that updates & inserts will be penalised - due to an increased chance of row migration?
    Possibly, yes, but more because of the increased time spent uncompressing and recompressing the data block. You can still leave pctfree 30 to leave room for expansion of rows - but you have to consider whether the new/modified data will compress as well as the old, which can make it trickier.
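
    A minimal sketch of that approach (the table names are made up): build the compressed table with free space left in each block for rows to grow, or rebuild an existing one the same way.
    Code:
    -- Keep 30% of each block free for row expansion even though the data
    -- is compressed (names are hypothetical).
    create table fact_hist compress pctfree 30
    as select * from fact_stage;

    -- or rebuild an existing table with the same settings:
    alter table fact_hist move compress pctfree 30;
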
    David Aldridge,
    "The Oracle Sponge"

    Senior Manager, Business Intelligence Development
    XM Satellite Radio
    Washington, DC

    Oracle ACE

  5. #15
    Join Date: Aug 2002 | Location: Colorado Springs | Posts: 5,253
    Originally posted by Horace
    Other disadvantage, is that once your data is compressed, there's no going back. You can't uncompress it.
    Ah, not so ...
    Code:
    SQL> create table my_table compress pctfree 0
      2  as select * from dba_objects;
    
    Table created.
    
    SQL> analyze table my_table compute statistics;
    
    Table analyzed.
    
    SQL> select blocks, empty_blocks from user_tables
      2  where table_name = 'MY_TABLE';
    
        BLOCKS EMPTY_BLOCKS
    ---------- ------------
           193           62
    
    SQL> alter table my_table nocompress;
    
    Table altered.
    
    SQL> alter table my_table move nocompress;
    
    Table altered.
    
    SQL> analyze table my_table compute statistics;
    
    Table analyzed.
    
    SQL> select blocks, empty_blocks from user_tables
      2  where table_name = 'MY_TABLE';
    
        BLOCKS EMPTY_BLOCKS
    ---------- ------------
           600           39
    
    SQL>
    Good job too, because in 9i you couldn't add columns to a compressed table. Or drop them.
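
    For reference, the two ALTER statements in the listing above do different jobs - roughly:
    Code:
    -- changes the attribute only: future direct-path loads won't be compressed,
    -- existing blocks are left alone
    alter table my_table nocompress;

    -- rebuilds the segment: the existing rows are stored uncompressed
    alter table my_table move nocompress;
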
    David Aldridge,
    "The Oracle Sponge"

    Senior Manager, Business Intelligence Development
    XM Satellite Radio
    Washington, DC

    Oracle ACE

  6. #16
    Join Date: Oct 2002 | Posts: 807

    Re: Re: Re: Speed if file is Compressed

    Originally posted by DaPi
    I think the point is that compressed data may occupy fewer blocks, so there could be less physical I/O - the price paid will be more CPU time. For most DBs that's a good trade-off.

    The doc talks about using it for DWH - bulk loads and no further changes - and only says there is no penalty for QUERIES. My reading is that updates & inserts will be penalised - due to an increased chance of row migration?
    Para 1, Para 2 - Both true.

    Para 2 - As a rule of thumb, I prefer to compress only read-only data. That's just me.
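
    One way to follow that rule of thumb (the names here are made up): compress the historical data, then put it beyond modification by making its tablespace read-only.
    Code:
    -- Illustration only: rebuild last year's data compressed into its own
    -- tablespace, then make that tablespace read-only.
    alter table sales_2002 move compress tablespace hist_2002;
    alter tablespace hist_2002 read only;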

  7. #17
    Join Date: Jun 2000 | Location: Madrid, Spain | Posts: 7,447
    Originally posted by waitecj
    Obvious perhaps, but not noticeable on my test server.

    Try it with 100 concurrent sessions, not just you on your own.

  8. #18
    Join Date: Jul 2002 | Location: Northampton, England | Posts: 612
    true.

  9. #19
    Join Date: Dec 1999 | Location: Purgatory | Posts: 346
    Fair point re: uncompressing, Dave, and well demonstrated. I think I meant that you couldn't alter the table's DDL once it was compressed, but apparently that was only the case in early versions of 9i.
