-
Re: Re: Speed if file is Compressed
Originally posted by abhaysk
Whichever compression algorithm is used, it's obvious that the decompression process will take its time, as against retrieval of uncompressed data.
I think the point is that compressed data may occupy fewer blocks, so there could be less physical IO - the price paid will be more CPU time. For most db's a good trade-off.
The doc talks about using it for DWH - bulk loads and no further changes - and only says no penalty for QUERIES. My reading is that updates & inserts will be penalised - due to increased chance of row migration?
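One way to check whether updates are in fact causing row migration is to watch CHAIN_CNT in USER_TABLES, which counts chained and migrated rows together (a sketch; MY_TABLE is illustrative, and ANALYZE is used because CHAIN_CNT is only populated by ANALYZE, not by DBMS_STATS):

```sql
-- After a batch of updates against the compressed table,
-- refresh statistics and look at the chained/migrated row count.
analyze table my_table compute statistics;

-- A jump in CHAIN_CNT after updates suggests rows grew on update
-- and migrated out of their tightly-packed compressed blocks.
select chain_cnt, num_rows
from   user_tables
where  table_name = 'MY_TABLE';
```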
-
Other disadvantage, is that once your data is compressed, there's no going back. You can't uncompress it.
-
Re: Re: Re: Speed if file is Compressed
Originally posted by DaPi
I think the point is that compressed data may occupy fewer blocks, so there could be less physical IO - the price paid will be more CPU time. For most db's a good trade-off.
But you will have to pay for it, as your GETS will shoot up (OS gets plus Oracle gets), and of course, as you said, the CPU is the resource most consumed in any kind of operation (select/update/delete/insert).
Abhay.
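The gets claim is easy to measure: SQL*Plus AUTOTRACE reports consistent gets side by side for the two cases (a sketch; MY_TABLE_COMP and MY_TABLE_NOCOMP are hypothetical copies of the same data, one compressed, one not):

```sql
set autotrace traceonly statistics

-- Fewer blocks usually means fewer consistent gets for a full scan
-- of the compressed copy; compare the "consistent gets" lines.
select /*+ full(t) */ count(*) from my_table_comp t;
select /*+ full(t) */ count(*) from my_table_nocomp t;

set autotrace off
```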
-
Re: Re: Re: Speed if file is Compressed
Originally posted by DaPi
The doc talks about using it for DWH - bulk loads and no further changes - and only says no penalty for QUERIES. My reading is that updates & inserts will be penalised - due to increased chance of row migration?
Possibly, yes, but more because of the increased time spent uncompressing and recompressing the data block. You can still leave pctfree 30 to leave room for expansion of rows - but you have to consider whether the new/modified data will compress as well as the old, which can make things trickier.
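A sketch of that suggestion (table and column names are illustrative):

```sql
-- Create the table compressed, but keep 30% of each block free
-- so rows that grow on update have room before they migrate.
create table sales_hist
  pctfree 30
  compress
as
  select *
  from   sales
  where  sale_date < date '2004-01-01';
```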
-
Originally posted by Horace
Other disadvantage, is that once your data is compressed, there's no going back. You can't uncompress it.
Ah, not so ...
Code:
SQL> create table my_table compress pctfree 0
  2  as select * from dba_objects;

Table created.

SQL> analyze table my_table compute statistics;

Table analyzed.

SQL> select blocks, empty_blocks from user_tables
  2  where table_name = 'MY_TABLE';

    BLOCKS EMPTY_BLOCKS
---------- ------------
       193           62

SQL> alter table my_table nocompress;

Table altered.

SQL> alter table my_table move nocompress;

Table altered.

SQL> analyze table my_table compute statistics;

Table analyzed.

SQL> select blocks, empty_blocks from user_tables
  2  where table_name = 'MY_TABLE';

    BLOCKS EMPTY_BLOCKS
---------- ------------
       600           39

SQL>
Good job too, cos in 9i you couldn't add columns to a compressed table. Or drop them.
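For anyone stuck on a 9i release with that restriction, the uncompress/alter/recompress route is one workaround (a sketch only; MY_TABLE, LOAD_BATCH_ID and MY_TABLE_PK are hypothetical, and note that ALTER TABLE ... MOVE rebuilds the table, so expect downtime):

```sql
-- Decompress the table, change the DDL, then recompress.
alter table my_table move nocompress;
alter table my_table add (load_batch_id number);
alter table my_table move compress;

-- MOVE leaves existing indexes UNUSABLE; rebuild them afterwards.
alter index my_table_pk rebuild;
```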
-
Re: Re: Re: Speed if file is Compressed
Originally posted by DaPi
I think the point is that compressed data may occupy fewer blocks, so there could be less physical IO - the price paid will be more CPU time. For most db's a good trade-off.
The doc talks about using it for DWH - bulk loads and no further changes - and only says no penalty for QUERIES. My reading is that updates & inserts will be penalised - due to increased chance of row migration?
Para 1, Para 2 - Both true.
Para 2 - As a rule of thumb, I prefer to compress only read-only data. That's just me.
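In a DWH that rule of thumb maps naturally onto partitions: keep the current partition uncompressed for ongoing loads, and compress a partition only once its period is closed (a sketch; SALES_HIST and the partition name are hypothetical):

```sql
-- Rebuild a closed, read-only partition in compressed form.
alter table sales_hist move partition p_2003_q4 compress;

-- The MOVE marks that partition's local indexes UNUSABLE;
-- rebuild them before queries need them again.
alter table sales_hist modify partition p_2003_q4
  rebuild unusable local indexes;
```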
-
Originally posted by waitecj
Obvious perhaps, but not noticeable on my test server.
Try it with 100 concurrent sessions, not just your own.
-
Fair point re uncompressing, Dave, and well demonstrated. I think I meant that you couldn't alter the DDL once compressed, but apparently that was only in early releases of 9i.