
Thread: Rollback Segments

  1. #11
    Join Date
    Nov 2002
    Posts
    80
    You select 1 or 10 or 150 columns - the execution time will be (roughly) the same.
    This interests me. I've used DB2, and this isn't the case there - DB2 uses a different execution path above a certain number of columns (16, or maybe 22 - I can't remember the exact figure). That was after raising a call with IBM.

    I've tried to find out how Oracle handles this; do you have any links?

    TIA

  2. #12
    Join Date
    May 2000
    Location
    ATLANTA, GA, USA
    Posts
    3,135
    The implementation of 1000 columns in a table needs to be studied carefully. Oracle cannot read 1000 columns' worth of data in a single operation, even if the whole row fits in one data block.

    Oracle needs 4 I/O operations on a table that has 1000 columns.
    It reads the first 255 columns, then the next 255, then another 255, and finally the remaining 235. So read performance is going to be hit badly when the table has 1000 columns.
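
    To put rough numbers on that split (an illustration only): a row piece holds at most 255 columns, so a fully populated 1000-column row is broken into 4 pieces.
    Code:
    -- Rough arithmetic only: a row piece holds at most 255 columns,
    -- so a fully populated 1000-column row is split into 4 row pieces.
    SELECT CEIL(1000/255)  AS row_pieces,            -- 4
           1000 - 3*255    AS columns_in_last_piece  -- 235
      FROM dual;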

    Tamil

  3. #13
    Join Date
    Feb 2000
    Location
    Singapore
    Posts
    1,758
    Originally posted by tamilselvan
    The implementation of 1000 columns in a table needs to be studied carefully. Oracle cannot read 1000 columns' worth of data in a single operation, even if the whole row fits in one data block.

    Oracle needs 4 I/O operations on a table that has 1000 columns.
    It reads the first 255 columns, then the next 255, then another 255, and finally the remaining 235. So read performance is going to be hit badly when the table has 1000 columns.

    Tamil
    Hi tamilselvan,
    Can you elaborate on this, or provide any links to support this statement?
    Sanjay G.
    Oracle Certified Professional 8i, 9i.

    "The degree of normality in a database is inversely proportional to that of its DBA"

  4. #14
    Join Date
    Dec 2002
    Location
    Bangalore ( India )
    Posts
    2,434
    First of all, one row of 1000 columns cannot fit into a single block, as Tamil pointed out, provided all the columns have data.

    The block size ranges from 2K to 32K. Assume the maximum (32K)...

    PS: the demo below uses a block size of 8K.

    Code:
    TEST_ABHAY> set trimspool on
    TEST_ABHAY> set autotrace off
    TEST_ABHAY> ed
    Wrote file afiedt.buf
    
      1* create table thousand_col_tab (id1 number(1)) storage (initial 8K next 8K pctincrease 0 minextents 1 maxextents unlimited)
    TEST_ABHAY> /
    
    Table created.
    
    Elapsed: 00:00:01.00
    TEST_ABHAY> ed
    Wrote file afiedt.buf
    
      1  begin
      2  for i in 2..1000 loop
      3  execute immediate ('alter table thousand_col_tab add id'||i||' number(1)');
      4  end loop;
      5* end;
    TEST_ABHAY> /
    
    PL/SQL procedure successfully completed.
    
    Elapsed: 00:00:51.02
    TEST_ABHAY> analyze table thousand_col_tab compute statistics;
    
    Table analyzed.
    
    Elapsed: 00:00:02.08
    TEST_ABHAY> select *  from dba_tables where table_name=upper('thousand_col_tab');
    
    OWNER                          TABLE_NAME                     TABLESPACE_NAME                INITIAL_EXTENT          NEXT_EXTENT NUM_ROWS               BLOCKS         EMPTY_BLOCKS 
    ------------------------------ ------------------------------ ------------------------------ -------------- -------------------- -------- -------------------- -------------------- 
    ABHAY                          THOUSAND_COL_TAB               USERS_TS_DICT_MANAGED                   16384                 8192        0                    0                    1 
    
    Elapsed: 00:00:02.06
    TEST_ABHAY> 
    TEST_ABHAY> Insert into thousand_col_tab(id1) values(1);
    
    1 row created.
    
    Elapsed: 00:00:01.01
    TEST_ABHAY> begin
      2  for i in 2..1000 loop
      3  execute immediate ('update thousand_col_tab set id'||i||'=id1');
      4  end loop;
      5  end;
      6  /
    
    PL/SQL procedure successfully completed.
    
    Elapsed: 00:01:05.01
    TEST_ABHAY> commit;
    
    Commit complete.
    
    Elapsed: 00:00:00.00
    TEST_ABHAY> analyze table thousand_col_tab compute statistics;
    
    Table analyzed.
    
    Elapsed: 00:00:01.02
    TEST_ABHAY> select *  from dba_tables where table_name=upper('thousand_col_tab');
    
    OWNER                          TABLE_NAME                     TABLESPACE_NAME                INITIAL_EXTENT          NEXT_EXTENT NUM_ROWS               BLOCKS         EMPTY_BLOCKS 
    ------------------------------ ------------------------------ ------------------------------ -------------- -------------------- -------- -------------------- -------------------- 
    ABHAY                          THOUSAND_COL_TAB               USERS_TS_DICT_MANAGED                   16384                 8192        1                  746                    0 
    
    Elapsed: 00:00:00.02
    TEST_ABHAY> 
    TEST_ABHAY> select (blocks)*8192/1024 "Size of tab in KB" from dba_tables where table_name=upper('thousand_col_tab');
    
       Size of tab in KB
    --------------------
                    5968
    
    Elapsed: 00:00:00.05
    TEST_ABHAY> select count(*) from dba_extents where segment_name=upper('thousand_col_tab');
    
                COUNT(*)
    --------------------
                     747
    
    Elapsed: 00:00:04.07
    
    TEST_ABHAY> select count(1) from thousand_col_tab;
    
                COUNT(1)
    --------------------
                       1
    
    TEST_ABHAY> spool off
    I have used the datatype NUMBER(1), which uses the least space on disk.
    So, to have one row with all columns filled with data, I see there are 746 data blocks (5968 KB)...
    If we take the data to be roughly 5000 KB per row on disk,
    and assume the maximum db_block_size (32 KB), we will still have around 150+ data blocks for one row.
    Assuming DB_FILE_MULTIBLOCK_READ_COUNT is at its maximum (32),

    fetching one row from disk into the DB cache will take about 150/32 = 4.6 I/Os...
    I think this is the reason why Tamil was mentioning 4 I/Os minimum.

    Rgds
    Abhay.
    funky...

    "I Dont Want To Follow A Path, I would Rather Go Where There Is No Path And Leave A Trail."

    "Ego is the worst thing many have, try to overcome it & you will be the best, if not good, person on this earth"

  5. #15
    Join Date
    Dec 2000
    Location
    Ljubljana, Slovenia
    Posts
    4,439
    Abhay,

    Once again you are drawing (wrong) conclusions from bad/inappropriate tests... Let's see:
    Originally posted by abhaysk
    I have used the datatype NUMBER(1), which uses the least space on disk.
    Wrong. Not that it really matters much for the results you've gotten from your tests, but I'd just like to mention that the datatype NUMBER(1) by no means uses the least space on disk. The value 0 will indeed use only 1 byte in the Oracle datafile, but any other number that fits into this datatype will use 2 or 3 bytes! On the other hand, any non-null value stored in a CHAR(1) or VARCHAR(1) column requires only 1 byte of disk storage. So generally, data stored in CHAR(1) consumes less space than the same data stored in NUMBER(1).
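
    A quick way to verify those byte counts (not part of the original test; VSIZE simply reports the internal storage length of a value):
    Code:
    -- VSIZE reports the stored length of a value in bytes
    SELECT VSIZE(0)   AS num_zero,       -- 1 byte
           VSIZE(1)   AS num_one,        -- 2 bytes
           VSIZE(-1)  AS num_minus_one,  -- 3 bytes
           VSIZE('1') AS char_one        -- 1 byte
      FROM dual;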

    Now, about your tests. I reran them and got the same results. However, that doesn't mean you can really draw any conclusions from them. Why? Because you can easily perform other tests that carry out virtually the same actions on that 1000-column row, yet result in totally different storage consumption for that same row.

    You inserted your row with only the first column having a value; all the other columns were null. Then you performed 999 updates on that row to fill all the other columns with the same value. This really does result in quite excessive space consumption (and I'm not quite sure how to explain it).

    But what if you do it the other way? What if you simply perform a single insert with all the columns populated?
    Code:
    DECLARE 
      v_cmd VARCHAR2(32000) := 'insert into thousand_col_tab values(1'; 
    BEGIN 
      FOR i IN 2 .. 1000 LOOP 
        v_cmd := v_cmd || ',1'; 
      END LOOP; 
      v_cmd := v_cmd || ')'; 
      EXECUTE IMMEDIATE (v_cmd); 
    END;
    /
    Your query against the table statistics will show that this row consumes only a single block! With the same data as yours, which consumed about 750 blocks!
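
    To repeat the block-count check for this variation (assuming the table is emptied first, e.g. with a TRUNCATE, so the previous test doesn't skew the figures):
    Code:
    TRUNCATE TABLE thousand_col_tab;
    -- ... run the single-insert PL/SQL block above ...
    ANALYZE TABLE thousand_col_tab COMPUTE STATISTICS;
    SELECT blocks FROM user_tables WHERE table_name = 'THOUSAND_COL_TAB';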

    But it can get even stranger! Let's do it the same way you did (with one insert and 999 updates), but this time so that the initial insert puts the data into the last column instead of into the first one.
    Code:
    INSERT INTO thousand_col_tab (id1000) VALUES (1);
    
    BEGIN 
      FOR i IN 1..999 LOOP 
      EXECUTE IMMEDIATE ('update thousand_col_tab set id'||i||'=id1000'); 
      END LOOP; 
    END;
    /
    Again, your query against DBA_TABLES will show that this row was stored in a single block, not in 750 blocks as in your case. And all that was different was the column into which the initially inserted value was stored!

    Originally posted by abhaysk
    I have used the datatype NUMBER(1), which uses the least space on disk.
    So, to have one row with all columns filled with data, I see there are 746 data blocks (5968 KB)...
    If we take the data to be roughly 5000 KB per row on disk,
    and assume the maximum db_block_size (32 KB), we will still have around 150+ data blocks for one row.
    Assuming DB_FILE_MULTIBLOCK_READ_COUNT is at its maximum (32),

    fetching one row from disk into the DB cache will take about 150/32 = 4.6 I/Os...
    I think this is the reason why Tamil was mentioning 4 I/Os minimum.
    I hope my two examples have shown that your conclusions above are wrong, because your test is far from representative.

    Now, the fact which tamilselvan pointed out is true and correct, but not for the reasons you have assumed. Even if the 1000-column row is stored in a single block, Oracle will still need to perform 4 I/O operations to get it, using the CONTINUED ROW FETCH mechanism, as if the row were chained across 4 separate blocks! This anomaly was first pointed out by Dave Ensor, and Jonathan Lewis describes the phenomenon in his book "Practical Oracle 8i" on pages 74 and 75.
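
    One way to watch this happening (a sketch only; 'table fetch continued row' is the standard session statistic for continued row fetches, though exactly how intra-block row pieces are counted can vary between versions):
    Code:
    SELECT n.name, s.value
      FROM v$mystat s, v$statname n
     WHERE s.statistic# = n.statistic#
       AND n.name = 'table fetch continued row';

    SELECT id1000 FROM thousand_col_tab;  -- forces a walk to the last row piece

    SELECT n.name, s.value
      FROM v$mystat s, v$statname n
     WHERE s.statistic# = n.statistic#
       AND n.name = 'table fetch continued row';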
    Jurij Modic
    ASCII a stupid question, get a stupid ANSI
    24 hours in a day .... 24 beer in a case .... coincidence?

  6. #16
    Join Date
    Dec 2002
    Location
    Bangalore ( India )
    Posts
    2,434
    Jurij :

    Originally posted by jmodic
    You inserted your row with only the first column having a value; all the other columns were null. Then you performed 999 updates on that row to fill all the other columns with the same value. This really does result in quite excessive space consumption (and I'm not quite sure how to explain it).
    All along I thought an update would be more expensive than a single insert, but never to this extent.
    Ending up with 700+ blocks through updates, instead of one block through a single insert, is unimaginable...
    How on earth has Oracle designed updates such that they lead to such exaggerated figures?

    Can you please point me to some info/links/books with a detailed explanation?


    Originally posted by jmodic
    But it can get even stranger! Let's do it the same way you did (with one insert and 999 updates), but this time so that the initial insert puts the data into the last column instead of into the first one.
    Code:
    INSERT INTO thousand_col_tab (id1000) VALUES (1);
    
    BEGIN 
      FOR i IN 1..999 LOOP 
      EXECUTE IMMEDIATE ('update thousand_col_tab set id'||i||'=id1000'); 
      END LOOP; 
    END;
    /
    Again, your query against DBA_TABLES will show that this row was stored in a single block, not in 750 blocks as in your case. And all that was different was the column into which the initially inserted value was stored!
    Again, how on earth has Oracle designed updates such that they lead to just one block when the last column is inserted and the rest are updated?


    Can we draw some (generic) conclusions here, like:

    * Inserts are always better than updates, so try avoiding updates as much as possible - expected.

    * If we have to go with updates only (if there is no other way in the application), is it then recommended to DELETE the records and then insert them from the source, provided the number of records to process is small, say less than a million?

    * If updates are unavoidable, can we design our application in such a way that the columns populated at insert time are the last ones?


    Regards
    Abhay.
    funky...

    "I Dont Want To Follow A Path, I would Rather Go Where There Is No Path And Leave A Trail."

    "Ego is the worst thing many have, try to overcome it & you will be the best, if not good, person on this earth"

  7. #17
    Join Date
    Jun 2000
    Location
    Madrid, Spain
    Posts
    7,447
    Well, set a high PCTFREE and you won't end up using 750 blocks.

  8. #18
    Join Date
    Dec 2002
    Location
    Bangalore ( India )
    Posts
    2,434
    Originally posted by pando
    Well, set a high PCTFREE and you won't end up using 750 blocks.
    Even after I set PCTFREE=50, I ended up using 740+ blocks.
    funky...

    "I Dont Want To Follow A Path, I would Rather Go Where There Is No Path And Leave A Trail."

    "Ego is the worst thing many have, try to overcome it & you will be the best, if not good, person on this earth"

  9. #19
    Join Date
    Dec 2000
    Location
    Ljubljana, Slovenia
    Posts
    4,439
    Originally posted by abhaysk
    All along I thought an update would be more expensive than a single insert, but never to this extent.
    Ending up with 700+ blocks through updates, instead of one block through a single insert, is unimaginable...
    How on earth has Oracle designed updates such that they lead to such exaggerated figures?
    Isn't it more than evident that this kind of anomaly happens only on tables with more than 255 columns? You won't be able to observe such behavior on a "normal" table without such a large number of columns...

    Again, how on earth has Oracle designed updates such that they lead to just one block when the last column is inserted and the rest are updated?
    As I said, this is exceptional behavior on tables with a large number of columns; it's not general behavior. As for what is different when the non-null value is inserted into the last column instead of the first: read in the Concepts manual how a null is stored when a column after it contains a non-null value, versus when there is no non-null value in any of the trailing columns.
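
    A minimal sketch of that difference (the table names here are made up, not from the thread): trailing nulls take no storage at all, while a null that precedes a stored value costs one length byte, and the difference shows up in AVG_ROW_LEN after an ANALYZE.
    Code:
    CREATE TABLE null_tail (a NUMBER, b NUMBER, c NUMBER);
    CREATE TABLE null_head (a NUMBER, b NUMBER, c NUMBER);

    INSERT INTO null_tail VALUES (1, NULL, NULL);  -- trailing NULLs: not stored at all
    INSERT INTO null_head VALUES (NULL, NULL, 1);  -- leading NULLs: one length byte each

    ANALYZE TABLE null_tail COMPUTE STATISTICS;
    ANALYZE TABLE null_head COMPUTE STATISTICS;

    SELECT table_name, avg_row_len
      FROM user_tables
     WHERE table_name IN ('NULL_TAIL', 'NULL_HEAD');
    -- NULL_HEAD shows a larger AVG_ROW_LEN than NULL_TAIL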

    Can we draw some (generic) conclusions here, like:

    * Inserts are always better than updates, so try avoiding updates as much as possible - expected.
    This would be a very ridiculous conclusion. The general rule should be: use whatever DML (insert/update/delete) is appropriate and efficient for the task. Use INSERT when you need a new record; use UPDATE when you must change some values of an existing record. No one in their right mind will create a record with an INSERT listing only some of the columns, followed by a bunch of updates on the same record to populate the other columns - if those other column values were known at insert time, why not use them then? And it's not because of this strange behavior you've encountered with the 1000-column table; it's because of "why would you want to do it with N I/O operations when you can do it with a single I/O?"

    * If we have to go with updates only (if there is no other way in the application), is it then recommended to DELETE the records and then insert them from the source, provided the number of records to process is small, say less than a million?
    I'm not quite sure what you mean here; it sounds very confusing to me.

    * If updates are unavoidable, can we design our application in such a way that the columns populated at insert time are the last ones?
    Only if you have a table with 1000 columns...
    Jurij Modic
    ASCII a stupid question, get a stupid ANSI
    24 hours in a day .... 24 beer in a case .... coincidence?

  10. #20
    Join Date
    Dec 2002
    Location
    Bangalore ( India )
    Posts
    2,434
    Originally posted by jmodic
    This would be a very ridiculous conclusion. The general rule should be: use whatever DML (insert/update/delete) is appropriate and efficient for the task. Use INSERT when you need a new record; use UPDATE when you must change some values of an existing record. No one in their right mind will create a record with an INSERT listing only some of the columns, followed by a bunch of updates on the same record to populate the other columns - if those other column values were known at insert time, why not use them then? And it's not because of this strange behavior you've encountered with the 1000-column table; it's because of "why would you want to do it with N I/O operations when you can do it with a single I/O?"
    Jurij :

    Well, it may seem ridiculous, but I meant this for two reasons.

    * When one can insert all values in one go, why insert some columns at time X and then update the rest at time Y, as you mentioned.
    I have seen many posts here where people try to insert some values and update the rest, even when it could be done through a single insert with subqueries or the like.

    * I believe updates will be costlier than inserts, for three reasons:
    1) An extra I/O is required if the data block holding the row to be updated is not already in the buffer cache.
    2) Updated rows are candidates for row chaining/row migration.
    3) Response time for updates will usually be higher than for inserts.

    So, in general, I said inserts score better marks than updates...
    From the application point of view, use whatever DML is efficient for achieving the business requirement.


    I'm not quite sure what you mean here; it sounds very confusing to me.
    Say you have N records in a source table that need to be applied to a destination table, joining on the PK columns.
    Is it then better to DELETE the records in the destination table that are candidates for update, and then do a bulk insert from the source?
    Assume no FKs.
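
    In code terms, the choice I'm asking about is roughly this (table and column names made up for illustration):
    Code:
    -- Option A: update in place (correlated update of the matching rows)
    UPDATE dest d
       SET d.val = (SELECT s.val FROM src s WHERE s.pk = d.pk)
     WHERE EXISTS (SELECT 1 FROM src s WHERE s.pk = d.pk);

    -- Option B: delete the candidate rows, then bulk insert from the source
    DELETE FROM dest d
     WHERE EXISTS (SELECT 1 FROM src s WHERE s.pk = d.pk);

    INSERT INTO dest (pk, val)
    SELECT s.pk, s.val
      FROM src s;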

    Abhay.
    funky...

    "I Dont Want To Follow A Path, I would Rather Go Where There Is No Path And Leave A Trail."

    "Ego is the worst thing many have, try to overcome it & you will be the best, if not good, person on this earth"
