
Thread: INITIAL AND NEXT EXTENTS

  1. #1
    Join Date
    Jan 2002
    Posts
    474
    Hi all,

    I have this question because I am a bit confused. Basically, I understand initial and next extents; however, I am a little bit confused about table sizing.

    How can we determine the size of tables so we can size the initial and next extents on the tablespaces? What can we look at on the table side to determine the initial and next extent sizes for the tablespace?

    thanks

  2. #2
    Join Date
    Nov 2000
    Location
    greenwich.ct.us
    Posts
    9,092
    1. Put some sample data in the table
    2. analyze the table
    3. look at the average row size
    4. multiply by the number of expected rows

    Code:
    SQL> create table xyz (x number(10), y varchar2(20), z date);
    
    Table created.
    SQL> declare
      2     i number;
      3  begin
      4     i:= 0;
      5     while ( i < 1000 ) loop
      6        i:= i+1;
      7        insert into xyz values (i, to_char(i), sysdate);
      8     end loop;
      9  end;
     10  /
    
    PL/SQL procedure successfully completed.
    
    SQL> commit;
    
    Commit complete.
    SQL> analyze table xyz compute statistics;
    
    Table analyzed.
    SQL> select avg_row_len from user_tables where table_name = 'XYZ';
    
    AVG_ROW_LEN
    -----------
             19
    If you expect this table to have 1,000,000 rows, it would be approximately 19,000,000 bytes.
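
    You can also let the dictionary do the arithmetic for you (a minimal sketch; 1000000 simply stands in for your expected row count):

    Code:
    SQL> select avg_row_len * 1000000 as est_bytes
      2  from user_tables
      3  where table_name = 'XYZ';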
    Jeff Hunter

  3. #3
    Join Date
    Feb 2000
    Location
    Washington DC
    Posts
    1,843
    Now that you have the average row size from Jeff's response, forecast the future growth of the table in terms of the number of records and see how much space the table will need for the next 6 months or a year. Once you have the total number of bytes required for the table, pick the extent size that best fits the table's needs.
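
    For instance (a worked sketch; the growth factor here is invented purely for illustration):

    Code:
    today      : 19 bytes/row * 1,000,000 rows = 19,000,000 bytes (~19 MB)
    in one year: ~19 MB * 2 (assumed growth)   = ~38 MB
    with 1 MB extents: at most ~38 extents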
    Reddy,Sam

  4. #4
    Join Date
    Jan 2002
    Posts
    474
    Originally posted by marist89
    1. Put some sample data in the table
    2. analyze the table
    3. look at the average row size
    4. multiply * number of expected rows

    If you expect this table to have 1,000,000 rows, it would be approximately 19,000,000 bytes.
    Jeff,

    thanks for the response.

    I have two questions:
    1. How does Oracle calculate AVG_ROW_LEN? What is it based on?

    2. Based on your example where AVG_ROW_LEN = 19, what do you mean by "if you expect this table to have 1,000,000 records"? Do you mean 1,000,000 rows to be inserted into this table?

    Thanks

  5. #5
    Join Date
    Nov 2000
    Location
    greenwich.ct.us
    Posts
    9,092
    Originally posted by ashley75

    Jeff,

    thanks for the response.

    I have two questions:
    1. How does Oracle calculate AVG_ROW_LEN? What is it based on?

    2. Based on your example where AVG_ROW_LEN = 19, what do you mean by "if you expect this table to have 1,000,000 records"? Do you mean 1,000,000 rows to be inserted into this table?

    Thanks
    1. When you generate statistics, Oracle calculates the size of a row's actual data rather than its potential data (the declared column sizes). Exactly how it does it, I don't know.

    2. Yes, I meant if the table will have 1,000,000 rows, your size would be approximately 19,000,000 bytes.
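
    A rough way to see what "actual data" means (a minimal sketch against the xyz table from above; VSIZE returns the number of bytes a column value actually occupies, and the real AVG_ROW_LEN also includes a few bytes of per-row overhead):

    Code:
    SQL> select round(avg( nvl(vsize(x),0)
      2                  + nvl(vsize(y),0)
      3                  + nvl(vsize(z),0) )) as avg_data_bytes
      4  from xyz;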
    Jeff Hunter

  6. #6
    Join Date
    Jan 2002
    Posts
    474
    thanks for your response and sorry to keep bothering you.

    Let's say we can estimate how big the table will grow over a certain period; the thing is, we have to create the tablespace now so the table can use it to grow. The question is: how do you size the INITIAL and NEXT extents efficiently for our database? Based on what?

    Thanks Jeff




  7. #7
    Join Date
    Nov 2000
    Location
    greenwich.ct.us
    Posts
    9,092
    You don't want the number of extents to be > 1000, for performance reasons (either LMT or DMT). Pick a reasonable number of extents to start out with, say one extent per data file.

    If you initially have a table that is 4M that you know will eventually be 20M, start out with INITIAL 1M NEXT 1M PCTINCREASE 0. You shouldn't grow beyond 20 extents with your data, but if you do, it's no big deal.
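
    In DDL that would look something like this (a minimal sketch; the table, column, and tablespace names are made up):

    Code:
    SQL> create table big_tab (
      2     id   number(10),
      3     txt  varchar2(20)
      4  )
      5  tablespace app_data
      6  storage (initial 1M next 1M pctincrease 0);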

    See http://otn.oracle.com/docs/products/...block.htm#5476 for details.
    Jeff Hunter

  8. #8
    Join Date
    Aug 2002
    Location
    Bangalore, India
    Posts
    405
    Originally posted by marist89
    You don't want the number of extents to be > 1000, for performance reasons (either LMT or DMT). Pick a reasonable number of extents to start out with, say one extent per data file.

    If you initially have a table that is 4M that you know will eventually be 20M, start out with INITIAL 1M NEXT 1M PCTINCREASE 0. You shouldn't grow beyond 20 extents with your data, but if you do, it's no big deal.
    It's no big deal to have a large number of extents as long as your queries use indexes. It DOES matter when the table goes for an FTS (full table scan). During an FTS, Oracle has to read the next block, which may not be contiguous when there are many small extents, whereas the blocks are more likely to be contiguous with a few big extents.

    -nagarjuna

  9. #9
    Join Date
    Sep 2001
    Location
    NJ, USA
    Posts
    1,287
    Originally posted by nagarjuna
    It's no big deal to have a large number of extents as long as your queries use indexes. It DOES matter when the table goes for an FTS (full table scan). During an FTS, Oracle has to read the next block, which may not be contiguous when there are many small extents, whereas the blocks are more likely to be contiguous with a few big extents.
    -nagarjuna
    About FTS and the number of extents:
    Does anybody remember the problems with the FAT file system in DOS? If a file is split into many little fragments, is that good or bad for file-access performance? I don't think so.
    A disk I/O operation involves several steps:
    position the heads,
    find the sector,
    read or write the data.
    If a table has many extents, and those extents sit on different tracks of the HDD, we increase the number of mechanical head movements, and those movements are dramatically slower than an ordinary data transfer from the HDD. This is one of the problems with a large number of extents. It doesn't matter whether we use RAID or an HDD cache; it degrades performance either way.

  10. #10
    Join Date
    Dec 2000
    Location
    Ljubljana, Slovenia
    Posts
    4,439
    Originally posted by nagarjuna
    It's no big deal to have a large number of extents as long as your queries use indexes. It DOES matter when the table goes for an FTS (full table scan). During an FTS, Oracle has to read the next block, which may not be contiguous when there are many small extents, whereas the blocks are more likely to be contiguous with a few big extents.
    It DOES NOT matter even for an FTS. This is one of those old myths that simply won't die, no matter how many times it has been proved wrong. The number of extents does not have any practical impact on FTS performance as long as your extents are sized properly (in accordance with the block size and db_file_multiblock_read_count). For further reading, check an old, but still excellent and totally valid whitepaper by Cary V. Millsap, "Oracle7 Server Space Management" (http://www.orapub.com/papers2//doc104.pdf), particularly the chapter 7.4.4 Drive Head Motion.

    For those of you who don't want to go through the whole paper, here is a quote from chapter 7.4.4 "Drive Head Motion" that addresses this myth:

    "7.4.4 Drive Head Motion
    Another popular argument against having multiple extents is that a disk drive head executing a full-table scan naturally tends to “sweep” over a disk in its task of reading data. The argument goes, if the head reaches the end of an extent, not only does the Oracle Server software have a job to do (calculate the next extent’s address), the drive head also has to pack up and physically move to a different location from the one at which it happens to be, a wasted mechanical motion that “obviously could have been prevented.” This argument is frequently validated in tests by authors who measure full-scan performance of single- and multi-extent tables.

    Unfortunately for the applicability of the experimental results in OLTP environments, almost all such tests are performed in a carefully controlled environment on a single-user computing system. To understand the profound influence of this assumption on the argument, let’s review the mechanics of a full-table scan (we’ll hit the high spots).
    1. A server process will read a batch of blocks from a database file, and those blocks are pinned into the database buffer cache of the SGA.
    2. The server process sequentially reads information from the blocks in memory, processes that information, returns rows, and unpins the blocks in the SGA.
    Step 2 requires non-zero time, and especially if db_file_multiblock_read_count is large, finishing step 2 gives plenty of time for a competing process on the system to have queued an I/O request of the disk drive that our experimenters assume to be patiently awaiting our next I/O request. Fact is, our disk drive head is just as likely to be close to the first block of a new extent as it is to be close to the next block in our current extent. Experiments on heavily-loaded, operational OLTP computing systems show there to be no measurable difference between full-table scans on large tables with one extent and full-table scans on large tables with many extents."


    See also http://asktom.oracle.com/pls/ask/f?p...0_P8_DISPLAYID,F4950_P8_CRITERIA:730289259844,%7Bnumber%7D%20and%20%7Bof%7D%20and%20%7Bextents%7D

    Originally posted by Shestakov
    If a table has many extents, and those extents sit on different tracks of the HDD, we increase the number of mechanical head movements, and those movements are dramatically slower than an ordinary data transfer from the HDD. This is one of the problems with a large number of extents. It doesn't matter whether we use RAID or an HDD cache; it degrades performance either way.
    See the above quote (or better yet, the quoted document). Besides, even if you have the whole table in one single extent, this doesn't necessarily mean that the data of that table is stored in contiguous space on disk! See http://asktom.oracle.com/pls/ask/f?p...0_P8_DISPLAYID,F4950_P8_CRITERIA:1056598515561,%7Bnumber%7D%20and%20%7Bof%7D%20and%20%7Bextents%7D

    So once again: the number of extents is irrelevant to query performance. The real reason we don't want segments with an extremely large number of extents is the manageability of such segments, and the possible negative performance impact on some DDL operations (DROP, TRUNCATE).
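
    If you want to check how many extents your segments actually have, the standard DBA_EXTENTS dictionary view will tell you (a minimal sketch; the owner name is made up):

    Code:
    SQL> select segment_name, count(*) as extents, sum(bytes) as total_bytes
      2  from dba_extents
      3  where owner = 'SCOTT'
      4  group by segment_name
      5  order by 2 desc;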



    [Edited by jmodic on 08-31-2002 at 10:37 AM]
    Jurij Modic
    ASCII a stupid question, get a stupid ANSI
    24 hours in a day .... 24 beer in a case .... coincidence?
