Reorganization of Table
DBAsupport.com Forums - Powered by vBulletin

Thread: Reorganization of Table

  1. #1
    Join Date
    May 2001
    Location
    Delhi
    Posts
    340
    Hi,

    It is said that many extents in a segment don't degrade performance. Then for what reasons is reorganization done? Just to reclaim the wasted space?

    I have a table of size 17 GB, with 41973583 Rows, 247 extents.

    Earlier its storage parameters were set as INITIAL 16 KB, NEXT 16 KB, PCTINCREASE 0. SAP calculates the next extent, and its current NEXT extent is 1310720 KB.

    Monthly growth of the table is approximately 1400000 KB. Generally only inserts happen in this table, therefore I don't expect much wasted space.

    Shall I reorganize this table with INITIAL 1310720 KB and NEXT 1310720 KB? That way its extent count will be reduced, but what other gains will I have?

    Since I am running on 8i, there is no limit on the number of extents.

    Do I need to reorganize this table?
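    Before deciding, it may help to confirm the segment's actual state from the dictionary. A minimal sketch, assuming access to the DBA_* views; the owner and table names are placeholders:

    ```sql
    -- Check the real extent count, allocated space and next-extent size.
    -- 'SAPR3' and 'YOUR_BIG_TABLE' are hypothetical; substitute your own names.
    SELECT segment_name,
           extents,
           bytes / 1024 / 1024  AS allocated_mb,
           next_extent / 1024   AS next_extent_kb
    FROM   dba_segments
    WHERE  owner        = 'SAPR3'
    AND    segment_name = 'YOUR_BIG_TABLE';
    ```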

    Vijay
    --------------------------
    The Time has come ....

  2. #2
    Join Date
    May 2000
    Location
    ATLANTA, GA, USA
    Posts
    3,135
    Definitely, you need to reorg this big table (17 GB).
    A large number of extents on an object degrades overall performance.
    I have conducted various tests on extent management.
    Oracle performs better whenever the number of extents is very small, because it requires fewer latches on the object.

    If you want, do the testing as given below:

    Step 1:
    -- create a table that stores statistics
    --
    create table stat_info
    ( runid varchar2(15), name varchar2(64) , value number )
    tablespace users
    storage (initial 2m next 2m pctincrease 0);

    Step 2:

    -- create a view on the V$ views
    create or replace view stat_info_vw as
    select 'STAT....'||a.name "NAME",
    b.value "VALUE"
    from v$statname a , v$mystat b
    where a.statistic# = b.statistic#
    union
    select 'LATCH...'||c.name "NAME",
    c.gets "VALUE"
    from v$latch c
    ;

    Step 3:

    -- create a test table and populate some data
    create table test3
    (id number, name varchar2(30), state varchar2(2))
    tablespace users
    storage (initial 10m next 10m pctincrease 0);

    begin
    for K in 1..2000000 Loop
    insert into test3 values (K, 'Testing','NY');
    end loop;
    commit;
    end;
    /

    Step 4:

    set timing on

    create table test1
    tablespace users
    storage (initial 16k next 16k pctincrease 0)
    as select * from test3
    ;

    create table test2
    tablespace users
    storage (initial 10m next 10m pctincrease 0 )
    as select * from test3
    ;

    Step 5:
    -- run pl/sql block
    set serveroutput on size 50000
    declare
    l_time_1 number ;
    l_time_2 number ;
    l_time_3 number ;
    begin
    execute immediate 'truncate table stat_info' ;

    insert into stat_info
    select 'Before', a.* from stat_info_vw a ;
    l_time_1 := dbms_utility.get_time ;

    execute immediate 'truncate table test1' ;

    insert into stat_info
    select 'Run 1', a.* from stat_info_vw a ;
    l_time_2 := dbms_utility.get_time ;

    execute immediate 'truncate table test2' ;

    insert into stat_info
    select 'Run 2', a.* from stat_info_vw a ;
    l_time_3 := dbms_utility.get_time ;

    commit;
    dbms_output.put_line('Time to complete for Run 1 '||to_char(l_time_2 - l_time_1));
    dbms_output.put_line('Time to complete for Run 2 '||to_char(l_time_3 - l_time_2));
    end;
    /

    Step 6:
    -- run a report
    column name format a55
    select a.name, a.value "Before",
    b.value "Run 1 Value",
    c.value "Run 2 Value",
    (b.value - a.value) "Run 1",
    (c.value - b.value) "Run 2",
    ((c.value - b.value) - (b.value - a.value)) diff
    from stat_info a , stat_info b, stat_info c
    where a.name = b.name and
    b.name = c.name and
    a.runid = 'Before' and
    b.runid = 'Run 1' and
    c.runid = 'Run 2' and
    ((c.value - b.value) - (b.value - a.value)) <> 0
    order by ((c.value - b.value) - (b.value - a.value))
    ;

    From the report, you should be in a position to understand why an object should have few extents. In the above example I used the TRUNCATE command; you can change it to any other command you like, e.g. SELECT COUNT(*) FROM tablename;

    The number of latches required for truncating a table that has few extents is far lower than for a table that has many extents.



  3. #3
    Join Date
    Jul 2000
    Posts
    521
    The way you access data (SELECTs) in this table should be considered before doing a reorg on it. If it is an indexed search using a highly selective index, you can live with 247 extents. And now that the NEXT is set to 1 GB+, it will not grow as rapidly in the future as it did in the past.

    If you want to be more disciplined about your objects and storage, though, I'd say reorg it.
    svk

  4. #4
    Join Date
    Dec 2000
    Location
    Ljubljana, Slovenia
    Posts
    4,439
    Very impressive test, but only for those who don't have a clue what the test is trying to demonstrate.

    The most amusing part is at the end:
    Originally posted by tamilselvan
    From the report, you should be in a position to understand why an object requires few extents. In the above example, I used "truncate" command. You can change it to any
    other command as you like it. Ex: SELECT COUNT(*) FROM tablename;

    The number of latches required for truncating a table that has few extents are very less than that of a table that has many extents.
    Would someone be so kind as to explain what kind of latches are required for performing "SELECT COUNT(*) FROM tablename;"? What do latches have to do with the above statement? And how can the number of extents affect the performance of selects?

    This test is a typical example of why some of the old myths about Oracle simply refuse to disappear into the junkyard of history. Someone performs a test that returns perfectly legitimate results, but interprets those results totally wrong - voila, "here is proof that many extents are bad for performance". This topic has been beaten to death by many, many experts; the internet is flooded with whitepapers that offer theoretical and empirical proof that the number of extents is irrelevant where performance is concerned.

    It is self-evident that truncating/dropping a table with many extents will take significantly more time than truncating/dropping a table with few extents. That is not only because of latches; it is also because of the other activities required to maintain the data dictionary for that operation. But do you measure "performance" by such exotic actions as dropping or truncating large tables?
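    To put a number on that dictionary maintenance: in a dictionary-managed tablespace, each extent is one entry Oracle has to remove from SYS.UET$ (and return to SYS.FET$) on drop or truncate, so the work is roughly proportional to the extent count. A sketch to see that count, assuming DBA-view access; the segment name is illustrative:

    ```sql
    -- Number of dictionary extent entries a DROP/TRUNCATE of this segment touches.
    SELECT COUNT(*) AS extent_entries
    FROM   dba_extents
    WHERE  owner        = USER
    AND    segment_name = 'TEST1';   -- hypothetical segment
    ```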

    And about the advice that "a 17 GB table should definitely be reorged because it has reached 247 extents" - my advice to vijay would be: "Do you have any particular reasons for reorganizing this table? What are they, very specifically?" If there are none, then why would you waste so much time and so many resources on it?

    I particularly liked the following question in vijay's post:
    "Then for what reasons is reorganization done?" I'd say in 95% of cases it is done only because DBAs must do something to show their bosses to justify their job positions and their salaries. In 99% of those 95% of cases, unfortunately, those same DBAs don't have a clue that they are just wasting time and computer resources. They actually think that those reorgs are the prime reason their databases are in such excellent shape.
    Jurij Modic
    ASCII a stupid question, get a stupid ANSI
    24 hours in a day .... 24 beer in a case .... coincidence?

  5. #5
    Join Date
    Aug 2002
    Location
    Atlanta
    Posts
    1,187
    OK, a lot of extents may impact DDL operations, but really, how often do you issue DROP TABLE in production? However, if you are getting paid by the hour, go nuts.
    I'm stmontgo and I approve of this message

  6. #6
    Join Date
    May 2000
    Location
    ATLANTA, GA, USA
    Posts
    3,135
    Run 1 is the collection of statistics for table test1, which has 1500 extents, and Run 2 for table test2, which has 12 extents.
    Both tables have the same number of rows.
    The test was conducted after I changed my code from TRUNCATE to SELECT.
    Those who want to know which latches are acquired during the SELECT can go through the report.

    The listing is:

    set serveroutput on size 50000
    declare
    l_time_1 number ;
    l_time_2 number ;
    l_time_3 number ;
    l_cnt Number ;
    begin
    execute immediate 'truncate table stat_info' ;
    insert into stat_info
    select 'Before', a.* from stat_info_vw a ;
    l_time_1 := dbms_utility.get_time ;
    execute immediate 'select count(*) from test1' into l_cnt ;
    insert into stat_info select 'Run 1', a.* from stat_info_vw a ;
    l_time_2 := dbms_utility.get_time ;
    execute immediate 'select count(*) from test2' into l_cnt ;
    insert into stat_info select 'Run 2', a.* from stat_info_vw a ;
    l_time_3 := dbms_utility.get_time ;
    commit;
    dbms_output.put_line('Time to complete for Run 1 '||to_char(l_time_2 - l_time_1));
    dbms_output.put_line('Time to complete for Run 2 '||to_char(l_time_3 - l_time_2));
    end;
    /

    NAME                                          Run 1      Run 2       DIFF
    ----------------------------------------- ---------- ---------- ----------
    STAT....prefetched blocks                      3553       6666       3113
    LATCH...redo writing                             12         13          1
    LATCH...list of block allocation                  1          0         -1
    LATCH...transaction allocation                    1          0         -1
    STAT....free buffer requested                  7114       7113         -1
    STAT....enqueue releases                          3          2         -1
    LATCH...shared pool                             145        143         -2
    STAT....enqueue requests                          4          2         -2
    LATCH...checkpoint queue latch                  221        218         -3
    LATCH...enqueue hash chains                       7          4         -3
    LATCH...library cache                           294        291         -3
    LATCH...redo allocation                          20         17         -3
    STAT....redo entries                             17         13         -4
    LATCH...enqueues                                 14          9         -5
    STAT....db block changes                         28         22         -6
    STAT....recursive cpu usage                     498        483        -15
    STAT....consistent gets                        7178       7128        -50
    STAT....calls to get snapshot scn: kcmgss        62         11        -51
    LATCH...undo global data                         58          6        -52
    STAT....db block gets                           497         32       -465
    STAT....session logical reads                  7675       7160       -515
    STAT....redo size                             21924      21192       -732
    LATCH...cache buffers chains                  22507      21467      -1040
    LATCH...multiblock read object                 7106        890      -6216

    25 rows selected.

    I am not talking about disk I/O performance and throughput. When a table has many extents, Oracle has to acquire many more latches, as you can see from the above listing.
    Keep in mind that a latch is a lock, a lock is a serialization device, serialization means less concurrency, and less concurrency means not scalable.

    A lot of people do not do any testing; they simply follow what the web says.
    To the young aspiring DBAs: do listen to what others say, but do not accept and agree with the points unless you do the test yourself.



    [Edited by tamilselvan on 09-19-2002 at 03:56 PM]

  7. #7
    Join Date
    Nov 2000
    Location
    Baltimore, MD USA
    Posts
    1,339
    Okay, if I'm reading this right, we have 2 tables, one with 125 *times* as many extents as the other. In percentage terms, that's 124,000% larger.

    Now, this 124,000% difference nets us the following percentages:

    Code:
    STAT....prefetched blocks		 -46%
    LATCH...redo writing		   8%
    LATCH...list of block allocation	  Undefined - went from 0 to 1
    LATCH...transaction allocation	  Undefined - went from 0 to 1
    STAT....free buffer requested	   0.01%
    STAT....enqueue releases		  50% (1 more latch)
    LATCH...shared pool		   1%
    STAT....enqueue requests		 100% (2 more latches)
    LATCH...checkpoint queue latch	   1%
    LATCH...enqueue hash chains		  75% (3 more latches)
    LATCH...library cache 		   1%
    LATCH...redo allocation 		  17% (3 more latches)
    STAT....redo entries		  30% (4 more entries)
    LATCH...enqueues			  55% (5 more latches)
    STAT....db block changes		  27% (6 more blocks)
    STAT....recursive cpu usage		   3% 
    STAT....consistent gets		   0.7%
    STAT.calls to get snapshot scn:	 463% 
    LATCH...undo global data 		 866%
    STAT....db block gets 		1453%
    STAT....session logical reads 	   7%
    STAT....redo size			   3%
    LATCH...cache buffers chains 	   5%
    LATCH...multiblock read object 	 698%
    So, the worst percentage change above is 1453%. This means that when the extents changed by 124,000%, our worst stat changed by 1453% - that reduces to meaning that for every 100% change in your extents, you can expect a 1% change in this stat.

    I'm going with jmodic on this one (as usual) - it doesn't appear to be a terribly worthwhile concern.

    - Chris

    [Edited by chrisrlong on 09-19-2002 at 04:33 PM]
    Christopher R. Long
    ChrisRLong@HotMail.Com
    But that's just my opinion. I could be wrong

  8. #8
    Join Date
    May 2000
    Location
    ATLANTA, GA, USA
    Posts
    3,135
    ===============================================
    Chris,
    Okay, if I'm reading this right, we have 2 tables, one with 125 *times* as many extents as the other. In percentage terms, that's 124,000% larger.
    ===============================================
    Before you write something, check it one more time.

    You need to do your math correctly.
    It is not 124,000%. It is 12,400%.
    Can you check all the other percentages?
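    The disputed figure can be reproduced from the posted numbers (1500 vs. 12 extents; statistic values taken from the report earlier in the thread). A quick sanity-check sketch, needing nothing beyond DUAL:

    ```sql
    -- Relative increases, with Run 2 (12 extents) as the base.
    SELECT ROUND(100 * (1500 - 12) / 12)   AS extents_pct,          -- 12400
           ROUND(100 * (497 - 32) / 32)    AS db_block_gets_pct,    -- 1453
           ROUND(100 * (7106 - 890) / 890) AS multiblock_latch_pct  -- 698
    FROM   dual;
    ```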



  9. #9
    Join Date
    Dec 2000
    Location
    Ljubljana, Slovenia
    Posts
    4,439
    Tamilselvan,

    We were talking about performance, yet you cut the dbms_output of the timing results of both runs. Any particular reason? Maybe the fact that the timing shows no difference between the two? (And if you are claiming that run 1 was noticeably slower than run 2 - or vice versa - then you have a problem elsewhere, not directly in the number of extents!)

    Not that those timings really matter, though. You can always say: "That was a test in a lab environment with few (if any) concurrent sessions running. It is the latches that really matter, because latches cause waits (serialization), and that means unscalable, and that means bad performance." Guess what - IMHO those latch-get statistics are meaningless! Why? Because measuring latch *gets* doesn't show anything about contention! None. Zero. If you were measuring misses or sleeps or spin gets, that might tell us something. But of course, in a single-user environment you will hardly find any of those. So your test proves absolutely nothing about latch contention. To prove it, you would have to run your tests on a very busy system, measure the impact of these queries on wait events, and then come up with the numbers of sleeps and misses for those latches. I'm sure you'd find those latch gets don't have any impact on scalability/performance.
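    The contention-related columns being referred to all live in V$LATCH alongside GETS. A sketch of what to sample on a busy system (the two latch names are taken from the report earlier in the thread):

    ```sql
    -- Gets alone say nothing about contention; misses, sleeps and spin gets do.
    SELECT name, gets, misses, sleeps, spin_gets
    FROM   v$latch
    WHERE  name IN ('cache buffers chains', 'multiblock read object')
    ORDER  BY sleeps DESC;
    ```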

    I'm repeating my baseline: you are interpreting the results of your tests wrong, and from there it is a very short way to totally wrong conclusions...

    I understand you are not too fond of the whitepapers and books in which world-recognized experts (Adams, Kyte, Millsap, Lewis, ...) explain (with facts) the impact of multiple extents on performance, but I really recommend you read the following article from Steve Adams: http://www.ixora.com.au/tips/creation/extents.htm

    It is a very unbiased article; he doesn't say that an extremely high number of extents is a good thing, but he also explains exactly what is not good about it: it is only the maintenance work that has to be done in the dictionary because of it. No impact on queries! None. Zero.

    P.S. If I'm not mistaken, the original question was about rebuilding a 250-ish, not a 1500-ish, extent table. So if you bother to perform further tests on a busy system, try something in that range. It is well known from the literature that about 1,000 extents is the point where you should really start worrying about it. Again, not because of query performance, but because of management issues. Anyway, performing tests with thousands of extents serves no particular purpose.
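    Worth noting that 8i already offers a way to sidestep the dictionary-maintenance overhead entirely: a locally managed tablespace tracks extents in a bitmap inside the datafile rather than in SYS.UET$/SYS.FET$. A sketch, with the file name and sizes purely illustrative:

    ```sql
    -- Uniform local extents: extent allocation updates a bitmap in the file,
    -- not the data dictionary tables.
    CREATE TABLESPACE users_lmt
      DATAFILE '/u01/oradata/users_lmt01.dbf' SIZE 2000M
      EXTENT MANAGEMENT LOCAL UNIFORM SIZE 10M;
    ```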

    [Edited by jmodic on 09-19-2002 at 07:44 PM]
    Jurij Modic
    ASCII a stupid question, get a stupid ANSI
    24 hours in a day .... 24 beer in a case .... coincidence?

  10. #10
    Join Date
    May 2000
    Location
    ATLANTA, GA, USA
    Posts
    3,135
    ==========================================
    JMODIC
    We were talking about performance, yet you cut the dbms_output of the timing results of both runs. Any particular reason? Maybe the fact that the timing shows no difference between the two? (And if you are claiming that run 1 was noticeably slower than run 2 - or vice versa - then you have a problem elsewhere, not directly in the number of extents!)
    ========================================
    SQL>
    set timing on
    set autotrace on statistics

    set serverout on size 50000

    drop table test2 ;

    drop table test1;

    alter tablespace users coalesce ;


    create table test1
    tablespace users storage (initial 128k next 128k pctincrease 0)
    as select * from tamil_test
    ;
    create table test2
    tablespace users storage (initial 5m next 5m pctincrease 0 )
    as select * from tamil_test
    ;


    Table dropped.

    Elapsed: 00:00:00.19

    Table dropped.

    Elapsed: 00:00:06.40

    Tablespace altered.

    Elapsed: 00:00:06.04

    Table created.

    Elapsed: 00:00:35.78 --- for table test1 that has 356 extents

    Table created.

    Elapsed: 00:00:28.28 --- for table test2 that has 12 extents
    SQL>
    set serveroutput on size 50000
    declare
    l_time_1 number ;
    l_time_2 number ;
    l_time_3 number ;
    l_cnt Number ;
    begin

    execute immediate 'truncate table stat_info' ;

    insert into stat_info select 'Before', a.* from stat_info_vw a ;
    l_time_1 := dbms_utility.get_time ;

    execute immediate 'truncate table test1' ;

    insert into stat_info select 'Run 1', a.* from stat_info_vw a ;
    l_time_2 := dbms_utility.get_time ;

    execute immediate 'truncate table test2' ;

    insert into stat_info select 'Run 2', a.* from stat_info_vw a ;
    l_time_3 := dbms_utility.get_time ;

    commit;
    dbms_output.put_line('Time to complete for Run 1 '||to_char(l_time_2 - l_time_1));
    dbms_output.put_line('Time to complete for Run 2 '||to_char(l_time_3 - l_time_2));
    end;
    /
    SQL> @rec_call
    Time to complete for Run 1 324
    Time to complete for Run 2 60

    PL/SQL procedure successfully completed.

    Elapsed: 00:00:04.19
    SQL>

    Table test1 was created in 35.78 sec, and test2 in 28.28 sec.
    From the above test, truncating table test2 took 60/100 sec, while truncating table test1 took 324/100 sec.

    FRAGMENTATION:
    There are 4 types of fragmentation that cause slow performance in any system:
    1. Row-level fragmentation
    2. Block-level fragmentation
    3. Segment-level fragmentation
    4. Tablespace-level fragmentation

    Having a large number of extents slows down performance when a segment is accessed via full table scans (or full index scans), because the extent map is stored in the data dictionary. One can easily see the number of recursive calls made during full segment scans.
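    One low-tech way to check that last claim yourself is to snapshot the session's recursive-call statistic immediately before and after a full scan; the difference is the recursive work done on your behalf. A sketch against the standard V$ views:

    ```sql
    -- Run before and after the full table scan; the delta is the recursion cost.
    SELECT b.value AS recursive_calls
    FROM   v$statname a,
           v$mystat   b
    WHERE  a.statistic# = b.statistic#
    AND    a.name = 'recursive calls';
    ```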

