DBAsupport.com Forums

Thread: ORA-01793: maximum number of index columns is 32

  1. #11
    Join Date
    Dec 2005
    Posts
    195
    Dave, it makes more sense to me now. Let me list the possible solutions to resolve this problem, just so I have a better understanding.

    Please let me know if I am wrong.

    1. Normalize the table and reduce the number of columns in the
    GROUP BY clause.

    2. Create a tablespace with the largest possible block size for this table (see the sketch after this list).

    3. Use the USING NO INDEX clause.
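
    For option 2, a rough sketch of what creating a larger-block-size tablespace might look like (the 32K block size, cache size, tablespace name and datafile path are placeholders, not taken from this thread):

    Code:
    -- a buffer cache for the non-default block size must exist first
    ALTER SYSTEM SET db_32k_cache_size = 64M;
    
    -- placeholder name, datafile path and sizes
    CREATE TABLESPACE ts_32k
    DATAFILE '/u01/oradata/ts_32k_01.dbf' SIZE 1G
    BLOCKSIZE 32K;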

  2. #12
    Join Date
    Aug 2002
    Location
    Colorado Springs
    Posts
    5,253
    If there is benefit to your users in including the additional columns then go ahead with them, and use the USING NO INDEX clause to allow you to define your own index using a permitted number of columns.

    The GROUP BY limitation is a different matter, though. If you have to use a larger block size to include those additional columns then that might be more hassle than I would be prepared to suffer.

    The two restrictions are independent of each other.
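
    As a rough illustration of that approach (the MV name, query and index columns below are hypothetical, not from this thread), the MV can be created without its default index and then indexed manually on a subset of the GROUP BY columns:

    Code:
    -- USING NO INDEX suppresses the default index Oracle would otherwise
    -- build on the GROUP BY columns
    CREATE MATERIALIZED VIEW sales_summary_mv
    USING NO INDEX
    AS
    SELECT col1, col2, col3, SUM(amount) total_amount
    FROM   sales_detail
    GROUP BY col1, col2, col3;
    
    -- then index only the columns you actually need (32 at most)
    CREATE INDEX sales_summary_mv_ix ON sales_summary_mv (col1, col2, col3);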
    David Aldridge,
    "The Oracle Sponge"

    Senior Manager, Business Intelligence Development
    XM Satellite Radio
    Washington, DC

    Oracle ACE

  3. #13
    Join Date
    Sep 2005
    Posts
    278
    Good to know about the USING NO INDEX clause.

    I did a small test, which is not so relevant now after all the posts, but it may be helpful to others in the future.

    The block size of the database is 8K.

    Code:
    create table test_grp(
    col1	char(2000),
    col2	char(2000),
    col3	char(2000),
    col4	char(2000),
    col5	char(2000),
    COLNUM	NUMBER
    );
    
    Insert into test_grp values('a','b','c','d','e',1);
    
      1  SELECT COL1, COL2, COL3, col4, col5, SUM(COLNUM)
      2  FROM TEST_GRP
      3* GROUP BY COL1, COL2, COL3, col4 ,col5
    SQL> /
    FROM TEST_GRP
         *
    ERROR at line 2:
    ORA-01467: sort key too long
    
      1  SELECT COL1, COL2, COL3, col4, SUM(COLNUM)
      2  FROM TEST_GRP
      3* GROUP BY COL1, COL2, COL3, col4
    
    It works: with an 8K block, five CHAR(2000) columns make the sort key roughly 10000 bytes, which cannot fit, while four columns come to roughly 8000 bytes, which just fits.
    Now I test using VARCHAR2 columns.

    Code:
    create table test_grpV2(
    col1	VARchar2(2000),
    col2	VARchar2(2000),
    col3	VARchar2(2000),
    col4	VARchar2(2000),
    col5	VARchar2(2000),
    col6	VARchar2(2000),
    col7	VARchar2(2000),
    col8	VARchar2(2000),
    col9	VARchar2(2000),
    col10	VARchar2(2000),
    col11	VARchar2(2000),
    col12	VARchar2(2000),
    col13	VARchar2(2000),
    col14	VARchar2(2000),
    col15	VARchar2(2000),
    col16	VARchar2(2000),
    col17	VARchar2(2000),
    col18	VARchar2(2000),
    col19	VARchar2(2000),
    col20	VARchar2(2000),
    col21	VARchar2(2000),
    col22	VARchar2(2000),
    col23	VARchar2(2000),
    col24	VARchar2(2000),
    col25	VARchar2(2000),
    col26	VARchar2(2000),
    col27	VARchar2(2000),
    col28	VARchar2(2000),
    col29	VARchar2(2000),
    col30	VARchar2(2000),
    col31	VARchar2(2000),
    col32	VARchar2(2000),
    col33	VARchar2(2000),
    col34	VARchar2(2000),
    col35	VARchar2(2000),
    col36	VARchar2(2000),
    col37	VARchar2(2000),
    col38	VARchar2(2000),
    col39	VARchar2(2000),
    col40	VARchar2(2000),
    COLNUM	NUMBER
    );
    
    
    INSERT INTO TEST_GRPV2
    VALUES('A','B','C','D','E','F','G','H','I','J','K','L',
    'M','N','O','P','Q','R','S','T',
    'A','B','C','D','E','F','G','H','I','J','K','L',
    'M','N','O','P','Q','R','S','T',1);
    
    SELECT
    col1, col2, col3, col4, col5, col6, col7,
    col8, col9, col10, col11, col12, col13, col14, col15,
    col16, col17, col18, col19, col20,
    col21, col22, col23, col24, col25, col26, col27,
    col28, col29, col30,
    col31, col32, col33, col34, col35, col36,
    col37, col38, col39, col40,
    SUM(colnum)
    FROM test_grpv2
    GROUP BY
    col1, col2, col3, col4, col5, col6, col7,
    col8, col9, col10, col11, col12, col13, col14, col15,
    col16, col17, col18, col19, col20,
    col21, col22, col23, col24, col25, col26, col27,
    col28, col29, col30,
    col31, col32, col33, col34, col35, col36,
    col37, col38, col39, col40
    /
    It works fine.

    My point here is that it is not just about the number of columns in the GROUP BY clause or in the materialized view: Oracle creates a unique index on the GROUP BY columns, and that index has a restriction on its total key length tied to the block size.
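
    For reference, the index that Oracle builds automatically on an aggregate MV (when USING NO INDEX is not specified) can be inspected from the data dictionary; the MV name below is just a placeholder:

    Code:
    SELECT index_name, column_name, column_position
    FROM   user_ind_columns
    WHERE  table_name = 'SALES_SUMMARY_MV'
    ORDER  BY index_name, column_position;
    
    -- for function-based index columns, see USER_IND_EXPRESSIONS as well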

  4. #14
    Join Date
    Sep 2005
    Posts
    278
    The advantage of normalizing:

    Code:
    CREATE TABLE HR_Financial_Hierarchy (
    ID     NUMBER,   -- HR_BASELINE primary key, if any such column exists for the join
    FH_ID  NUMBER,   -- holds 1,2,3,...,10
    FH_VAL NUMBER    -- or VARCHAR2, whatever the type may be
    );
    
    CREATE TABLE HR_EC (
    ID     NUMBER,   -- HR_BASELINE primary key, if any such column exists for the join
    EC_ID  NUMBER,   -- 1..7
    EC_VAL NUMBER
    );
    Then you can query as below:

    Code:
    SELECT
    ........
    decode(HR_Financial_Hierarchy.FH_ID, 1, FH_VAL) FH1,
    decode(HR_Financial_Hierarchy.FH_ID, 2, FH_VAL) FH2,
    ........
    decode(HR_EC.EC_ID, 1, EC_VAL) EC1,
    decode(HR_EC.EC_ID, 2, EC_VAL) EC2,
    decode(HR_EC.EC_ID, 3, EC_VAL) EC3,
    ........
    FROM HR_BASELINE, HR_Financial_Hierarchy, HR_EC
    WHERE HR_BASELINE.ID = HR_Financial_Hierarchy.ID
    AND   HR_BASELINE.ID = HR_EC.ID
    ........
    GROUP BY ................
    HR_Financial_Hierarchy.ID, HR_Financial_Hierarchy.FH_VAL,
    HR_EC.EC_ID, HR_EC.EC_VAL,
    ...............................
    This query will have some performance overhead because of the joins. But I think grouping on the 10 Financial_Hierarchy_1..10 columns and the 8 EC1..EC8 columns consumes more time than grouping on the 4 columns HR_Financial_Hierarchy.ID, FH_VAL, HR_EC.EC_ID and HR_EC.EC_VAL.

    And regarding the block size: a single record of HR_BASELINE consumes more than one block, which means a lot of block overhead due to row chaining. Using a block size that can fit one or two records in a single block will reduce I/O.

    If you are accessing 4 rows from HR_BASELINE, that means reading roughly 6-8 blocks; if you increase the block size you can reduce the number of blocks needed to access those same 4 rows.

    I hope these are valid points both for normalizing and for increasing the block size of the tablespace that holds the HR_BASELINE table.
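
    One way to check whether HR_BASELINE really is chaining rows under the current block size is to gather statistics with ANALYZE (which, unlike DBMS_STATS, populates CHAIN_CNT) and then look at the dictionary; a small sketch:

    Code:
    ANALYZE TABLE hr_baseline COMPUTE STATISTICS;
    
    SELECT num_rows, blocks, avg_row_len, chain_cnt
    FROM   user_tables
    WHERE  table_name = 'HR_BASELINE';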

  5. #15
    Join Date
    Aug 2002
    Location
    Colorado Springs
    Posts
    5,253
    Quote Originally Posted by tabreaz
    This query will have some performance overhead because of the joins. But I think grouping on the 10 Financial_Hierarchy_1..10 columns and the 8 EC1..EC8 columns consumes more time than grouping on the 4 columns HR_Financial_Hierarchy.ID, FH_VAL, HR_EC.EC_ID and HR_EC.EC_VAL.
    Whether you want to do this or not depends on the trade-off between time spent on maintaining the MV and time spent on querying the result. If the emphasis is to get results for the end user as fast as possible then it makes sense to include the full hierarchy of values and take the hit on the MV maintenance, especially if the MV is extensively used.
    David Aldridge,
    "The Oracle Sponge"

    Senior Manager, Business Intelligence Development
    XM Satellite Radio
    Washington, DC

    Oracle ACE

  6. #16
    Join Date
    Sep 2005
    Posts
    278
    Hi Dave,

    Whether you want to do this or not depends on the trade-off between time spent on maintaining the MV and time spent on querying the result. If the emphasis is to get results for the end user as fast as possible then it makes sense to include the full hierarchy of values and take the hit on the MV maintenance, especially if the MV is extensively used.
    I got your point, but what about grouping the result set on 18 columns rather than on 4 columns plus a join (with a proper index)? Maybe the costs of both options are about equal; what do you say? And through DECODE statements all of the required columns can still be produced.

  7. #17
    Join Date
    Aug 2002
    Location
    Colorado Springs
    Posts
    5,253
    It's the sort of thing that would be amenable to testing.

    My first instinct, coming from a DW background, would be to materialize all the required values myself. There are a great many optimizations that could be applied to the refresh of a summary table like this, and some of them would allow materialization of all values without grouping by them all.

    For example:
    Code:
    with agg_data as 
       (select
           date_of_month,
           sum(sales_amt) s_amt
        from
           fct_sales
        group by
           date_of_month)
    select
       ad.date_of_month,
       d.date_of_quarter,
       d.date_of_year,
       ad.s_amt
    from
       agg_data ad,
       dim_date_of_month d
    where
       ad.date_of_month = d.date_of_month
    You couldn't define your MV like that, but you could certainly use such code as the basis for a faster refresh than the native MV method.
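
    As a sketch of what such a manual refresh might look like (the summary table name is hypothetical; the rest reuses the example above):

    Code:
    TRUNCATE TABLE sales_month_summary;
    
    INSERT /*+ APPEND */ INTO sales_month_summary
    WITH agg_data AS
       (SELECT date_of_month,
               SUM(sales_amt) s_amt
        FROM   fct_sales
        GROUP BY date_of_month)
    SELECT ad.date_of_month,
           d.date_of_quarter,
           d.date_of_year,
           ad.s_amt
    FROM   agg_data ad,
           dim_date_of_month d
    WHERE  ad.date_of_month = d.date_of_month;
    
    COMMIT;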
    David Aldridge,
    "The Oracle Sponge"

    Senior Manager, Business Intelligence Development
    XM Satellite Radio
    Washington, DC

    Oracle ACE

  8. #18
    Join Date
    Sep 2005
    Posts
    278
    Thanks Dave, it's really a nice tip. I knew about the WITH clause, but I never had the idea of using it like this.

    How to improve productivity??????
