DBAsupport.com Forums

Thread: Commit for every 'n' records in 'insert... select from' .


  1. #1
    Join Date
    Oct 2002
    Posts
    10

    Commit for every 'n' records in 'insert... select from' .

    Hi

Our application has about 3000 tables. The application has a concept called 'company', and each company corresponds to a physical site. For example, if we have two sites, one in London and one in Florida, then we have two sets of companies, so the application ends up with 3000 + 3000 = 6000 tables.

Each table name has the company number attached at the end: 'payroll422' and 'payroll522' are the same table for two different companies, 422 and 522. If a user in London enters data it goes into 'payroll422', and if a user in Florida enters data it goes into the 'payroll522' table.

Once every month I have to populate the data of these companies into 'test' companies for the purpose of testing new patches of the application. The test database is on a different machine, which has much less horsepower (less memory and CPU). I can create a new set of test companies, say 423 and 523, in the test environment. My application easily creates the 3000 tables and indexes needed for the new companies.

Now my job is to populate the data from production companies 422, 522 into 423, 523. I have created database links between the PROD and TEST databases, and a small script for the replication/copy of data, as follows:

    :
    :
stmt_str := 'insert into ' || dst_tab_name || ' select * from ' || tab_name || '@prod';  -- e.g. dst_tab_name = 'payroll423', tab_name = 'payroll422'
execute immediate stmt_str;
commit;
    :

This script works OK. It populates data from the PROD tables into the TEST tables, with one exception: some of our tables, like 'general_ledger422', have about 3+ million records. The above statement fails (basically it fills out the entire redolog tablespace) when the number of records is huge. It does this because I am committing just once for the entire table. So for tables with 3+ million records, I am not committing until all the records (3+ million) are inserted. The redolog is filled and it hangs.

So my question is: is it possible to commit for every 1000 records in the above 'insert into ... select from' statement? If so, how do I specify this?

Or if there is any other way to accomplish this replication/copy, please suggest it.
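
For reference, here is roughly what I have in mind -- only a rough sketch, assuming each table has a primary key column (called ID below, which is a hypothetical name):

    begin
        loop
            -- copy at most 1000 rows per pass that are not yet in the target
            insert into payroll423
                select * from payroll422@prod s
                 where not exists (select null from payroll423 d
                                    where d.id = s.id)
                   and rownum <= 1000;
            exit when sql%rowcount = 0;
            commit;  -- commit after each batch of up to 1000 rows
        end loop;
    end;
    /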

    Thanks,
    Football

  2. #2
    Join Date
    Aug 2002
    Location
    Colorado Springs
    Posts
    5,253
    What a terrible way to design a database.

That aside, I would look at using the SQL*Plus COPY command, which relieves you of the burden of db links and does exactly what you want with regard to a commit interval.

See the SQL*Plus Reference for details of COPY, but the basics of it are that in SQL*Plus you would issue commands like ...



set arraysize 100
set copycommit 10
REM copycommit 10 with arraysize 100 means a commit every 10 arrays of 100 rows, i.e. every 1,000 rows
copy from scott/tiger@sourcedb.world to scott/tiger@destdb.world -
append payroll522 using select * from payroll522

    Note the "-" sign at the end of the line beginning "copy ...". It is a continuation character in SQL*Plus
    David Aldridge,
    "The Oracle Sponge"

    Senior Manager, Business Intelligence Development
    XM Satellite Radio
    Washington, DC

    Oracle ACE

  3. #3
    Join Date
    Aug 2002
    Location
    Atlanta
    Posts
    1,187
Committing every X records will severely affect performance; the more you commit, the worse the performance.

Here's a simple illustration you can try:

    REM =============================================
    REM Set timing on and commit after every insert
    REM Note the time it takes to complete
    REM =============================================


drop table t1
/
create table t1 (col1 number)
/
set timing on
set serveroutput on
begin
  for i in 1..9999 loop
    insert into t1 values (i);
    commit;
  end loop;
end;
/



    REM =============================================
    REM Set timing on and commit after all rows have
    REM been inserted.
    REM Note the time it takes to complete
    REM =============================================


drop table t1
/
create table t1 (col1 number)
/
set timing on
set serveroutput on
begin
  for i in 1..9999 loop
    insert into t1 values (i);
  end loop;
  commit;
end;
/
    I'm stmontgo and I approve of this message

  4. #4
    Join Date
    Aug 2002
    Location
    Colorado Springs
    Posts
    5,253
    That's true stmontgo, but recall that ...

    i) this is a data transfer to a different machine, with network overhead included as well
    ii) it's only done once per month
    iii) the target is a test database -- I'm guessing it's not sized as a production system, nor is the increased load due to frequent committing going to affect anyone else.

So although you are quite right, this may be a circumstance where it is an acceptable sacrifice.

By the way, football, one of the side effects of using the COPY command in this way is that you can put an ORDER BY on the select statement. If your COPY fails on a big data transfer, you can determine where it failed (by querying the target table) and start the copy again halfway through by putting a condition on the SELECT, as sketched below. This works best when you ORDER BY a very selective column and can get Oracle to use an index to fetch the rows in the correct order -- you don't want to be doing large sorts on your production database, I'm thinking.
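
For example (a hypothetical sketch: TXN_ID stands in for whatever selective, indexed column the table has, and 1500000 for the highest value already present in the target table):

    copy from scott/tiger@sourcedb.world to scott/tiger@destdb.world -
    append payroll522 using -
    select * from payroll522 where txn_id > 1500000 order by txn_id
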
    David Aldridge,
    "The Oracle Sponge"

    Senior Manager, Business Intelligence Development
    XM Satellite Radio
    Washington, DC

    Oracle ACE

  5. #5
    Join Date
    Nov 2000
    Location
    Pittsburgh, PA
    Posts
    4,166
    Originally posted by slimdave
    What a terrible way to design a database.
I agree with SlimDave on this. Have you thought about changing your schema to use just one set of tables, partitioning them by company and then using Oracle row-level security to control who can see which company's data (see the sketch at the end of this post)?

You might also look into SQL*Loader for moving the data. You could transform the data as you take it out of the database, compress the file, and then ftp it to your test system.
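
To illustrate the one-set-of-tables idea, a sketch only -- the column names are invented, and LIST partitioning requires Oracle 9i:

    create table payroll (
        company_id   number        not null,   -- 422, 522, ...
        emp_id       number        not null,
        pay_date     date,
        pay_amount   number(12,2)
    )
    partition by list (company_id)
    (
        partition p422 values (422),
        partition p522 values (522)
    );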

  6. #6
    Join Date
    Dec 2000
    Location
    Ljubljana, Slovenia
    Posts
    4,439

    Re: Commit for every 'n' records in 'insert... select from' .

Something does not make sense here. You said:
    Originally posted by football
This script works OK. It populates data from the PROD tables into the TEST tables, with one exception: some of our tables, like 'general_ledger422', have about 3+ million records. The above statement fails (basically it fills out the entire redolog tablespace) when the number of records is huge.
What exactly do you mean by "redolog tablespace"? No such thing exists in Oracle. Are you talking about the *undo* (rollback) tablespace? If yes, then your thoughts about more frequent commits make some sense.

BUT! What really confuses me is your next remark:
So for tables with 3+ million records, I am not committing until all the records (3+ million) are inserted. The redolog is filled and it hangs.
Hangs? Then you are not talking about the undo tablespace being filled, as that would not hang; it would simply throw an error and exit. If the thing really just "hangs", then it sounds to me like your archive log destination is becoming full. If that is really the case, then there are other options, like using UNRECOVERABLE in a CTAS (sketched below). And BTW, in that situation executing more frequent commits will not help in any way -- the number of commits has no influence on the amount of redo generated.
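
A minimal sketch of the UNRECOVERABLE CTAS option, reusing the @prod link and the table names from earlier in the thread (NOLOGGING is the newer spelling of UNRECOVERABLE):

    -- bypasses most redo generation for the initial load
    create table general_ledger423 unrecoverable
    as select * from general_ledger422@prod;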

So which one is it? Does your undo tablespace fill up, or is it your archive log destination that becomes full and freezes further database operations?
    Last edited by jmodic; 11-06-2002 at 10:19 AM.
    Jurij Modic
    ASCII a stupid question, get a stupid ANSI
    24 hours in a day .... 24 beer in a case .... coincidence?

  7. #7
    Join Date
    Oct 2002
    Posts
    10

    Here I am...

    Hi

I am going to try SLIMDAVE's recommendation to use the COPY command. I looked into the SQL*Plus documentation for this command. It seems to suit my requirement.

Well, my (ERP) application creates the tables with the company number attached to the table names. There is not much I can do about it.

As STMONTGO pointed out, regular/frequent commits may affect performance, but I don't see any other choice.

JMODIC, well, I was wrong when I said redolog tablespace. I meant the rollback tablespace. We do switch off archive logging while doing this copy, so there is no question of archive logs filling up the disk space. The situation here is that the rollback tablespace fills up and it really hangs (on the application side), and when I look at the alert log there is this message/error:

    Mon Oct 07 10:12:03 2002
    ORA-1650: unable to extend rollback segment RB1 by 64 in tablespace ROLLBACK_DATA

    Failure to extend rollback segment 2 because of 1650 condition
    FULL status of rollback segment 2 set.

    So I killed the copy process.
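
(For what it's worth, I understand the single big transaction could instead be pointed at one deliberately large rollback segment -- a sketch only, assuming such a segment, here called BIG_RBS, had been created and brought online. SET TRANSACTION must be the first statement of the transaction for this to take effect:

    set transaction use rollback segment big_rbs;
    insert into general_ledger423 select * from general_ledger422@prod;
    commit;
)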

    Jurij, I am not really into Skydiving sport....

I won't be using the ORDER BY in the copy ... select statement right now. I will introduce it once the copy is functional.

    Thanks for your responses....

    -Football

  8. #8
    Join Date
    Nov 2000
    Location
    Baltimore, MD USA
    Posts
    1,339

    Re: Here I am...

    Originally posted by football
    Jurij, I am not really into Skydiving sport....
Uh, you do realize that is just part of his signature, don't you?

BTW, considering what we do know about this model, I shudder to think about why they might actually have 3000 (!) tables.

    - Chris
    Christopher R. Long
    ChrisRLong@HotMail.Com
    But that's just my opinion. I could be wrong

  9. #9
    Join Date
    Nov 2000
    Location
    greenwich.ct.us
    Posts
    9,092

    Re: Re: Here I am...

    Originally posted by chrisrlong
    BTW, considering what we do know about this model, I shudder to think about why they might actually have 3000! tables.
Because an Oracle table can only have 1000 columns, and they needed:
    PAY_DATE_2201, PAY_AMOUNT_2201, PAY_DATE_2202, PAY_AMOUNT_2202, PAY_DATE_2203, PAY_AMOUNT_2203, PAY_DATE_2204, PAY_AMOUNT_2204, etc. to implement multiple pay_dates and pay_amounts...
    Jeff Hunter

  10. #10
    Join Date
    Nov 2000
    Location
    Pittsburgh, PA
    Posts
    4,166

    Re: Re: Re: Here I am...

    Originally posted by marist89
    Because Oracle can only have 1000 columns and they needed:
    PAY_DATE_2201, PAY_AMOUNT_2201, PAY_DATE_2202, PAY_AMOUNT_2202, PAY_DATE_2203, PAY_AMOUNT_2203, PAY_DATE_2204, PAY_AMOUNT_2204, etc. to implement multiple pay_dates and pay_amounts...
    I agree a fully denormalized schema is the mark of every good programmer.
