Commit for every 'n' records in 'insert ... select from'.
Hi
Our application has about 3000 tables. The application has a concept called 'company', and each company can be tied to a physical site. For example, if we have two sites, one in London and one in Florida, then we have two sets of companies, and hence the application would have 3000 + 3000 = 6000 tables.
Each table name has the company number attached at the end: 'payroll422' and 'payroll522' are the same table pointing to the two different companies 422 and 522. If a user in London enters data it goes into 'payroll422', and if a user in Florida enters data it goes into 'payroll522'.
Once every month I have to populate the data of these companies into 'test' companies for the purpose of testing new patches of the application. The test database is on a different machine with much less horsepower (less memory and CPU). I can create a new set of test companies, say 423 and 523, in the test environment; my application easily creates the 3000 tables and indexes needed for the new companies.
Now my job is to populate the data from production companies 422 and 522 into 423 and 523. I have created database links between the PROD and TEST databases, and a small script for the replication/copy of data, as follows
:
:
-- build the copy statement for one table and run it over the link
stmt_str := 'insert into ' || dst_tab_name || ' select * from ' || tab_name || '@prod';
execute immediate stmt_str;
commit;  -- a single commit for the entire table
:
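A minimal sketch of what the surrounding loop might look like, assuming a driver cursor over the remote data dictionary and a simple company-suffix swap (both are illustrative assumptions, not from the original script):

declare
  stmt_str varchar2(4000);
begin
  for t in (select table_name from user_tables@prod
            where table_name like '%422') loop
    -- build the destination name by swapping the company suffix (illustrative)
    stmt_str := 'insert into ' || replace(t.table_name, '422', '423')
             || ' select * from ' || t.table_name || '@prod';
    execute immediate stmt_str;
    commit;  -- one commit per table, as in the original
  end loop;
end;
/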
This script works OK. It populates data from the PROD tables into the TEST tables, with one exception. Some of our tables,
like 'general_ledger422', have 3+ million records. The above statement fails (basically it fills up the entire redolog tablespace) when the number of records is huge. It does this because I am committing just once per table, so for tables with 3+ million records I am not committing until all 3+ million records are inserted. The redolog fills up and it hangs.
So my question is: is it possible to commit for every 1000 records in the above 'insert into ... select from' statement? If so, how do I specify this?
Or, if there is any other way to accomplish this replication/copy, please suggest.
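For reference, the usual way to commit every N rows is a PL/SQL loop using BULK COLLECT ... LIMIT. Below is a minimal per-table sketch; it assumes general_ledger423 mirrors the remote table's structure, and (as the reply below points out) frequent commits limit undo usage, not the amount of redo generated, so it is worth testing on your version:

declare
  cursor src is select * from general_ledger422@prod;
  type t_rows is table of general_ledger423%rowtype;
  buf t_rows;
begin
  open src;
  loop
    fetch src bulk collect into buf limit 1000;  -- fetch in batches of 1000 rows
    exit when buf.count = 0;
    forall i in 1 .. buf.count
      insert into general_ledger423 values buf(i);
    commit;  -- commit after every batch
  end loop;
  close src;
end;
/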
Thanks,
Football
Re: Commit for every 'n' records in 'insert ... select from'.
Something does not make sense here. You said:
Quote:
Originally posted by football
This script works OK. It populates data from the PROD tables into the TEST tables, with one exception. Some of our tables,
like 'general_ledger422', have 3+ million records. The above statement fails (basically it fills up the entire redolog tablespace) when the number of records is huge.
What exactly do you mean by "redolog tablespace"? No such thing exists in Oracle. Are you talking about the *undo* (rollback) tablespace? If yes, then your thought about more frequent commits makes some sense.
BUT! What really confused me is your next remark:
Quote:
So for tables with 3+ million records I am not committing until all 3+ million records are inserted. The redolog fills up and it hangs.
Hangs? Then you are not talking about the undo tablespace filling up, as that would not hang; it would simply throw an error and exit. If the thing really just "hangs", then it sounds to me like your archive log destination is actually becoming full. If that is the case, then there are other options, like using UNRECOVERABLE in a CTAS. And BTW, in this situation more frequent commits will not help in any way: the number of commits has no influence on the amount of redo generated.
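To illustrate the UNRECOVERABLE/CTAS route (table names are illustrative; on later releases the NOLOGGING keyword supersedes UNRECOVERABLE), something along these lines keeps redo generation to a minimum:

-- CTAS over the link, minimizing redo
create table general_ledger423 nologging
  as select * from general_ledger422@prod;

-- or, if the empty table already exists, a direct-path insert:
insert /*+ append */ into general_ledger423
  select * from general_ledger422@prod;
commit;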
So which one is it? Does your undo tablespace fill up, or is it your archive log destination that becomes full and freezes further database operations?