I am in the middle of a data conversion project and noticed something that surprised me. I inserted approx 1.1 million rows into a table, and after the inserts the table consumed approx 64 MB of data and 40 MB of index in the database. We have archivelog mode turned on, and this operation generated approx 550 MB of archive logs! That works out to roughly 500 bytes of redo per row, against only about 60 bytes of table data and under 40 bytes of index per row.
Is this reasonable? Are there any settings I can change to reduce the redo/archive volume (whilst still remaining in archivelog mode)?
The operation is performed in a package. Because the transaction was too big to do as a single INSERT INTO ... SELECT ..., I use dbms_sql to fetch 100 rows at a time from the SELECT into a set of variables of type dbms_sql.xxx_table. I then insert them in one go, again using dbms_sql with the same set of variables, and commit after every 10,000 rows inserted. A simplified sketch of the pattern is below.
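For illustration, this is roughly the shape of the loop. It is a simplified sketch written with BULK COLLECT / FORALL rather than the actual dbms_sql calls, and src_table / tgt_table and their id/name columns are made-up names, but the 100-row batches and the 10,000-row commits follow the same pattern as the real package:

DECLARE
  -- src_table, tgt_table and the id/name columns are hypothetical names
  CURSOR c_src IS
    SELECT id, name FROM src_table;

  TYPE t_id_tab   IS TABLE OF src_table.id%TYPE   INDEX BY PLS_INTEGER;
  TYPE t_name_tab IS TABLE OF src_table.name%TYPE INDEX BY PLS_INTEGER;

  l_ids   t_id_tab;
  l_names t_name_tab;
  l_total PLS_INTEGER := 0;
BEGIN
  OPEN c_src;
  LOOP
    -- fetch the next 100 rows from the source query
    FETCH c_src BULK COLLECT INTO l_ids, l_names LIMIT 100;
    EXIT WHEN l_ids.COUNT = 0;

    -- insert the whole batch in a single execution
    FORALL i IN 1 .. l_ids.COUNT
      INSERT INTO tgt_table (id, name) VALUES (l_ids(i), l_names(i));

    l_total := l_total + l_ids.COUNT;

    -- commit after every 10,000 rows, as in the real package
    IF MOD(l_total, 10000) = 0 THEN
      COMMIT;
    END IF;
  END LOOP;
  CLOSE c_src;
  COMMIT;
END;
/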
The database version is 188.8.131.52 (on Linux, if that matters). The online redo log files are configured at 50 MB each.
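In case anyone wants to double-check my figures, the redo log size and the archived-log volume should be visible through the usual v$ views, along these lines:

SELECT group#, bytes/1024/1024 AS size_mb, status
FROM   v$log;

SELECT TRUNC(completion_time) AS day,
       ROUND(SUM(blocks * block_size)/1024/1024) AS archived_mb
FROM   v$archived_log
GROUP  BY TRUNC(completion_time)
ORDER  BY day;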
There are no updates, no triggers, and no other activity on the DB.