-
Bulk Insert Memory Usage
Hi,
In the application I am developing, I need to perform a bulk insert into a
table by selecting from another table. I anticipate millions of rows in the
source table.
My question is: will the bulk insert statement try to gather all of the
source rows in memory before performing the insert?
I want to make sure the operation does not fail due to lack of memory or
lack of rollback segment space.
Is there an option or hint I can give in the INSERT command to avoid
running out of memory?
Is there a way I can commit after INSERTing every 10,000 records?
Thanks in advance
-
If you commit every 10,000 records you are almost sure to hit a "snapshot too old" (ORA-01555) error.
The data will not be held in memory before the insert starts.
The most efficient insert mechanism would be to use ...
insert /*+ append */ into my_table
select whatever ...
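For example, a minimal sketch (my_table, source_table and the column list are placeholders):

    -- Direct-path insert: loads data above the table's high-water mark
    -- and generates minimal undo for the table data itself.
    insert /*+ append */ into my_table (id, payload)
    select id, payload
    from   source_table;
    commit;

Note that after a direct-path (APPEND) insert, the session must commit before it can query the table again.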
If you do not have enough rollback space, you could either ...
i) create a really big rollback segment and set your session to use it, or
ii) avoid logging the insert (although that would need a full backup afterwards to avoid data loss).
Both options are sketched below.
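A minimal sketch, assuming a rollback segment named big_rbs already exists (big_rbs, my_table and source_table are placeholder names):

    -- Option ii first: ALTER TABLE is DDL and commits, so run it
    -- before starting the big transaction. With NOLOGGING set, the
    -- direct-path insert generates minimal redo; take a backup
    -- afterwards to avoid data loss.
    alter table my_table nologging;

    -- Option i: SET TRANSACTION must be the first statement of the
    -- transaction, and binds it to the large rollback segment.
    set transaction use rollback segment big_rbs;

    insert /*+ append */ into my_table
    select * from source_table;
    commit;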