DBAsupport.com Forums - Powered by vBulletin

Thread: Bulk insert Memory Usage

  1. #1
    Join Date
    Dec 2002
    Posts
    1

    Question Bulk insert Memory Usage

    Hi,

    In the application I am developing, I need to perform a bulk insert into a
    table by selecting from another table. I anticipate millions of rows in the
    source table.

    My question is, will the bulk insert statement try to gather all rows from
    the source in memory and then perform the insert?

    I want to make sure the operation does not fail due to lack of memory or
    lack of rollback segment space.

    Is there an option, or hint that I can give in the INSERT command to avoid
    running out of memory?
    Is there a way I can commit after INSERTing every 10000 records?

    Thanks in advance

  2. #2
    Join Date
    Aug 2002
    Location
    Colorado Springs
    Posts
    5,253
    If you commit every 10,000 records you are almost sure to hit a "snapshot too old" (ORA-01555) error.
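
    For reference, the commit-every-10,000-rows approach the poster asks about would typically look like the PL/SQL sketch below (table names are placeholders, not from the thread). The incremental COMMIT inside the loop is exactly what invites ORA-01555: the long-running cursor still needs undo that each commit frees for reuse.

    ```sql
    -- Hypothetical batched-commit pattern; source_table/target_table are placeholders.
    DECLARE
      CURSOR c IS SELECT * FROM source_table;
      TYPE t_rows IS TABLE OF source_table%ROWTYPE;
      l_rows t_rows;
    BEGIN
      OPEN c;
      LOOP
        FETCH c BULK COLLECT INTO l_rows LIMIT 10000;
        EXIT WHEN l_rows.COUNT = 0;
        FORALL i IN 1 .. l_rows.COUNT
          INSERT INTO target_table VALUES l_rows(i);
        COMMIT;  -- committing mid-cursor is the risky part (ORA-01555)
      END LOOP;
      CLOSE c;
    END;
    /
    ```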

    The data will not be held in memory before the insert starts.

    The most efficient insert mechanism would be to use ...

    insert /*+ append */ into my_table
    select whatever ...
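
    To spell that out a bit: the APPEND hint requests a direct-path insert, which loads data above the table's high-water mark instead of going through the buffer cache row by row. One practical detail worth knowing (a sketch, with hypothetical table names):

    ```sql
    -- Direct-path insert: data is written above the high-water mark.
    INSERT /*+ APPEND */ INTO my_table
    SELECT * FROM source_table;
    -- The session must commit before it can query my_table again,
    -- otherwise Oracle raises ORA-12838.
    COMMIT;
    ```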

    If you do not have enough rollback space, you could either ...

    i) create a really big rollback segment and set your session to use it, or
    ii) avoid logging the insert (although that would need a full backup afterwards to avoid data loss).
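
    The two options above might look like this (segment and table names are hypothetical; option i assumes manual undo management, which was standard at the time):

    ```sql
    -- Option (i): pin the transaction to a large rollback segment.
    SET TRANSACTION USE ROLLBACK SEGMENT rbs_big;
    INSERT /*+ APPEND */ INTO my_table
    SELECT * FROM source_table;
    COMMIT;

    -- Option (ii): minimize redo with NOLOGGING plus a direct-path insert.
    -- NOLOGGING changes cannot be recovered from the redo log,
    -- hence the full backup afterwards.
    ALTER TABLE my_table NOLOGGING;
    INSERT /*+ APPEND */ INTO my_table
    SELECT * FROM source_table;
    COMMIT;
    ALTER TABLE my_table LOGGING;
    ```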
    David Aldridge,
    "The Oracle Sponge"

    Senior Manager, Business Intelligence Development
    XM Satellite Radio
    Washington, DC

    Oracle ACE
