I am using BULK COLLECT and a bulk update (FORALL) in a stored procedure that selects 1.5 million records and updates them. When I run this stored procedure I get the error below:
ORA-04030: out of process memory when trying to allocate 283468
I am running Oracle 188.8.131.52 on Solaris 2.8 with 22 GB of physical RAM.
I went to Metalink and found some suggested fixes, such as increasing the ulimit UNIX resource limit for the process and reducing SORT_AREA_SIZE.
I tried these, but it is still failing.
ulimit -a on my Unix box shows:
stack (kbytes) 119200 -- this was earlier 8 KB
and SORT_AREA_SIZE for the session is set to 1 MB.
Can any of you suggest a solution if you have encountered and fixed the same problem?
When you use BULK COLLECT you should limit your array size; instead of collecting 1.5 million rows and processing them in one go, do an array fetch of, say, 100 rows at a time:
FETCH ... BULK COLLECT INTO ... [LIMIT rows];
If you try to store 1.5 million rows in an array, you will probably get worse performance and an oversized PGA, or exactly the problem you are having: out of memory.
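A minimal sketch of the batched pattern, assuming a hypothetical source table big_table with id and amount columns (these names, the batch size, and the update logic are illustrative, not from the original post):

```sql
DECLARE
    CURSOR c_src IS
        SELECT id, amount FROM big_table;          -- assumed source table

    TYPE t_id_tab  IS TABLE OF big_table.id%TYPE;
    TYPE t_amt_tab IS TABLE OF big_table.amount%TYPE;
    l_ids  t_id_tab;
    l_amts t_amt_tab;
BEGIN
    OPEN c_src;
    LOOP
        -- fetch at most 100 rows per round trip; keeps the PGA small
        FETCH c_src BULK COLLECT INTO l_ids, l_amts LIMIT 100;

        -- process whatever was fetched, including a short last batch
        FORALL i IN 1 .. l_ids.COUNT
            UPDATE big_table
               SET amount = l_amts(i)
             WHERE id = l_ids(i);

        -- exit AFTER processing: %NOTFOUND becomes TRUE as soon as
        -- the fetch returns fewer than LIMIT rows
        EXIT WHEN c_src%NOTFOUND;
    END LOOP;
    CLOSE c_src;
    COMMIT;
END;
```

Note the order inside the loop: process first, test %NOTFOUND last, so the short final batch is never skipped.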
That was really awesome: 1.5 million rows got updated in 12 minutes.
But I have one question: if I set the LIMIT and the cursor cannot find LIMIT rows on a loop iteration, won't it exit the loop, leaving those records un-updated? How do I take care of those records?
Thanks a lot once again.
Hmm, are you sure? I remember when I used it last year it fetched everything, even when the number of rows in the last fetch was less than 100.
Yes, if the last fetch returns fewer than 100 rows it exits the loop. Also, if I am just using arrays and not a cursor (SELECT statement), how do I set the LIMIT clause? As you can see in the code below, I am not using any FETCH.
For example, I have code like this:
PROCEDURE insert_cust_data (          -- procedure name assumed; only the fragments below were posted
    p_cust_id_array         IN  NUM_VARRAY,
    p_words_array           IN  CHAR_VARRAY,
    p_cat_id_array          IN  NUM_VARRAY,
    p_no_of_occurence_array IN  NUM_VARRAY,
    p_error_code            OUT NUMBER,
    p_error_text            OUT VARCHAR2
) IS
BEGIN
    p_error_code := 0;

    -- validate inputs
    IF p_cust_id_array IS NULL OR p_words_array IS NULL
       OR p_cat_id_array IS NULL OR p_no_of_occurence_array IS NULL THEN
        p_error_code := 1;
        p_error_text := 'Could not insert KR_DATA ' || SQLERRM;
        RETURN;
    END IF;

    -- 1. loop through varrays and insert into kr_data table..
    IF p_cust_id_array.FIRST IS NOT NULL THEN
        FORALL l_idx IN p_cust_id_array.FIRST .. p_cust_id_array.LAST
            INSERT INTO cust_data_temp       -- column list assumed to match the varray order
            VALUES (p_cust_id_array(l_idx), p_words_array(l_idx),
                    p_cat_id_array(l_idx), p_no_of_occurence_array(l_idx));
    ELSE
        p_error_code := 2;
        p_error_text := 'input size zero ' || SQLERRM;
    END IF;
EXCEPTION
    WHEN OTHERS THEN
        p_error_code := -1;
        p_error_text := 'could Not Add cust Datas ' || SQLERRM;
END;
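To answer the LIMIT question: LIMIT applies only to FETCH ... BULK COLLECT, not to a collection passed in as a parameter, because the parameter array is already fully in memory by the time the procedure runs. If you still want to work in batches (for example, to commit between chunks), you can slice the FORALL index range yourself. A sketch of that idea, meant to sit inside the procedure above (the batch size c_batch is an assumption):

```sql
DECLARE
    c_batch CONSTANT PLS_INTEGER := 100;
    l_lo    PLS_INTEGER := p_cust_id_array.FIRST;
    l_hi    PLS_INTEGER;
BEGIN
    WHILE l_lo IS NOT NULL AND l_lo <= p_cust_id_array.LAST LOOP
        -- upper bound of this slice, clamped to the end of the array
        l_hi := LEAST(l_lo + c_batch - 1, p_cust_id_array.LAST);

        FORALL l_idx IN l_lo .. l_hi
            INSERT INTO cust_data_temp
            VALUES (p_cust_id_array(l_idx),
                    p_words_array(l_idx),
                    p_cat_id_array(l_idx),
                    p_no_of_occurence_array(l_idx));

        l_lo := l_hi + 1;
    END LOOP;
END;
```

For an array that already fits in memory, a single FORALL over the full range is usually fine; slicing mainly matters when you want intermediate commits or progress reporting.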
Is it necessary to commit inside the bulk fetch loop to avoid the PGA error?
No, you don't have to commit.
I have a FORALL bulk update, but the updates did not commit.
Earlier I used COMMIT and was able to see the updated info.
Now I am running the same stored procedure again without COMMIT.
I can see the updated info in the session where I ran the bulk update procedure, but when I open a new session and query for the updated info, it shows null values.
So it is a must that a COMMIT be issued, inside or outside the bulk update stored procedure, for the changes to be visible across the entire database.
Please correct me if I am wrong.
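That is correct: Oracle's read consistency means uncommitted changes are visible only to the session that made them; every other session sees the pre-update data until you COMMIT. A minimal two-session sketch (table and column names are assumptions):

```sql
-- Session 1:
UPDATE cust_data_temp SET words = 'changed' WHERE cust_id = 1;
-- Session 1 now sees 'changed'; any other session querying the same
-- row still sees the old value.

COMMIT;
-- Only after the COMMIT do other sessions see 'changed'.
```

Whether the COMMIT lives inside the procedure or is issued by the caller is a design choice; committing in the caller keeps the procedure reusable inside larger transactions.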