DBAsupport.com Forums - Powered by vBulletin

Thread: Question to Improve PL/SQL Performance

  1. #21
    Join Date
    Dec 2002
    Posts
    36
=> Pando, you got it right: the C program running on the Dell PC uses arrays, generates tags from the id column, and stores them in memory. I don't know exactly how it works.

Ok, here are my questions about the sample code you mentioned:

Should the SELECT ... BULK COLLECT be against the table on which I open the cursor?

If I bulk-select some records, how do I find out which records need to be updated and which need to be inserted into table2 using FORALL? Can I use a BEGIN ... EXCEPTION ... END block with FORALL?

FORALL i IN X_OBJID_INTERACT_ARRAY.FIRST .. X_OBJID_INTERACT_ARRAY.LAST
  DELETE FROM TABLE_INTERACT_TXT
   WHERE INTERACT_TXT2INTERACT = X_OBJID_INTERACT_ARRAY(i);
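For the exception question, reading the docs I wonder whether something like this would work (just a sketch with made-up table and array names; it looks like 9i's SAVE EXCEPTIONS lets FORALL carry on past bad rows and report them afterwards):

```sql
DECLARE
  TYPE IdTab IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
  x_objid_interact_array IdTab;
  bulk_errors EXCEPTION;
  PRAGMA EXCEPTION_INIT(bulk_errors, -24381);  -- raised by FORALL ... SAVE EXCEPTIONS
BEGIN
  SELECT objid BULK COLLECT INTO x_objid_interact_array
    FROM table_interact;                        -- hypothetical source table

  FORALL i IN x_objid_interact_array.FIRST .. x_objid_interact_array.LAST
    SAVE EXCEPTIONS
    DELETE FROM table_interact_txt
     WHERE interact_txt2interact = x_objid_interact_array(i);
EXCEPTION
  WHEN bulk_errors THEN
    -- report the rows that failed; the rest of the batch still went through
    FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
      dbms_output.put_line('row ' || SQL%BULK_EXCEPTIONS(j).ERROR_INDEX ||
                           ': ' || sqlerrm(-SQL%BULK_EXCEPTIONS(j).ERROR_CODE));
    END LOOP;
END;
```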
    fossil

  2. #22
    Join Date
    Jun 2000
    Location
    Madrid, Spain
    Posts
    7,447
That's why I said you probably have to look at your logic. I mean, if the Pro*C program is working with arrays then most probably you don't have to process row by row, but of course I don't know how the Pro*C program does the checking that you do in PL/SQL.

  3. #23
    Join Date
    Dec 2002
    Posts
    36
    Hey Guys, I am back. I hope everyone is doing great.

I was awake until 1:00 am thinking about the problem. I went over all the suggestions you provided a couple of times. I may be wrong, but the one common suggestion coming out is the "collections" and "bulk" concept; however, I couldn't understand how to use BULK COLLECT. Maybe I can use it instead of the cursor on the txn table and fetch the records in batches of 10,000.

To avoid submitting a SELECT to table2 and table3 every time a record comes from the txn table, I could create two PL/SQL tables and populate them from the key column values of table2 and table3. That way I wouldn't be submitting the SELECTs to the database, and I might get some benefit.
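What I have in mind is roughly this (table and column names are just placeholders):

```sql
DECLARE
  TYPE KeyTab IS TABLE OF VARCHAR2(10) INDEX BY BINARY_INTEGER;
  t2_keys KeyTab;
  t3_keys KeyTab;
BEGIN
  -- load each key column once, instead of issuing a SELECT per txn row
  SELECT key_col BULK COLLECT INTO t2_keys FROM table2;
  SELECT key_col BULK COLLECT INTO t3_keys FROM table3;

  -- ... then check each txn record against t2_keys / t3_keys in memory
END;
```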

My hypothetical assumption is that this will run better than the SELECTs against the database.

Do you guys agree with this? Please let me know if you feel I wouldn't benefit, given that I will have to loop through 30 million key values to find a match in the PL/SQL table.


Please write your comments. Meanwhile I am preparing to write the code for this.


    Thanks everyone for sharing your valuable thoughts on this.



    thanks,
    fossil

  4. #24
    Join Date
    Aug 2002
    Location
    Colorado Springs
    Posts
    5,253
You might try going at this in stages -- compare the current method of selecting from the cursor (doing no other steps, just selecting the whole set) with selecting using BULK COLLECT.

    Then build in the inserts, then updates, and just compare raw speeds without your PL/SQL business logic.

I am positive that the way to increase the speed of your process is bulk operations, and the effort will be well worthwhile.
    David Aldridge,
    "The Oracle Sponge"

    Senior Manager, Business Intelligence Development
    XM Satellite Radio
    Washington, DC

    Oracle ACE

  5. #25
    Join Date
    Nov 2002
    Location
    Geneva Switzerland
    Posts
    3,142
    I agree with slimdave about building it in stages.

    One additional suggestion - when you SELECT the row that will be updated, include the ROWID in the column list, then use this for the WHERE in the UPDATE statement. Could save a bit of time (even when everything is in memory).
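Something along these lines (made-up table and column names, just to show the idea):

```sql
DECLARE
  -- carry the ROWID along with the data you need
  CURSOR c IS SELECT rowid rid, amount FROM txn WHERE status = 'NEW';
BEGIN
  FOR r IN c LOOP
    -- ... your business logic on r.amount ...
    UPDATE txn
       SET status = 'DONE'
     WHERE rowid = r.rid;   -- direct row access, no index probe needed
  END LOOP;
END;
```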
    "The power of instruction is seldom of much efficacy except in those happy dispositions where it is almost superfluous" - Gibbon, quoted by R.P.Feynman

  6. #26
    Join Date
    Dec 2002
    Posts
    36
    => To Everyone participating in the discussion.

    Hi guys,

I am back, and back to square one.

What I haven't implemented yet is trying the ROWID in the UPDATE, as suggested by DaPi.

Yesterday, I bulk-collected the key column values into a PL/SQL table and used the PL/SQL table as a lookup for the availability of a record. The test was conducted on 4 million key values.


I had to loop through it using:


FOR i IN TabSiteId.FIRST .. TabSiteId.LAST
LOOP
  dbms_output.put_line(to_char(sysdate, 'dd/mm hh24:mi:ss'));
  IF TabSiteId(i) = 'AAAAAAA' THEN
    dbms_output.put_line('found: ' || to_char(sysdate, 'dd/mm hh24:mi:ss'));
  END IF;
END LOOP;


I searched for a key value that existed near the end of the PL/SQL table, and it took 3 minutes to find it.

    Oh Man... what an improvement.

Selecting from the table using an index is no doubt faster than the above, even if the PL/SQL procedure is switching back and forth between the PL/SQL engine and the SQL engine.

I don't think this search would even benefit from a binary search, because the search list would need to be sorted first.
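One thing I am wondering: could I use the key value itself as the index of the PL/SQL table, so that EXISTS(key) becomes a single probe instead of a scan? For a numeric key something like this should work even in 9.0.1 (made-up names; for character keys I believe INDEX BY VARCHAR2 only arrives in 9i Release 2):

```sql
DECLARE
  TYPE Flag IS TABLE OF BOOLEAN INDEX BY BINARY_INTEGER;
  site_exists Flag;
BEGIN
  -- the key value itself becomes the collection index
  FOR r IN (SELECT site_id FROM table2) LOOP
    site_exists(r.site_id) := TRUE;
  END LOOP;

  -- one probe, no loop over millions of entries
  IF site_exists.EXISTS(1234567) THEN
    dbms_output.put_line('found');
  END IF;
END;
```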


Please let me know if you disagree with this, or if there is something wrong in the test.

Can I use a "SELECT ... WHERE" on a PL/SQL table?

OK, one point I am missing: the bulk collection of the key column into the PL/SQL table takes around 5 minutes for 4 million records.


Another observation -- I think you guys already know this.


In Oracle 9.0.1, I could not bulk collect into a PL/SQL table of records having more than one column. This feature is available in 9i Release 2.

TYPE RecId IS RECORD (id1 CHAR(5), id2 CHAR(4));

r_id RecId;

TYPE TabId IS TABLE OF RecId INDEX BY BINARY_INTEGER;

t_id TabId;


SELECT id1, id2 BULK COLLECT INTO t_id FROM table1;

fails with some error like "wrong list of values".



Until I migrate my 9.0.1 database to Release 2, I can't test a bulk collect of multiple columns followed by a bulk insert and bulk update.


What next now?

Is it easy to migrate to Release 2?

    Please write your comments.


    Thanks,
    fossil

  7. #27
    Join Date
    Aug 2002
    Location
    Colorado Springs
    Posts
    5,253
I believe that you will have to bulk collect into multiple tables. Here is an example quoted in the 9.0.1 PL/SQL User's Guide and Reference (gentle hint, there) ...

DECLARE
  TYPE NameList IS TABLE OF emp.ename%TYPE;
  TYPE SalList IS TABLE OF emp.sal%TYPE;
  CURSOR c1 IS SELECT ename, sal FROM emp WHERE sal > 1000;
  names NameList;
  sals  SalList;
BEGIN
  OPEN c1;
  FETCH c1 BULK COLLECT INTO names, sals;
  ...
END;

One TYPE declaration per column, one PL/SQL table per column. So it's not so difficult that you need to consider the upgrade; it's just rather more "wordy" to code than in 9.2.
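You then drive the bulk DML from the parallel collections, e.g. (the target table here is made up):

```sql
-- names and sals are the parallel collections filled by the
-- FETCH ... BULK COLLECT above; they share the same subscripts
FORALL i IN names.FIRST .. names.LAST
  INSERT INTO emp_copy (ename, sal)
  VALUES (names(i), sals(i));
```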
    David Aldridge,
    "The Oracle Sponge"

    Senior Manager, Business Intelligence Development
    XM Satellite Radio
    Washington, DC

    Oracle ACE

  8. #28
    Join Date
    Dec 2002
    Posts
    36
Hi SlimDave, thanks for your reply. I was kind of struggling with some other interesting comparison and didn't go further with the PL/SQL testing; I will be back with the results for you guys.
    fossil

  9. #29
    Join Date
    May 2000
    Location
    ATLANTA, GA, USA
    Posts
    3,135
You can also consider/evaluate the "in-line view" update method. This can be faster than a ROWID update, provided the values are coming from another table.
    An example:

UPDATE (SELECT e.sal e_sal, m.sal m_sal
          FROM emp e, temp m
         WHERE e.emplid = m.emplid)
   SET e_sal = m_sal;

The above update works when the joined tables have primary keys; more precisely, temp.emplid needs a unique or primary key constraint so that emp is key-preserved in the view.
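Since you also have rows to insert when no match is found, you may want to evaluate MERGE (new in 9i), which does the update-or-insert in one statement; same emp/temp example:

```sql
-- in 9i both the MATCHED and NOT MATCHED branches are mandatory
MERGE INTO emp e
USING temp m
ON (e.emplid = m.emplid)
WHEN MATCHED THEN
  UPDATE SET e.sal = m.sal
WHEN NOT MATCHED THEN
  INSERT (e.emplid, e.sal) VALUES (m.emplid, m.sal);
```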

  10. #30
    Join Date
    Dec 2002
    Posts
    36
    Hello Friends,

I hope everyone is doing great. For the last couple of days I have been working on the problem we discussed here. Using your suggestions and help, I rewrote the whole logic using BULK COLLECT, FORALL update and delete, and extensive use of INDEX BY tables. The whole program is 40 pages of code. The run time has now dropped from hours to minutes.
Thanks to all your good wishes, the results are correct. Thank you very much for your great help and advice.



Now I have been asked to run the program on 50 million rows. The row length is 700 bytes. Out of the 50 million, around 20 million rows will be updated. The SELECT and UPDATE run against the same table.
This raises a new challenge: how to handle such a big update.

How do I avoid the "snapshot too old" error?

Is there any limit to the size of INDEX BY tables?

Should I do frequent updates but not commit?

If I do frequent updates, how do I handle the big rollback?

Should I do frequent commits?

If I do frequent commits, how do I avoid the "snapshot too old" error?
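What I am thinking of trying is a LIMIT-driven loop, fetching and updating in batches with a commit per batch (just a sketch, table and column names made up; I understand committing while the driving cursor stays open can itself provoke "snapshot too old"):

```sql
DECLARE
  CURSOR c IS SELECT rowid rid FROM big_table WHERE needs_update = 'Y';
  TYPE RidTab IS TABLE OF ROWID INDEX BY BINARY_INTEGER;
  rids RidTab;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO rids LIMIT 10000;  -- one batch at a time
    EXIT WHEN rids.COUNT = 0;

    FORALL i IN 1 .. rids.COUNT
      UPDATE big_table SET processed = 'Y' WHERE rowid = rids(i);

    COMMIT;  -- commit per batch keeps the rollback segment usage small
  END LOOP;
  CLOSE c;
END;
```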

Please excuse me if I have written something dumb here.


    I will really appreciate any help on this issue.


    Thanks very much.
    fossil
