DBAsupport.com Forums

Thread: SQL Loader - process runs slow

  1. #1 (Joined Mar 2001; Reading, U.K; 598 posts)

    SQL Loader - process runs slow

    Hi ALL,

    I have 40,000 records to load from a file and around 200 columns, of which at least 20 have DATE conversion logic on them.
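    For reference, "date conversion logic" in a SQL*Loader control file is usually either a DATE mask on the field or a SQL expression; the snippet below is illustrative only, with hypothetical column names and formats (not taken from the actual control file):
    Code:
    LOAD DATA
    INFILE 'ipdr.dat'
    APPEND
    INTO TABLE IPDR_3G_REL2_TMP
    FIELDS TERMINATED BY ',' TRAILING NULLCOLS
    (
      SESSION_START DATE "YYYYMMDDHH24MISS",                       -- date mask form
      SESSION_END   CHAR "TO_DATE(:SESSION_END, 'YYYYMMDDHH24MISS')" -- SQL expression form
    )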

    At present this job takes 3 minutes if I don't give the parameters ROWS=10000 BINDSIZE=20000000 READSIZE=20000000.

    With them, it takes 52 seconds.
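    (The run being timed is presumably something along these lines; the userid and file names here are placeholders:)
    Code:
    sqlldr userid=scott/tiger control=ipdr.ctl log=ipdr.log rows=10000 bindsize=20000000 readsize=20000000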

    I am pasting part of the log file:

    Number to load: ALL
    Number to skip: 0
    Errors allowed: 50
    Bind array: 10000 rows, maximum of 20000000 bytes
    Continuation: none specified
    Path used: Conventional

    Table IPDR_3G_REL2_TMP, loaded from every logical record.
    Insert option in effect for this table: APPEND
    TRAILING NULLCOLS option in effect


    Table IPDR_3G_REL2_TMP:
    39962 Rows successfully loaded.
    0 Rows not loaded due to data errors.
    0 Rows not loaded because all WHEN clauses were failed.
    0 Rows not loaded because all fields were null.


    Space allocated for bind array: 19973898 bytes(742 rows)
    Read buffer bytes:20000000

    Total logical records skipped: 0
    Total logical records read: 39962
    Total logical records rejected: 0
    Total logical records discarded: 0

    Run began on Mon Nov 07 11:57:33 2005
    Run ended on Mon Nov 07 11:58:25 2005

    Elapsed time was: 00:00:51.90
    CPU time was: 00:00:23.32

    Thanks in advance.

    Cheers!
    OraKid.

  2. #2 (Joined Mar 2001; Reading, U.K; 598 posts)
    Just to add to that: even with only 234 records, it still takes 50 seconds.

    How can I improve this?

    Table IPDR_3G_REL2_TMP:
    234 Rows successfully loaded.
    0 Rows not loaded due to data errors.
    0 Rows not loaded because all WHEN clauses were failed.
    0 Rows not loaded because all fields were null.


    Space allocated for bind array: 19974640 bytes(742 rows)
    Read buffer bytes:20000000

    Total logical records skipped: 0
    Total logical records read: 234
    Total logical records rejected: 0
    Total logical records discarded: 0

    Run began on Mon Nov 07 11:54:10 2005
    Run ended on Mon Nov 07 11:54:59 2005

    Elapsed time was: 00:00:49.16
    CPU time was: 00:00:00.55

    Cheers!
    OraKid.

  3. #3 (Joined May 2000; ATLANTA, GA, USA; 3,135 posts)
    At present this job takes 3 minutes if I don't give the parameters ROWS=10000 BINDSIZE=20000000 READSIZE=20000000.

    This shows that the larger values are not good on your system. Reduce the values and test.
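    For example, a smaller-footprint run might look like this (the exact values are arbitrary starting points for testing, not recommendations):
    Code:
    sqlldr userid=scott/tiger control=ipdr.ctl log=ipdr.log rows=1000 bindsize=2000000 readsize=2000000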

    If you want to improve performance, split the data file into four pieces and load them concurrently.

    Tamil

  4. #4 (Joined Jul 2002; Lake Worth, FL; 1,492 posts)


    ...Of which at least 20 columns have DATE conversion logic on them.

    The above causes the load path to be "Conventional" instead of "Direct". Post an example of this date "conversion" logic.

    Also, use tamilselvan's suggestion of dividing the file into four, except there is no need to physically divide the file; just run something like this:
    Code:
    sqlldr / control=myControl.ctl LOAD=10000 log=myLog1.log &
    sqlldr / control=myControl.ctl SKIP=10000 LOAD=10000 log=myLog2.log &
    sqlldr / control=myControl.ctl SKIP=20000 LOAD=10000 log=myLog3.log &
    sqlldr / control=myControl.ctl SKIP=30000 log=myLog4.log &
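    One trade-off worth noting with the SKIP approach: each session still reads through the records it skips before loading its own chunk, so the later sessions re-scan part of the file. For a 40,000-row file that overhead is small, but it grows with file size.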

    "The person who says it cannot be done should not interrupt the person doing it." --Chinese Proverb

  5. #5 (Joined Mar 2001; Reading, U.K; 598 posts)
    Thanks for your input, guys.

    I am just wondering why we need to split the file when there are only 40,000 records.

    I have now set direct=true in the options, and the job runs in 22 seconds.
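    For anyone following along, that switch can go on the command line or, equivalently, in an OPTIONS (DIRECT=TRUE) clause at the top of the control file; a minimal sketch with placeholder names:
    Code:
    sqlldr userid=scott/tiger control=ipdr.ctl log=ipdr.log direct=true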

    Please post if you have any more suggestions.

    Cheers!
    OraKid.

  6. #6 (Joined May 2000; ATLANTA, GA, USA; 3,135 posts)
    Maybe you can examine this option in sqlldr:

    date_cache -- size (in entries) of date conversion cache (Default 1000)
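    The date cache applies to direct path loads, which fits the direct=true run above; since the ~20 date columns likely repeat many values, a larger cache can cut conversion work. A hypothetical invocation sizing the cache near the row count:
    Code:
    sqlldr userid=scott/tiger control=ipdr.ctl log=ipdr.log direct=true date_cache=40000
    The direct path log reports date cache statistics (entries, hits, misses), which should show whether the larger cache is helping.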

    Tamil
