
Thread: Big Turn-off for DML Error Logging

  1. #1
    Join Date
    Nov 2006
    Posts
    13

    Big Turn-off for DML Error Logging

    I was really impressed with 10g's DML error logging feature (where one can use bulk loads and have common errors go into an ERR$_ table without raising the error and rolling back the data). But then, when looking to use the feature in a real-life situation, we ran into this statement in the Oracle DW docs:

    "The following conditions cause the statement to fail and roll back without invoking the error logging capability: ...
    * Any direct-path INSERT operation (INSERT or MERGE) that raises a unique constraint or index violation"

    Boy, that stings. When loading millions of rows into a fact table, we NEED to use direct-path inserts, and we also need local unique indexes so we can catch duplicates on those columns. So we can't use DML error logging in these cases - the statement just rolls back as usual.
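
    For reference, the pattern we were hoping to use looks like this - a minimal sketch with made-up table names (DBMS_ERRLOG generates the ERR$_ table):

    Code:
    -- one-time setup: create the ERR$_FACT_SALES shadow table
    EXEC DBMS_ERRLOG.CREATE_ERROR_LOG('FACT_SALES');

    -- direct-path load; per the docs, a unique violation here still
    -- rolls back the whole statement instead of logging the bad rows
    INSERT /*+ APPEND */ INTO fact_sales
    SELECT * FROM stg_sales
    LOG ERRORS INTO err$_fact_sales ('nightly_load') REJECT LIMIT UNLIMITED;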

    I'd like to get some expert thoughts on this - what workarounds are there that would let us use this nifty new feature? Or, if we can't use a logging table, what are the best alternatives for handling unique constraint violations while still doing direct-path loads (INSERT /*+ APPEND */)? It seems PL/SQL doesn't give us a perfect option either - even with FORALL, it's not direct path (rough sketch below).
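
    To be concrete, the FORALL fallback I mean is roughly this (conventional path, hypothetical names; real code would fetch in LIMIT batches rather than one huge BULK COLLECT):

    Code:
    DECLARE
      TYPE t_rows IS TABLE OF stg_sales%ROWTYPE;
      l_rows     t_rows;
      dml_errors EXCEPTION;
      PRAGMA EXCEPTION_INIT(dml_errors, -24381);
    BEGIN
      -- assumes stg_sales matches fact_sales column-for-column
      SELECT * BULK COLLECT INTO l_rows FROM stg_sales;
      FORALL i IN 1 .. l_rows.COUNT SAVE EXCEPTIONS
        INSERT INTO fact_sales VALUES l_rows(i);
    EXCEPTION
      WHEN dml_errors THEN
        FOR i IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
          -- record SQL%BULK_EXCEPTIONS(i).ERROR_INDEX and .ERROR_CODE
          NULL;
        END LOOP;
    END;
    /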

    Thanks.

  2. #2
    Join Date
    Jan 2001
    Posts
    2,828
    Hi

    Well, yes, that's a big turn-off, I agree.


    But in our shop we load the fact table from a flat file using good ol' SQL*Loader, so any unique constraint violations are taken care of by the ERRORS parameter in the control file.
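
    Something along these lines (a made-up control file; ERRORS is how many rejected rows are tolerated before the load aborts - note this is conventional path, since a direct-path load handles unique violations differently, by leaving the index unusable):

    Code:
    OPTIONS (ERRORS=100000)
    LOAD DATA
    INFILE 'sales.dat'
    APPEND
    INTO TABLE fact_sales
    FIELDS TERMINATED BY ','
    (sale_id, sale_date DATE 'YYYY-MM-DD', amount)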


    How are your fact tables loaded?


    regards
    Hrishy

  3. #3
    Join Date
    Aug 2002
    Location
    Colorado Springs
    Posts
    5,253
    If it's errors within the new data alone then it's pretty easy -- use an analytic function to detect the dupes and a multitable insert to write the dupes to a different table. Does error logging work with multitable insert?
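
    For the new-data-only case, something like this sketch (hypothetical names; ROW_NUMBER() marks the duplicates within the incoming set):

    Code:
    INSERT /*+ APPEND */ FIRST
      WHEN rn = 1 THEN
        INTO fact_sales (sale_id, amount) VALUES (sale_id, amount)
      ELSE
        INTO dup_sales  (sale_id, amount) VALUES (sale_id, amount)
    SELECT sale_id, amount,
           ROW_NUMBER() OVER (PARTITION BY sale_id ORDER BY sale_id) AS rn
    FROM   stg_sales;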

    If the dupes could be from combining the new and old data then select from a union all that includes the data with which the new data could have dupes and do the same analytic function ju-ju, but tag the rows read from the target table so they can be filtered out without being reinserted.
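
    And for the combined case, roughly (again hypothetical names; existing rows sort first, so only new rows ever match a branch):

    Code:
    INSERT /*+ APPEND */ FIRST
      WHEN src = 'NEW' AND rn = 1 THEN
        INTO fact_sales (sale_id, amount) VALUES (sale_id, amount)
      WHEN src = 'NEW' AND rn > 1 THEN
        INTO dup_sales  (sale_id, amount) VALUES (sale_id, amount)
      -- rows tagged 'OLD' match neither branch and are discarded
    SELECT sale_id, amount, src,
           ROW_NUMBER() OVER (PARTITION BY sale_id
                              ORDER BY CASE src WHEN 'OLD' THEN 0 ELSE 1 END) AS rn
    FROM  (SELECT sale_id, amount, 'NEW' AS src FROM stg_sales
           UNION ALL
           SELECT sale_id, amount, 'OLD' AS src FROM fact_sales);
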
    David Aldridge,
    "The Oracle Sponge"

    Senior Manager, Business Intelligence Development
    XM Satellite Radio
    Washington, DC

    Oracle ACE

  4. #4
    Join Date
    Sep 2003
    Location
    over the hill and through the woods
    Posts
    995
    Hmmm, flat file = external table.

    Why would anyone want to use SQL*Loader these days? Also, if you loaded the flat file as an external table, wouldn't it be easier to get rid of any dupes before loading?
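
    For instance (a made-up definition; the DISTINCT takes out in-file dupes before they ever hit the fact table):

    Code:
    CREATE TABLE stg_sales_ext (
      sale_id NUMBER,
      amount  NUMBER
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY load_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
      )
      LOCATION ('sales.dat')
    )
    REJECT LIMIT UNLIMITED;

    INSERT /*+ APPEND */ INTO fact_sales
    SELECT DISTINCT sale_id, amount FROM stg_sales_ext;
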
    Oracle: it's not just a database, it's a lifestyle!
    --------------
    BTW....You need to get a girlfriend whose last name isn't .jpg

  5. #5
    Join Date
    Aug 2002
    Location
    Colorado Springs
    Posts
    5,253
    Quote Originally Posted by OracleDoc
    Hmmm, flat file= external table.

    Why would anyone one want to use SQL loader these days?
    One (purely theoretical) reason would be that an organisation that controls o/s level access won't allow you to place data files on the host system, so they have to be loaded from a client.

    just sayin'
    David Aldridge,
    "The Oracle Sponge"

    Senior Manager, Business Intelligence Development
    XM Satellite Radio
    Washington, DC

    Oracle ACE

  6. #6
    Join Date
    Nov 2006
    Posts
    13
    Quote Originally Posted by slimdave
    If it's errors within the new data alone then it's pretty easy -- use an analytic function to detect the dupes and a multitable insert to write the dupes to a different table. Does error logging work with multitable insert?

    If the dupes could be from combining the new and old data then select from a union all that includes the data with which the new data could have dupes and do the same analytic function ju-ju, but tag the rows read from the target table so they can be filtered out without being reinserted.
    These are not bad options, but I wonder what the performance hit would be - our goal would be to avoid doing checks within the SQL or pulling the target table into a UNION ALL, as I'd guess there's a big performance hit there. Since we're comparing against the target table (well, the target partition of the target table), we could be bringing in many millions of rows just to do the check - and most of the time that check would come out clean (no dups).

    Oh, and no, we're not using SQL*Loader - external tables are our method. We can clean the dups within a specific external table, but it's dups against previous loads already in the target fact table that I'm concerned about (they may not come from the same load).
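
    To frame it, the check we'd be stuck with is something like this (hypothetical names, restricted to the one target partition to limit the cost):

    Code:
    INSERT /*+ APPEND */ INTO fact_sales
    SELECT s.sale_id, s.amount
    FROM   stg_sales_ext s
    WHERE  NOT EXISTS (SELECT 1
                       FROM   fact_sales PARTITION (p_current) t
                       WHERE  t.sale_id = s.sale_id);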

    BTW, I have a forum gripe - I'm not getting any notifications from this forum. I have it set to instant and my email address is correct. Is it just poor ol' me?
