-
SQL*Loader - how to ignore duplicate records and continue the load
Hi,
I'm calling SQL*Loader recursively to load data from CSV files, each of which has thousands of records. If there are any duplicate records, SQL*Loader terminates with ORA-00001. My question is how to ignore duplicate records and continue with the load.
Most forum posts suggest using the SKIP option, but I do not think that is a wise choice in my case, as we cannot predict the maximum error count. Moreover, I have set ERRORS=0 in my code so that the load terminates whenever there is a data error.
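For reference, my current invocation looks roughly like this (the schema, control file, and data file names are changed for posting; treat them as illustrative):

    sqlldr userid=scott/tiger control=load_orders.ctl data=orders_001.csv \
           log=orders_001.log bad=orders_001.bad errors=0

With ERRORS=0, the very first rejected row (including an ORA-00001 duplicate) aborts the whole run.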
Please let me know if there is any other way to ignore duplicate records and continue the load.
Thanks in advance.
-
Why not clean up your data so it is correct?
-
As per the requirement, we are not supposed to do that.
-
Why are you not supposed to make sure your data is correct?
-
Then just load all the data and de-dupe the table. No brainer.
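A minimal sketch of the de-dupe step, assuming you load into the table with the unique constraint disabled (or into a copy without it) so the load itself can't hit ORA-00001, and assuming a table ORDERS keyed on ORDER_ID (both names are illustrative; substitute your real table and key columns):

    -- keep one arbitrary survivor per key value and delete the rest
    DELETE FROM orders
     WHERE ROWID NOT IN (SELECT MIN(ROWID)
                           FROM orders
                          GROUP BY order_id);

Then re-enable the constraint once the table is clean.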
-
Originally Posted by anijan
My question is how to ignore duplicate records and continue with the load.
Most forum posts suggest using the SKIP option, but I do not think that is a wise choice in my case, as we cannot predict the maximum error count.
That's the conundrum... you cannot have it both ways.
Either you accept dups or you don't.
Pablo (Paul) Berzukov
Author of Understanding Database Administration, available at Amazon and other bookstores.
Disclaimer: Advice is provided to the best of my knowledge, but no implicit or explicit warranties are given. Since the advisor explicitly encourages testing any and all suggestions in a test (non-production) environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.
-
Though the ERRORS parameter can be used to avoid such failures, is there a way to separate data errors (such as an invalid date or character) from duplicate records?
-
Like you said, there is no way to predict the maximum number of errors. Read the documentation and use the bad and discard files. Your overall choices are:
1. Fix the data ahead of time
2. Load all the data and de-dupe the table
3. Pick a large enough value for ERRORS (see the sketch after this list)
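For option 3, duplicates are rejected into the bad file with ORA-00001 while the load carries on, and the log file records the ORA error for every rejected row, which also lets you tell duplicates apart from conversion errors. If you would rather keep ERRORS=0 for genuine data errors, one standard alternative (a sketch only; the staging table ORDERS_STAGE and the tag are made up) is to load into a constraint-free staging table and then copy across with DML error logging:

    -- one-time setup: creates ERR$_ORDERS to capture rejected rows
    EXEC DBMS_ERRLOG.CREATE_ERROR_LOG('ORDERS');

    -- duplicates land in ERR$_ORDERS instead of aborting the insert
    INSERT INTO orders
    SELECT * FROM orders_stage
    LOG ERRORS INTO err$_orders ('batch_001') REJECT LIMIT UNLIMITED;

    -- triage: ORA-00001 rows are duplicates, anything else is a data error
    SELECT ora_err_number$, COUNT(*)
      FROM err$_orders
     GROUP BY ora_err_number$;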
How many times does this need to be explained?