If it's true that you don't have to do anything else with this table (which seems doubtful -- what's the point of loading the data and identifying errors if you never look at or use the results again?) then you don't need to analyze it.
If you mean that the first run would set the error code for half the rows, and that those rows could then be skipped when the script was rerun, then I don't think it would make a difference.
In the absence of indexes, a full scan would have to be performed anyway. You could add an index on the error code column, but even if the table were analyzed, the cost-based optimizer ought to ignore it at that selectivity.
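For illustration, a sketch of what that indexed rerun might look like (the table and column names here are assumptions, not from your setup):

```sql
-- Hypothetical names: an error_log table with an error_code column.
CREATE INDEX error_log_err_ix ON error_log (error_code);

-- Rerun: pick up only the rows not yet flagged. Note that a standard
-- single-column b-tree index does not store entirely-NULL keys, so this
-- IS NULL predicate could not use the index anyway -- and even with a
-- usable predicate, at ~50% selectivity the CBO should still choose a
-- full scan over an index range scan.
SELECT *
  FROM error_log
 WHERE error_code IS NULL;
```

In other words, the index would mostly add maintenance cost on every update without changing the access path.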
Does each row get updated with a code, whether or not an error is found? If so, what proportion of the rows generally have no error?