-
Excessive redo?
We wanted some estimates of redo generated based on how much raw data will be processed. As far as I can tell there is no general rule of thumb, so I started running some tests.
I have a table that's a little under 300MB. I created a test table and did an insert as select *.
For a table with nothing on it (no indexes, etc.), the redo-to-data ratio was close to 1:1.
I added a primary key, and it was up to 4.6:1.
I added a snapshot log, and it was almost 11:1.
In another set of tables in one of our defined replication groups (this particular group is 8 tables with primary keys, 7 foreign keys, and replication turned on), I loaded a nominal set of data (4.7 MB) and the redo generated was 123.7 MB, over 26 times as much redo as data.
Do these numbers seem excessive? I can't really say I ever looked into this much, so I don't know whether they are reasonable or not. I know indexes and various replication options will increase redo, but I didn't expect it to be this high.
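For anyone who wants to reproduce the numbers above: the per-test redo figures can be taken from the session statistic 'redo size', sampled before and after each load. A sketch (the table names here are hypothetical; the v$ views require SELECT privileges on the dictionary):

```sql
-- Current session's cumulative redo, in bytes.
-- Run once before the load and once after; the difference is
-- the redo generated by the load.
SELECT n.name, s.value AS redo_bytes
  FROM v$mystat s
  JOIN v$statname n ON n.statistic# = s.statistic#
 WHERE n.name = 'redo size';

-- Hypothetical test load, roughly what was described above:
-- CREATE TABLE test_copy AS SELECT * FROM source_table;
-- or: INSERT INTO test_copy SELECT * FROM source_table;
```

Dividing the redo-bytes delta by the segment size of the loaded data (from DBA_SEGMENTS) gives the redo-to-data ratios quoted in this thread.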
-
Should I take that as an "I don't know"?
-
If anyone was curious, there's an exponential increase in redo as indexes are added to a table, which is why the numbers are so high.