8i physical standby: managed recovery across the WAN - painless??
I've got a customer on 8i (a 700 GB database) that wants a standby at a remote location (across the WAN... far, far away).
While I've implemented several standbys across the WAN on 9i (with managed recovery and the LGWR transport mode), and on earlier versions (8i, 7.3.4) by manually transporting logs via scripts, I have never used managed recovery on an 8i database (with the default ARCH transport mode, since there's no concept of standby redo logs or LGWR transport modes in 8i). I was wondering if anyone could share their experience with how well this performs on a somewhat active (~100 MB/min of redo), reasonably big database across a flaky WAN. While 9i handles managed recovery pretty well, I don't know about 8i.
How well does 8i handle network interruptions? Say the RFS connection fails: does it automatically retry transmitting the failed logs after a certain time (the REOPEN clause)? Or does it mean taking care of it manually (copying the failed logs over) and restarting managed recovery every time?
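For context, this is roughly how I read the 8i docs on configuring that retry behavior; the service name standby_db and the archive path are placeholders, so treat this as a hedged sketch rather than a tested config:

```
# init.ora on the primary (8i LOG_ARCHIVE_DEST_n syntax, as I read the docs):
# REOPEN=60 should make ARCH retry a failed destination after 60 seconds;
# OPTIONAL keeps a dead WAN link from stalling the primary.
log_archive_dest_1 = 'LOCATION=/u01/arch MANDATORY'
log_archive_dest_2 = 'SERVICE=standby_db OPTIONAL REOPEN=60'
log_archive_dest_state_2 = enable

-- On the standby, start managed recovery from SQL*Plus; TIMEOUT makes the
-- recovery process give up its wait after 30 minutes instead of hanging.
alter database recover managed standby database timeout 30;
```

Whether REOPEN actually covers the "flaky WAN" case gracefully in practice is exactly what I'm asking about.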
I'm aware of how 8i managed recovery is "supposed to work" on paper... I'm looking for opinions from folks who actually manage an 8i standby environment across the WAN. How painless is 8i managed recovery?
Is there any concept of gap resolution (or something akin to FAL) in 8i at all?? From all the documentation I've read, there is nothing that takes care of gap resolution in 8i. What was the point of introducing "managed recovery" in 8i, then? It feels like a half-baked feature.
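For what it's worth, when we plugged gaps by hand on older versions, the detection part was just arithmetic over log sequence numbers. A trivial sketch (in real life the two numbers would come from v$log_history on each side, and the file naming is made up):

```shell
#!/bin/sh
# Print the archived-log sequence numbers missing on the standby, i.e. the
# gap between the last sequence applied there and the last one archived on
# the primary. You'd then copy arch_<seq>.arc for each printed number and
# restart managed recovery -- none of that is automated in 8i.
gap_sequences() {
  last_applied=$1    # highest sequence# applied on the standby
  last_archived=$2   # highest sequence# archived on the primary
  seq $((last_applied + 1)) "$last_archived"
}

gap_sequences 100 104   # prints 101 102 103 104, one per line
```

The copy-and-reapply part is the painful bit; the arithmetic is the easy bit.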
Originally posted by Axr2: control_file_record_keep_time is set to 30.
Based on the last month's v$log_history, minimum redo generated was 23 GB, max 75 GB, average 40 GB.
Keep in mind - WAN, over 4,000 miles.
PS: In your case, do you enable data compression?
No compression; it sort of just works out for us. We run RAC with each node doing about 7 or 8 GB, and it takes 45 minutes to push a 250 MB log. We may not always be within one log on the standby, but by the end of the day we're all caught up.
Originally posted by Axr2: 45 minutes for 250 MB. Seems a tad faster than 56 Kbps! WAN again? I assume you have a ton (ton > 10) of redo log groups. Surprising that you don't run into more network issues on such low bandwidth.
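Just to put a number on it, assuming decimal megabytes and ignoring protocol overhead, 250 MB in 45 minutes works out to well under a megabit per second:

```shell
# 250 MB pushed in 45 minutes, as an average line rate in Mbit/s
awk 'BEGIN { printf "%.2f Mbit/s\n", (250 * 8) / (45 * 60) }'   # → 0.74 Mbit/s
```

So an order of magnitude above 56 Kbps, but not a lot of headroom for catching up after an outage.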
Ten groups each. I love the gap resolution: we get a network hiccup every other day that drops a log, which Oracle fixes the next time it generates a new archive log.