-
log switch in a fixed interval of time
Hi,
We have an Oracle9i RAC database with two nodes and two threads of redo log groups (5 groups in each thread), one member per group. The online redo log files are 100M. We run both normal and batch transactions. During batch operations the log switch occurs frequently (minimum 40 seconds, maximum 2 minutes); during normal transactions it occurs much less often (minimum 2 minutes, maximum 2-3 hours, i.e., whenever the online redo log is about 90% full).
We are now in the process of implementing a standby database at a remote site. To avoid data loss from a large gap between log switches, we need the database to generate (switch) redo logs at a regular interval, say every 30 minutes.
Can anyone help me achieve this?
Thanks in advance.
Rgds
Ramesh V
-
Hi Ramesh
You can set up a cron job that logs into your database and forces a log switch at a 30-minute interval:
'alter system switch logfile'
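A minimal sketch of that cron approach, assuming the oracle OS user and hypothetical paths, SID, and log locations (adjust all of them for your site):

```shell
#!/bin/sh
# /home/oracle/switch_logfile.sh (hypothetical path) -- force a log switch.
# Install it from the oracle user's crontab, e.g.:
#   0,30 * * * * /home/oracle/switch_logfile.sh >> /tmp/switch_logfile.log 2>&1
export ORACLE_HOME=/u01/app/oracle/product/9.2.0   # hypothetical path
export ORACLE_SID=ORCL1                            # hypothetical SID
$ORACLE_HOME/bin/sqlplus -s "/ as sysdba" <<EOF
alter system switch logfile;
exit;
EOF
```

Note that on RAC, 'alter system switch logfile' only switches the thread of the instance you are connected to, so you would schedule this on (or connect it to) each node.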
regards
Hrishy
-
You need to increase the redo log file size to a higher value so that the log switch occurs at a 4- or 5-minute interval.
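If you go the resizing route, the usual approach is to add new, larger groups and drop the old ones once they are inactive. A hedged sketch (group numbers, sizes, and file paths below are all hypothetical):

```sql
-- Add larger groups to thread 1 (repeat for thread 2 with its own groups).
ALTER DATABASE ADD LOGFILE THREAD 1
  GROUP 11 ('/u02/oradata/PROD/redo_t1_g11.log') SIZE 500M;
ALTER DATABASE ADD LOGFILE THREAD 1
  GROUP 12 ('/u02/oradata/PROD/redo_t1_g12.log') SIZE 500M;
-- Cycle the old groups out: switch until a group shows INACTIVE
-- in V$LOG, then drop it.
ALTER SYSTEM SWITCH LOGFILE;
ALTER DATABASE DROP LOGFILE GROUP 1;
```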
Beyond that, all you need is to configure Data Guard for the standby database in maximum protection mode.
See the Oracle manual below:
5.7.1 Maximum Protection
Maximum protection mode offers the highest level of data availability for the primary database. When used with force logging, this protection mode guarantees all data that has been committed on the primary database will be available for recovery on the standby site in the event of a failure. Also, if the last participating standby database becomes unavailable, processing automatically halts on the primary database as well. This ensures that no transactions are lost when the primary database loses contact with all of its standby databases.
--------------------------------------------------------------------------------
Note:
Oracle Corporation recommends that you use multiple standby databases when your business requires maximum data protection. With multiple standby databases, if one standby database becomes unavailable, the primary database can continue operations as long as at least one standby database is participating in the configuration.
--------------------------------------------------------------------------------
When operating in maximum protection mode, the log writer process (LGWR) transmits redo records from the primary database to the standby database, and a transaction is not committed on the primary database until it has been confirmed that the transaction data is available on at least one standby database. While this can potentially decrease primary database performance, it provides the highest degree of data protection at the standby site. The impact on performance can be minimized by configuring a network with sufficient throughput for peak transaction load and with low round-trip latency. Stock exchanges, currency exchanges, and financial institutions are examples of businesses that require maximum protection.
Issue the following SQL statement on the primary database to define this level of protection for the overall Data Guard configuration:
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PROTECTION;
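For that statement to succeed, the primary needs at least one synchronous LGWR destination (and the standby needs standby redo logs). A hedged sketch of the primary-side setup, where the service name 'stby' is a hypothetical TNS alias:

```sql
-- Redo is shipped by LGWR synchronously and must be acknowledged
-- (AFFIRM) before a commit completes.
ALTER SYSTEM SET log_archive_dest_2 = 'SERVICE=stby LGWR SYNC AFFIRM';
-- The protection mode itself is raised with the database mounted, not open:
-- SHUTDOWN IMMEDIATE;
-- STARTUP MOUNT;
-- ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PROTECTION;
```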
Tamil
Last edited by tamilselvan; 05-21-2005 at 09:18 AM.
-
Switching every 30 mins implies that 30 minutes of data loss is acceptable in case of disaster. If that is the case, maximum protection mode is overkill.
Can anyone here describe their experience of the performance hit of running maximum protection mode over a WAN?
When I set up my standby, the network guru expressed a preference for transmitting many small files rather than a few big ones. That would argue against increasing the log size. (I do as hrishy suggested, but with a 10-minute interval.)
"The power of instruction is seldom of much efficacy except in those happy dispositions where it is almost superfluous" - Gibbon, quoted by R.P.Feynman
-
Originally posted by DaPi
Can anyone here describe their experience of the performance hit of running maximum protection mode over a WAN?
Don't even think about it.
-
Originally posted by Axr2
Don't even think about it.
I didn't want to be accused of being negative.
"The power of instruction is seldom of much efficacy except in those happy dispositions where it is almost superfluous" - Gibbon, quoted by R.P.Feynman
-
Originally posted by DaPi
Switching every 30 mins would imply that 30 mins data loss is acceptable in case of disaster. If that is the case, maximum protection mode is over-kill.
Can anyone here describe their experience of the performance hit of running maximum protection mode over a WAN?
I work with financial exchanges, and even here we don't use maximum protection mode at all.
We use either maximum availability or maximum performance.
Maximum availability is our preferred choice; almost 100% of our Oracle databases use this mode.
Of course, we have Veritas Cluster and good old Tru64 :-)
regards
Hrishy