Configuring TDP with an Oracle 10 RAC cluster
Hi there folks..
1st time poster, so please be gentle!
We've come across an issue on a project that I'm involved in, and I'm hoping someone here can help, as I'm WAY out of my league.
I'm on the TSM admin side, but we're working with the DBAs to get this project off the ground.
Essentially, we have a two-node cluster containing some financial data that we wish to back up via TSM.
We currently use TDP for our RMAN backups, as our existing clusters all use the standard 'Passive-Active' setup. The new cluster will be using RAC, so 'Active-Active'.
The data owner in this case wants to have a central 'config file' that's called by the scheduler running on each box.
The file will determine which node in the cluster is to be the 'Master' and hence the node that will run the backup. If for any reason we lose one of the nodes, we can change the file to reflect this.
As we've never tried to use TDP with a RAC setup, I was wondering if there is a basic 'run-through' available.
I've searched the forum, but the search strings (TSM, TDP, RAC) are all too small!
Thanks in advance!
How do you determine which node is the master and which is not in your file?
Why not have it always running on one node, and if that one doesn't respond, use the other?
I don't think the idea suggested to you is a good one.
The backups should run irrespective of which node is the master.
RMAN should back it up and then contact Tivoli to move it to tape, manage the backups, etc.
The info I've got to go on is as follows:
"This is our first implementation of a Real Application Cluster (RAC) system and with RAC both servers, *node1* and *node2*, are active. However, at any time, one of the servers may be down and what I would like to achieve is a situation where the backup will run on the surviving server without manual intervention on your part.
I am not sure what is possible at your end (TSM) but I would like the schedule to run on both servers with our backup script determining the "master node" from a configuration file. If the "master node" went down we would change the configuration file to make the surviving server the "master node". I hope this is possible!"
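Speaking from the TSM side, the scheme the data owner describes could be sketched as a small wrapper that the TSM scheduler invokes on both nodes. Everything here is an assumption for illustration: the config-file path, the `is_master` helper, and the RMAN command line are mine, not from any product.

```shell
#!/bin/sh
# Sketch of a scheduler wrapper run on BOTH RAC nodes. The shared config
# file's first line names the current master node; only the master backs up.

is_master() {
    # $1 = path to the shared config file (first line = master node name)
    master=$(head -n 1 "$1" 2>/dev/null)
    [ "$(hostname)" = "$master" ]
}

# Demonstration with a throwaway file; the real file would live on shared
# storage, e.g. /shared/oracle/backup_master.cfg (an assumed path).
CFG=$(mktemp)
hostname > "$CFG"                 # pretend this node is currently the master

if is_master "$CFG"; then
    # The real wrapper would launch RMAN here, for example:
    #   rman target / cmdfile=/shared/oracle/rac_backup.rcv
    echo "master node: running the backup"
else
    # Exit cleanly so the TSM schedule logs Completed rather than Failed.
    echo "not the master: nothing to do"
fi
rm -f "$CFG"
```

Changing the first line of the shared file flips which node runs the next scheduled backup; both schedules stay defined in TSM.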
I'm also aware that he wants to hold a single dsm.opt, password file etc on shared storage, so the whole cluster is 'centrally managed'.
Is this the right way of doing things?
Obviously I'm not a DBA, so don't want to get involved in heavy Oracle conversations with the team when I don't understand the product enough.
I do not agree with your concept of a master node (if you are coming from the DB2 world, it's a different story).
With RMAN, the Oracle backup utility, when you connect it will connect to a surviving node (of course, if there is a crash, the clusterware comes into play and tries to bring the failed instance back up).
Yes, you can have the TDP agent on the clusterware and manage everything as a single instance; the only requirement is that the archived files of each node can be seen by all the nodes.
Just make sure that the archived log files (aka archived redo logs) are accessible from every node.
You don't have to do anything special for the master node.
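To make that concrete: a typical TDP for Oracle backup drives RMAN through an `sbt_tape` channel pointed at the `tdpo.opt` options file. The paths below are assumptions (install locations vary by platform); treat this as a fragment to adapt, not a finished script.

```shell
# Assumed paths -- check where tdpo.opt lives in your own TDP install.
rman target / <<'EOF'
run {
  allocate channel t1 type 'sbt_tape'
    parms 'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
  backup database plus archivelog;
  release channel t1;
}
EOF
```

Because every node can see all the archived redo logs (shared storage or a cluster file system), the same script works unchanged from whichever node happens to run it.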
Last edited by hrishy; 01-12-2007 at 04:58 AM.
I'll try & knock some heads together.