DBAsupport.com Forums - Powered by vBulletin
  1. #1

    How to automate 11.2 RMAN duplicate process

    I am able to use a shell script to duplicate a target (host 1) to an auxiliary (host 2) from a target backup taken via a TDPO channel.

    In my shop there is currently an automated cloning process that recreates the control file and restores/clones from a user-managed TSM backup.

    The dev db cloning process runs every morning, cloning from the latest Prod backup.

    How do I specify the RMAN duplicate run block so that the duplicate process automatically picks up the latest TARGET database RMAN backup?

    Any advice is welcome!


  2. #2
    I had much success using the active clone feature, which was introduced in 11g. It allows you to clone your DB while it's in use, instead of copying
    it from a backup.

    There are a few things you need to watch out for before you take this approach:

    1) Ensure all your objects are in LOGGING mode (nothing should be in
    NOLOGGING). If transactions are applied against an object during the cloning
    and the object is in NOLOGGING mode, there will be no record of these
    transactions in the .arc logs. Once the clone is complete, it will probably
    be invalid because it is missing those transactions.

    2) As mentioned above, the active clone applies .arc logs against
    your cloned DB to bring it up to date. If the .arc logs are backed up and
    removed, then you will need to do a manual recovery on your DB.
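    To check point 1 before kicking off a clone, something like this can list NOLOGGING segments (a sketch using standard data-dictionary views; adjust the owner filter to your own schemas):

    ```sql
    -- Tables currently in NOLOGGING mode on the target/prod DB
    select owner, table_name
    from   dba_tables
    where  logging = 'NO'
    and    owner not in ('SYS','SYSTEM');

    -- Indexes currently in NOLOGGING mode
    select owner, index_name
    from   dba_indexes
    where  logging = 'NO'
    and    owner not in ('SYS','SYSTEM');
    ```

    Partitioned objects keep the attribute per partition (dba_tab_partitions, dba_ind_partitions), so check those too if you use partitioning.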

    Below is my code for active clone.

    script on target system

    cat active_clone.ksh

    sqlplus '/ as sysdba' <<EOF
    shutdown abort
    startup nomount
    EOF

    rman catalog=rman/rman@prod1 target=sys/sys@prod <<EOT
    connect auxiliary sys/isys1@dev
    duplicate target database to dev from active database nofilenamecheck
    pfile=/u01/app/oracle/product/11gR2/dbs/initdev.ora;
    EOT

    Re-create the temporary tablespace....

    Do any post-refresh steps.

    As for automating the clone, I typically use point-in-time recovery
    for my larger DBs:

    sqlplus '/ as sysdba' <<EOF
    startup nomount pfile=/u01/app/oracle/product/11gR2/dbs/initdev.ora
    EOF

    rman catalog=rman/rman@prod target=sys/sys@prod <<EOT
    connect auxiliary sys/sys
    duplicate target database to dev nofilenamecheck
    until time="TO_DATE('09/24/11,06:58:59','MM/DD/YY,HH24:MI:SS')";
    EOT

    But I too am trying to find out which .arc files are needed so they can be
    applied. I guess I can leave all the .arc files around until after the clone, but
    I do risk filling up my file system. If you figure out how to do this, please share.

    Good luck with the active clone. I would kick it off during the lowest load time on your DB.
    Last edited by BeefStu; 10-05-2011 at 02:57 PM.

  3. #3



    Thank you for the insight!

    1. Active dup. might be a good idea when prod is not constrained by batch processing. But I wonder, does RMAN dup. or backup use SQL*Net? If so, will there be a bottleneck?
    2. As for arc logs: if you are using SAN/DAD for backup storage, perhaps an NFS mount would make all the arch. logs needed for active dup. available.
    3. A PSU fixed the BCT bug for duplication; the fix for AIX will be out this month.
    4. If I have to use backup-based duplication, and on day 1 the Prod RMAN backup completes at 3am and an automated backup-based duplication to dev runs at 7am, how would I automate the UNTIL TIME instead of hard-coding a TO_DATE in it?


  4. #4

    Point 1) yes we schedule our active clone at the slowest time

    Point 2) yes, it has to use the network to transfer the files

    Point 3) Still waiting on the bug fix for my version

    Point 4)

    SQL for the last backup time:

    # From the DB that is being backed up (controlfile view):
    select newest_backup_time from v$backup_set_summary;

    # From the DB that contains your catalog:
    select db_name, newest_backup_time from rman.rc_backup_set_summary where db_name = '???????';
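    The catalog query above can feed the UNTIL TIME clause directly. A minimal sketch of the glue (the sqlplus call is left commented and a stand-in value used, since the connect string and db_name are site-specific assumptions):

    ```shell
    #!/bin/sh
    # Sketch: derive the RMAN "until time" clause from the catalog's
    # NEWEST_BACKUP_TIME instead of hard-coding a TO_DATE.

    # In a real script, pull the value from the catalog, e.g.:
    # BACKUP_END=$(sqlplus -s rman/rman@prod1 <<'EOF'
    # set heading off feedback off pages 0
    # select to_char(max(newest_backup_time),'MM/DD/YY,HH24:MI:SS')
    #   from rc_backup_set_summary where db_name = 'PROD';
    # EOF
    # )
    BACKUP_END="09/24/11,06:58:59"   # stand-in for the query result

    UNTIL_CLAUSE="until time=\"TO_DATE('${BACKUP_END}','MM/DD/YY,HH24:MI:SS')\""
    echo "$UNTIL_CLAUSE"
    # The clause can then be spliced into the duplicate command's heredoc.
    ```

    Whether NEWEST_BACKUP_TIME is actually the safe cutoff is the open question in this thread, so treat the query itself as the part to experiment with.
    
    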

    BTW, I am trying to automate the same exact process but I keep getting
    side-tracked. Ugggg If I get it done before you I will be more than
    happy to share my solution. Hopefully, you can do the same.

    Good luck

  5. #5

    A new link I found

    Hi Beefstu,

    I found a link that details how to automate backup-based duplication:


    It might not be helpful if you still have to dup. from the active db.
    Another "bad" idea: what if you mirror the Prod archive logs to 2 locations?
    arch_dest_1 is on your prod host and can be backed up and deleted;
    arch_dest_2 is on a UNC drive and never gets deleted.
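    That idea could look something like this (paths and parameter values here are assumptions for illustration only):

    ```sql
    -- One local destination that the backup job may clean up,
    -- and one on storage the dev host can also mount, never deleted:
    alter system set log_archive_dest_1='LOCATION=/u01/arch'      scope=both;
    alter system set log_archive_dest_2='LOCATION=/nfs/prod_arch' scope=both;
    ```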

    I am still fishing around and will let you know if I find a nut or two.

    Thanks a LOT for your HELP!!!


  6. #6

    Thanks for the link!! Great find.

    At first glance I see the user is setting the variable NOW before the backup
    and then using it in the UNTIL TIME clause for the restore. I am thinking
    the NOW variable should be set after the backup is complete?

    Secondly, the backup should contain something like this:

    sql "alter system archive log current";

    It forces a log switch and archives the current redo log, so all the redo generated during the backup is available to apply to the clone.
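    A sketch of where that command could sit in the backup script (the connection and surrounding commands are assumptions, not the poster's actual script):

    ```
    rman target / <<EOF
    run {
      backup database;
      sql "alter system archive log current";
      backup archivelog all;
    }
    EOF
    ```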
    Last edited by BeefStu; 10-07-2011 at 08:54 AM.

  7. #7

    Error on automated dup. using RC_BACKUP_SET_SUMMARY.NEWEST_BACKUP_TIME

    Hi Beefstu!

    I still need help on which field to use to derive an accurate UNTIL TIME value for automated duplication.

    I've tried the options below to automate the duplicate, but each time duplication aborted with this error:

    RMAN-05501: aborting duplication of target database
    RMAN-06457: UNTIL SCN (5877516207484) is ahead of last SCN in archived logs (5877516207437)

    1) the max of RC_BACKUP_SET_SUMMARY.NEWEST_BACKUP_TIME

    2) the max datafile backup completion time:

    select max(to_char(completion_time,'YYYY-MM-DD:hh24:mi:ss'))
    from rman.rc_backup_datafile where db_name='&DB';



  8. #8
    Keep in mind this is a manual process for me too, so my suggestions will
    require testing.

    First, post your backup and clone scripts.

    On the backup script I would suggest the following:

    1. After the backup completes, let's have the script sleep 5-10 seconds.

    2. Pull the DB time at this point and print it as the official backup end time
    (under heavy load the OS time and DB time can differ).

    3. Do a second log switch (this makes sure the official backup end time is contained in an archived log).

    Lastly, play around with the times and columns to see which is the correct one. I would have thought it was:

    select db_name, newest_backup_time from rman.rc_backup_set_summary where db_name = '???????';
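    Steps 1-3 might sketch out like this at the tail of the backup script (the sqlplus calls are left as comments since they need a live instance; the date format is an assumption chosen to match the UNTIL TIME format used earlier):

    ```shell
    #!/bin/sh
    # Tail of the backup script: settle, record an official end time,
    # then force a log switch so an archived log covers that time.

    sleep 5   # step 1: let in-flight backup pieces settle

    # step 2: official backup end time; ideally SYSDATE from the DB, e.g.:
    # sqlplus -s '/ as sysdba' <<'EOF'
    # select to_char(sysdate,'MM/DD/YY,HH24:MI:SS') from dual;
    # EOF
    END_TIME=$(date '+%m/%d/%y,%H:%M:%S')
    echo "official backup end time: $END_TIME"

    # step 3: second log switch (needs a running instance)
    # sqlplus -s '/ as sysdba' <<'EOF'
    # alter system archive log current;
    # EOF
    ```
    
    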

  9. #9

    Found a new way

    Thanks Beefstu!

    I went through all kinds of max times and still got the same issue.

    I thought, rather than using UNTIL TIME, I could do the script in steps like this (all run on the auxiliary host):
    1. Do a backup archivelog all (of the target db arch logs)
    2. Grep the max sequence# from rc_backup_archived_log_details
    3. Print max sequence# + 1 as the next_sequence (to be used in the duplicate ... until sequence step)
    4. Drop the auxiliary db
    5. rm all files, in case any are lingering
    6. Startup with initaux.ora (which only contains db_name)
    7. Run the duplicate command
    8. Set the db to noarchivelog mode

    However, these are all done via code blocks in a shell script. I haven't figured out how to do an exception handler yet; currently I have a sleep 60 between each code block, which might not be an ideal way to script things.

    I guess the key point I am not certain about is: there are always many ways to skin a kitty, but is my way correct?
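    One way to replace the fixed sleep 60 with a basic "exception handler" is to chain the steps on exit status. A sketch of just the control flow (the step bodies here are echo stand-ins for the real rman/sqlplus calls, which are site-specific):

    ```shell
    #!/bin/sh
    # Sketch: run each clone step only if the previous one succeeded,
    # instead of sleeping a fixed 60 seconds between code blocks.

    die() { echo "FAILED at: $1" >&2; exit 1; }

    # Stand-ins for the real calls (rman backup archivelog all, catalog
    # query against rc_backup_archived_log_details, drop of the auxiliary,
    # duplicate ... until sequence N):
    step_backup_archivelogs() { echo "backup archivelog all"; }
    step_get_max_sequence()   { echo "42"; }
    step_drop_auxiliary()     { echo "drop auxiliary db"; }
    step_duplicate()          { echo "duplicate until sequence $1"; }

    step_backup_archivelogs || die "backup archivelogs"
    MAX_SEQ=$(step_get_max_sequence) || die "get max sequence"
    NEXT_SEQ=$((MAX_SEQ + 1))
    step_drop_auxiliary || die "drop auxiliary"
    step_duplicate "$NEXT_SEQ" || die "duplicate"
    echo "clone complete; used until sequence $NEXT_SEQ"
    ```

    Each real step would wrap its rman/sqlplus invocation and return that command's exit code, so a failure stops the chain instead of the next block running against a half-built database.
    
    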
