Large database backup
DBAsupport.com Forums - Powered by vBulletin

Thread: Large database backup

  1. #1
    Join Date
    May 2016
    Posts
    3

    Large database backup

    We have a somewhat large database server (about 8TB across 5 databases) with very high activity (125 log switches per hour per database). We are looking to reduce our recovery time, and the large number of archive logs that would need to be applied takes too much time. What is the best solution for quickly recovering large Oracle databases with high activity (CDP solutions? Disk array snapshots? Oracle Flashback? Oracle Data Guard?) Are there other types of solutions?
    Last edited by chainsaw; 05-27-2016 at 01:46 PM.

  2. #2
    Join Date
    Nov 2000
    Location
    Pittsburgh, PA
    Posts
    4,157
    Are you saying that you have 2,000-3,000 log switches spread across 5 databases? If I take the 3,000 number and spread it evenly across all 5 databases, that could be 600 log switches per hour per database, or 10 log switches per minute. Although it seems likely to me that one or two databases account for most of the log switches. I would look at the problem from a couple of different directions. First, how many log groups do you have per database? Are you seeing the database wait because it needs a free online redo log group before it can keep writing redo? If so, you can both increase the size of all of your online redo logs and add additional online redo log groups. You want to make sure that the database has already archived each online redo log well before it is ready to overwrite that file and start reusing it. IMHO, you can always add log groups if you feel like you are seeing the database wait for the next log to be archived.
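
    To check for that kind of wait and add groups, something along these lines works (the group numbers, file paths, and 4G size are only illustrations; adjust them to your environment):

    [CODE]
    -- Current group count, sizes, and status
    SELECT group#, bytes/1024/1024 AS mb, members, status FROM v$log;

    -- Waits caused by log switches (archiving needed / checkpoint incomplete)
    SELECT event, total_waits FROM v$system_event
     WHERE event LIKE 'log file switch%';

    -- Add larger groups, then drop the old small ones once they are inactive
    ALTER DATABASE ADD LOGFILE GROUP 11 ('/u01/oradata/PROD/redo11.log') SIZE 4G;
    ALTER DATABASE ADD LOGFILE GROUP 12 ('/u01/oradata/PROD/redo12.log') SIZE 4G;
    [/CODE]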

    The next part for me is the question of how to reduce mean time to recovery. Given that your databases are so large, I am going to assume that you use Enterprise Edition. Hence you should have a block change tracking file, and you should be doing weekly level 0 incremental backups and daily level 1 incremental backups. Depending on the load on the server, you can run several incremental backups during the day. Doing this means that if you need to restore your database, most of the heavy lifting will be in the restore process, and less work will need to be done during recovery. I assume also that you are doing your backups in parallel, since I also assumed that you are using Enterprise Edition and that you likely have really good hardware.
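
    A sketch of that scheme (the tracking file path and parallelism of 8 are examples, not recommendations):

    [CODE]
    -- In SQL*Plus: enable block change tracking so level 1 backups
    -- only read blocks that changed since the last level 0/1
    ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
      USING FILE '/u01/oradata/PROD/bct.chg';

    # In RMAN: parallel channels, weekly level 0, daily level 1
    CONFIGURE DEVICE TYPE DISK PARALLELISM 8;
    BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;   # weekly
    BACKUP INCREMENTAL LEVEL 1 DATABASE PLUS ARCHIVELOG;   # daily
    [/CODE]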

    You might be better off spreading the databases across different servers and virtualizing the servers, so that if you lose one host, the databases on that host can move over to another host with little trouble. You can have one VM per host if you want, or even 2-3 VMs per host.

    You can also set up a separate VM cluster for Data Guard standby databases, which you can use as part of your backup process if you want. The more hardware you spread your infrastructure across, the less likely it is that the loss of one machine will affect your company.
    this space intentionally left blank

  3. #3
    Join Date
    May 2016
    Posts
    3
    The log switches are actually 125/hr (2,000-3,000 log switches per day). I have corrected my original post. We are using Enterprise Edition. We do a daily full backup. Incrementals during the day might be possible, but the server runs batch jobs for most of the day (18-20 hours) that have a very high change rate (60% change rate daily). Backups are done in parallel (Sun M9000 with 80 cores and 640GB RAM).

    We are looking to improve the restore/recovery time that we see today for the whole database while still being able to do GLR. I'm really looking for high-level approaches to backing up an environment like this. In designing the solution, we would look at a worst-case scenario of a full restore of all data. We are trying to improve the RTO beyond what we are doing today with an RMAN hot backup, which takes about 6 hours to lay down the database and 25 hours to restore and recover 2 days' worth of archive logs.

  4. #4
    Join Date
    Nov 2000
    Location
    Pittsburgh, PA
    Posts
    4,157
    Why do you need to "recover 2 days worth of archive logs" if you do a full
    backup every day? Can you do the following and post the results?

    [CODE]
    rman target=/
    SHOW ALL;
    LIST BACKUP;
    [/CODE]

    You should only need to apply redo from the start of the RMAN backup up until
    the last archive log, or up to whichever archive log you want to stop at. If you back up
    some but not all tablespaces every day, then you would be restoring different
    backups and applying redo to fill in the gaps.
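
    To make that concrete, a minimal point-in-time restore/recover in RMAN looks something like this (the timestamp is only a placeholder):

    [CODE]
    RUN {
      SET UNTIL TIME "TO_DATE('2016-05-27 06:00:00','YYYY-MM-DD HH24:MI:SS')";
      RESTORE DATABASE;
      RECOVER DATABASE;
    }
    ALTER DATABASE OPEN RESETLOGS;
    [/CODE]

    RMAN picks the most recent backup before the UNTIL time and then applies only the redo needed to reach it, which is why a current daily backup should never require 2 days of archive logs.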

    It would also be helpful if you posted your backup scripts.

  5. #5
    Join Date
    May 2016
    Posts
    3
    We do a full backup daily, but need to account for the possibility of a failed or skipped backup.

  6. #6
    Join Date
    Nov 2000
    Location
    Pittsburgh, PA
    Posts
    4,157
    You can set each archive log to be backed up more than once, so each archive log gets backed up 2-3 times. You can use a standby database, possibly with Data Guard, to keep a copy of your database up to date with the primary database. You can purchase Active Data Guard so that the standby database can apply archive logs while being open for read-only queries. If you have a good backup/recovery scheme and you test it often, then you should not find yourself applying multiple days' worth of archive logs if you ever need to restore a database.
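
    The multiple-copies setting is a one-line RMAN configuration (the count of 2 is just an illustration):

    [CODE]
    # Keep 2 copies of every archive log backup piece on disk
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 2;

    # Or, per run, back up only logs not already backed up twice
    BACKUP ARCHIVELOG ALL NOT BACKED UP 2 TIMES;
    [/CODE]

    The second form also protects against the skipped-backup case you mentioned, since any log missed by a failed run is picked up by the next one.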
