DBAsupport.com Forums - Powered by vBulletin

Thread: RMAN and Flashback

  1. #1
    Join Date
    Apr 2008
    Location
    Pennsylvania
    Posts
    15

    RMAN and Flashback

    We are on V 10.2.0.3 on Solaris 10.

    My dilemma:

    We were previously able to back up our production database to tape, via weekly cold backups using third-party software. We did this every Sunday. Then three things happened: our database got way too big (3 TB), our backup window shrank, and we no longer have a viable tape backup device. We were never in archivelog mode, because management decided that, since we have a data warehouse, simply recovering to the previous Sunday and re-running the nightly loads was less painful (and we can generate up to 350G of redo every week). We also have exports of everything that are kept fairly current.

    My glorious leaders have now challenged me to use RMAN and Flashback features, but there is a resource issue, and that leads to my question.

    We have a chunk of space that is 1.5TB available on a licensed, supported storage array apart from the rest of the database files. My plan is to make that space my recovery file destination, such that a copy of the online redo logs, archived logs, RMAN control file backups, and flashback logs will exist in that space. However, there will NOT be enough room for db backup sets or image copies. Remember, the db is currently 3 TB and growing, and my guess is that we will use most of the 1.5 TB recovery area for the other files listed above.

    We DO have access to another storage array (a SAN) with 2TB available, but it does not have a support contract. What we would like to do, if it is possible, is to create rman backup sets on the SAN and spool them off to tape in pieces...that is, we have 588 datafiles and 544 tablespaces (I inherited this mess) and the idea would be to take umpteen tablespace backups each night using the SAN as a "mock" tape device, and then back that up to a real tape library, which is unfortunately old and slow. Management is firm on not spending $$$ on more storage so that we could hold everything necessary in one place. Is it possible to do these backups in a way that would have all the flashback stuff on one array and the backup sets on another? With 2 TB available on the SAN, it is possible we could even hold a full backup set (Oracle's compression ratio is supposed to be between 2 and 4) out there.

    Anyone who has a creative idea on how to solve this problem will have my lasting gratitude. I have not used RMAN for years, and this version is SOOO different. We also want to use the flashback recovery features with RMAN. I am going through a steep learning curve, but could use some help.

  2. #2
    Join Date
    Jun 2000
    Location
    Madrid, Spain
    Posts
    7,447
    What is your objective? Sorry, but after reading the whole message I am not sure what your goal is.

    Do you want faster backups because your backup window is no longer big enough?

    Considering the size of your database you should consider incremental backups. Better still, in 10g you can use block change tracking, an RMAN feature, to get even faster incremental backups. This, however, introduces some small overhead, and so does Flashback. Flashback in fact introduces quite a large overhead.

    Using incremental backups you need less space to store your backups, and less time to perform them.
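
    As a rough sketch (the tracking file path is just an example; pick a location on fast storage):

    SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
         USING FILE '/u01/oradata/PROD/bct.f';

    RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;  # weekly base
    RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;  # daily, changed blocks only

    With change tracking enabled, the level 1 backup reads only the blocks recorded in the tracking file instead of scanning all 3 TB.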
    Last edited by pando; 05-01-2008 at 07:37 PM.

  3. #3
    Join Date
    Nov 2007
    Posts
    4
    > Is it possible to do these backups in a way that would have all the flashback
    > stuff on one array and the backup sets on another?

    Yes, definitely. The flashback logs have to live in the flash recovery area, but you can have RMAN write the backup sets to any arbitrary path, so they could be written to the other storage array.
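
    For example (the mount point below is just a placeholder for your SAN path; %U generates unique piece names):

    RMAN> BACKUP DATABASE FORMAT '/san/rman/PROD_%U';

    or set it once as the default disk destination:

    RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/san/rman/PROD_%U';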

    You also mentioned that you have 544 tablespaces and that this is a data warehouse. Does that mean you have a lot of range-partitioned tables that are loaded once and not updated? If so, make those tablespaces read-only, then back them up once with RMAN. After that they won't affect your backup time, because RMAN won't need to back them up again.
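
    Something like this (the tablespace name is invented for illustration):

    SQL> ALTER TABLESPACE HIST_1998 READ ONLY;
    RMAN> BACKUP TABLESPACE HIST_1998;

    One final backup after the switch, and RMAN can skip that tablespace from then on.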

    Incremental backups are definitely a winner, make sure you use the block change tracking capability.

    The combination of incremental backups, compressed backup sets, backup optimization (see the documentation on that one), and read-only optimizations should reduce the footprint of your backups on your operations.
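
    The relevant one-time settings would be along these lines:

    RMAN> CONFIGURE BACKUP OPTIMIZATION ON;
    RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;

    With backup optimization on, RMAN skips files that are identical to ones already backed up (such as read-only tablespace datafiles), subject to your retention policy.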

    One other thing, I would be wary of putting the online redo logs, archived redo logs, and flashback logs on the same device/disk/volume. You could be setting yourself up for some I/O contention issues.

  4. #4
    Join Date
    Apr 2008
    Location
    Pennsylvania
    Posts
    15
    Thanks to both of you for your replies. Pando, the goal is not so much to reduce the backup window; instead, we would like to have pieces of our database backed up daily, such that everything is current to within one week. The various docs I have read seem to suggest that if I am using RMAN without setting up a flash recovery area (FRA), then I could just do traditional RMAN backups to a tape or disk location. However, the docs also seem to suggest that if you are using RMAN AND Flashback features, all the files should be in one location (we cannot do that due to lack of space). I plucked the following quote from an ebook by Mark Johnson:
    "Typically, you should store in the FRA (Flash Recovery Area) one copy of your control file, one member of each on-line redo log group, and one copy of each archived redo log file. If you are using RMAN, then you should also set the backup destination to be the FRA: a full image copy of the database as well as any full and incremental backup sets should be stored here". I have found similar statements in Oracle notes. That is why I asked if it was possible to backup the database files to a place OTHER THAN the FRA.
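
    For what it's worth, my understanding is that the FRA itself is just two init parameters (the size and path below are only examples for our 1.5 TB array):

    SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 1400G SCOPE=BOTH;
    SQL> ALTER SYSTEM SET db_recovery_file_dest = '/fra' SCOPE=BOTH;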

    Linux, you are correct that we have a bunch (over 20) of large partitioned tables in our warehouse. Most are monthly, with data going back as far as 1998; a handful of others are daily partitions (60- to 90-day retentions), and a few are actually yearly partitions (just enough data to warrant partitions but not monthlies; also, they are queried within a specific year). Our biggest table has 1.2 billion rows, most of the others over 200 million, and the rest between 60 and 200 million. They NEVER archive off any of the monthly partitions, just the dailies.

    This db is a hybrid data warehouse/DSS/low-OLTP database. Your point on making some of the partitioned tablespaces read-only has been vigorously discussed. We will likely do that with SOME of the tablespaces, maybe up to one third or one half. Our problem is that the business keeps re-inventing history. That is, they come to us one day and tell us they want to update "n" rows in one of these tables because the original data received (from outside vendors) was incorrect. Sometimes these updates go back years. So we would have to go back and take fresh backups just for those, and managing the entire mess could become daunting.

    Another thing that happens is that sometimes our largest tables are re-invented entirely. We have one table that is going to have its number of columns DOUBLED in the near future. It is our daily account history table and has 90 days of partitions online; last time I looked, it had 96 million rows, about 1.2M per partition. My boss wants me to consider redesigning it as a mini "star schema" within the database, with one large fact table and a number of dimensions!

    So you can see, this database is in constant flux, and it would likely be difficult to keep all the backups straight to enable a quick recovery. I am the only DBA in this shop, with one other in training. So whatever solution I come up with needs to be as simple and efficient as possible, as I still have a lot of operational responsibilities and only so much bandwidth.

    Thanks again for your responses.

  5. #5
    Join Date
    Jun 2000
    Location
    Madrid, Spain
    Posts
    7,447
    Well to keep things simple I would just

    1. Use incremental backups: one level 0 (complete) backup per week, then level 1 differential incremental backups Monday through Saturday, with block change tracking if possible

    2. Skip read-only tablespaces, with something like

    backup database skip readonly plus archivelog delete input;

    If you need to put a tablespace back in read-write mode, just make sure you don't set it read-only again before you have backed it up. And if you have major application changes, just perform a full backup (get rid of the SKIP READONLY clause from your backup command for a few days)

    3. Don't worry about the FRA; you can store your backups anywhere you want
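
    Putting 1 and 2 together, the scripts could be as simple as (level 0 on Sunday, level 1 the other nights):

    # Sunday
    RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE SKIP READONLY
          PLUS ARCHIVELOG DELETE INPUT;

    # Monday-Saturday
    RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE SKIP READONLY
          PLUS ARCHIVELOG DELETE INPUT;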

  6. #6
    Join Date
    Nov 2007
    Posts
    4
    Maybe you should go to 11g. I understand you can back up pieces of a bigfile tablespace (heh heh).

  7. #7
    Join Date
    Apr 2008
    Location
    Pennsylvania
    Posts
    15
    Pando, great reply, that was what I needed to hear.

    Linux, if I even suggested that they go to 11g right now, I would be dragged out and shot. We are still recovering from the upgrade from 9.2.0.3 to 10.2.0.3. We were held back by our Unix Ops people, who spent the last three years planning and executing their Solaris upgrade from V8 to V10 (10.2 is not supported on Solaris 8). So now we are trying to catch up.

    I have read that 11g RMAN compression is superior to 10g's, so it may be in our future sooner rather than later. BTW, the file size limit on our Solaris boxes appears to be 64G, which is just fine with me.
