
Thread: a good Backup strategy Please, most urgent

  1. #1
    Join Date
    Apr 2002
    Posts
    291
    Hi Gurus,
    Oracle: 9.0.1
    OS: Linux 7.1
    Memory: 6 GB
    I have 2 basic questions:

    1. We have newly set up our production server A, and we need to work out a good backup strategy for it. The database will run in ARCHIVELOG mode, and the tape drive is attached to another production server, B. I kept the log file size at 10 MB and, based on our business requirements, there are around 100,000 record deletions and at least as many insertions; I found about 65 logs are generated for every 100,000 deletions. I configured an NFS mount point between A and B for moving these archives. But here is the problem: when I was testing with deletions of 10,000 records per transaction, on every third delete (with no commits in between) the log file size varies. My log buffer is around 16 MB. Why is that? Yet when I commit after every 10,000 records, the log size is constant. Why is that?
    We are also planning cold backups and full exports. So: one full cold backup every Friday night, and a full export to tape daily. How do we move these backup and .dmp files to the tape drive, which is on a different machine? My DB size is around 70 GB, and it's a production server. Our management is not interested in hot backups. Please suggest a good strategy so that we don't lose much data if any failure occurs.


    2. RAM: 6 GB, Linux 7.1

    How big should the SGA be?
    db_cache?
    log buffer?
    Is there any formula for these things?

    The machine will not accept an SGA of more than 1.7 GB; it hangs above 1.7 GB.

    Please suggest appropriate values.

    Any help would be a great help to me. It's coming live early next week.

    Thanks in advance,
    PNRDBA

  2. #2
    Join Date
    Dec 2001
    Location
    UK
    Posts
    1,684
    Neither of these questions is basic. People's jobs usually hang in the balance if they get these things wrong:

    Backup Issue
    =========
    Sounds like you need to go back to the documentation and read up on backup and recovery:

    http://otn.oracle.com/docs/products/...a96519/toc.htm

    I think a good starting point would be:

    Do a hot backup every night. If you've got too much data to do it in one go, split your backups into manageable chunks and do them on consecutive nights so that every "x" days you'll have a complete backup. Alternatively, use incremental backups via RMAN.
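
    A minimal sketch of that nightly RMAN run might look like the following; the paths and format strings are placeholders for whatever suits your layout:

        rman target / <<EOF
        run {
          allocate channel c1 type disk;
          # level 0 = a full copy; switch to level 1 on weeknights for incrementals
          backup incremental level 0 database format '/u02/backup/df_%U.bkp';
          # sweep the archive logs into the backup too
          backup archivelog all format '/u02/backup/al_%U.bkp';
          release channel c1;
        }
        EOF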

    As you said, you should run in ARCHIVELOG mode. This will allow you to take your fuzzy hot backups and recover them to a consistent state.
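
    If the database isn't in ARCHIVELOG mode yet, the switch is something like this (in 9i, also set log_archive_start=true and a log_archive_dest_1 in your init.ora so the archiver runs automatically):

        sqlplus "/ as sysdba" <<EOF
        shutdown immediate
        startup mount
        alter database archivelog;
        alter database open;
        EOF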

    Keep as many archive logs on disk as possible, as recovery will be faster. Make sure you have at least two copies on tape before you delete any.
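
    Since your tape drive is on server B and you already have the NFS mount, getting the two tape copies could be as simple as the sketch below. The device name, mount point and the .arc extension are assumptions; substitute your own:

        # run on server B, where the tape drive is attached
        cd /mnt/proda_arch             # NFS mount of server A's archive destination
        tar cvf /dev/st0 *.arc         # first copy to tape
        mt -f /dev/st0 rewind
        tar tvf /dev/st0 > /dev/null   # read it back to check the tape is good
        # swap tapes and repeat for the second copy, then delete from disk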

    Run a regular export, nightly if possible. These logical backups are useful for minor repair jobs where a whole database recovery would be too disruptive and some data loss is acceptable. They are absolutely NOT a replacement for a decent backup unless the data is non-volatile.
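
    The nightly export is a one-liner you can put in cron; something like this, with the username/password and paths as placeholders:

        exp userid=system/manager full=y consistent=y \
            file=/u02/export/full_`date +%Y%m%d`.dmp \
            log=/u02/export/full_`date +%Y%m%d`.log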

    Test a number of scenarios (database recovery, datafile recovery, etc.) to check that your backups are OK.
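
    If you use RMAN, you can also have it read the backup pieces without actually restoring anything, which is a cheap sanity check to run between full restore tests:

        rman target / <<EOF
        # reads the backups and confirms they are usable for a restore
        restore database validate;
        EOF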

    SGA Issue
    =======

    The correct settings for your system depend on the amount of data in the DB, the profile of the data use, the type of processing going on in the system, the use of bind variables, etc. In short, you are never going to get an exact answer out of anyone.

    A rule of thumb is to keep the total size of the SGA below 50% of the physical memory of the machine. I would suggest the majority of the memory allocated to the database should be assigned to the db cache. Obviously, if you are only expecting 1 gig of data there isn't much point allocating 3 gig to the cache.
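
    Purely as an illustration for your 6 gig box, given the roughly 1.7 gig ceiling you're hitting (that's the usual SGA limit on 32-bit Linux), a starting init.ora might look like this, with every number up for tuning:

        # hypothetical starting values for an SGA under the ~1.7G limit
        db_cache_size    = 1200M      # the bulk of the memory goes to the buffer cache
        shared_pool_size = 256M       # see the shared pool notes below
        log_buffer       = 1048576    # in bytes; a megabyte or two is usually ample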

    The shared pool is another matter. This really depends on the amount of PL/SQL your system is using and the variety of SQL statements to be parsed by the system. During the testing phase you should be able to get an idea of this. Tom Kyte warns against the use of excessively large shared pools, especially where applications are not using bind variables, since the cleanup of the shared pool can cause major I/O spikes. Start at 50M and play around from there. The aim is to always have about 10% free.

    Hot backups also mean you never shut the instance down for backups, so the caches aren't lost at backup time, which makes life easier. Check this out for some ideas:

    http://asktom.oracle.com/pls/ask/f?p...0_P8_DISPLAYID,F4950_P8_CRITERIA:1550006372719,%7Bshared_pool_size%7D
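
    To keep an eye on that 10% free target, the free memory figure is in v$sgastat:

        sqlplus -s "/ as sysdba" <<EOF
        -- the backslash stops the shell expanding the $ in the view name
        select pool, name, round(bytes/1024/1024) mb
          from v\$sgastat
         where name = 'free memory'
           and pool = 'shared pool';
        EOF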

    You haven't got long to get to grips with this, so get your head down and do some reading. The backup issue should be your priority. Once this is sorted, worry about performance issues.

    Cheers



    Tim...
    OCP DBA 7.3, 8, 8i, 9i, 10g, 11g
    OCA PL/SQL Developer
    Oracle ACE Director
    My website: oracle-base.com
    My blog: oracle-base.com/blog

  3. #3
    Join Date
    Nov 2000
    Location
    greenwich.ct.us
    Posts
    9,092
