Hi Gurus,
Oracle : 9.0.1
OS : Linux 7.1
Memory : 6 gigs
I have two basic questions:

1. We have newly set up our production server A, and a good backup strategy needs to be worked out for it. The database will run in archivelog mode, and the tape drive is attached to another production server, B. I kept the redo log file size at 10 MB. Based on our business requirements there are roughly 100,000 record deletions, and the same number or more of insertions, and I found that about 65 logs are generated for every 100,000 deletions. I configured an NFS mount point between A and B for moving these archives. The problem: when I was testing deletions of 10,000 records per transaction (no commits in between), the log file size varied on every third delete. My log buffer size is around 16 MB. Why is that? Yet when I commit after every 10,000 records, the log size is constant. Why is that?
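For reference, the archive-shipping step described above (moving archived redo logs from server A onto the NFS mount exported by server B) could be sketched as a small cron-able script. The directory paths and the `.arc` suffix are assumptions, not taken from the post:

```shell
#!/bin/sh
# Sketch: ship archived redo logs from server A to the NFS mount
# that server B exports. Paths below are placeholders.
ship_archives() {
    arch_dir=$1    # e.g. /u01/oradata/PROD/arch on server A
    nfs_dir=$2     # e.g. /mnt/serverB/arch (NFS mount from server B)
    for f in "$arch_dir"/*.arc; do
        [ -f "$f" ] || continue               # no archives yet
        cp -p "$f" "$nfs_dir"/ \
          && cmp -s "$f" "$nfs_dir/$(basename "$f")" \
          && rm -f "$f"                       # remove only after a verified copy
    done
}

# Example invocation (hypothetical paths):
# ship_archives /u01/oradata/PROD/arch /mnt/serverB/arch
```

Copying first and deleting only after `cmp` succeeds avoids losing an archive if the NFS write fails mid-transfer.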
We are also planning cold backups and full exports: one full cold backup every Friday night, plus a daily full export to tape. How do I move these backup and .dmp files to a tape drive that is off the machine? My database is around 70 GB and it's a production server; our management is not interested in hot backups. Please suggest a good strategy so that we don't lose much data if a failure occurs.
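The "get the files onto B's tape" step above could be sketched in two stages: bundle the cold-backup and .dmp files into a tar archive on the NFS mount, then let server B write it to its local tape device. All names here (`serverB`, `/dev/st0`, the staging paths) are assumptions, not from the post:

```shell
#!/bin/sh
# Sketch: bundle backup/.dmp files so server B can stream them to tape.
# In production the archive could be written to the NFS mount, or piped
# straight to B's tape drive with a remote shell, e.g.:
#   tar cf - /u02/backup | ssh serverB 'dd of=/dev/st0 bs=64k'
# (serverB and /dev/st0 are hypothetical names.)
stage_backup() {
    src_dir=$1     # local directory holding the cold backup and .dmp files
    dest_tar=$2    # e.g. /mnt/serverB/backup/prod_20020104.tar
    tar cf "$dest_tar" -C "$src_dir" . \
      && tar tf "$dest_tar" >/dev/null   # verify the archive is readable
}

# Example invocation (hypothetical paths):
# stage_backup /u02/backup /mnt/serverB/backup/prod_weekly.tar
```

Verifying the tar with `tar tf` before the tape write catches a truncated archive while the source files still exist.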


2. RAM: 6 GB, Linux 7.1

How big should the SGA be?
db_cache_size?
log_buffer?
Is there a formula for these?

The machine won't accept an SGA larger than 1.7 GB; it hangs above 1.7 GB.

Please suggest appropriate values.
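As context for the question: the ~1.7 GB ceiling seen here is the classic 32-bit Linux process address-space limit on SGA size, not a memory shortage. A hedged, purely illustrative init.ora fragment that stays under that ceiling might look like the following (the values are examples to show the parameter names, not tuned recommendations):

```
# Illustrative 9i init.ora fragment -- example values only
sga_max_size     = 1600M     # stay under the 32-bit ~1.7 GB ceiling
db_cache_size    = 1200M     # the bulk of the SGA
shared_pool_size = 300M
log_buffer       = 1048576   # 1 MB, in bytes; very large log buffers rarely help
```

There is no universal formula; the usual approach is to size the buffer cache from observed workload and adjust from the cache advisories.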

Any help would be greatly appreciated. It's going live early next week.

Thanks in advance