DBAsupport.com Forums - Powered by vBulletin

Thread: How can a full RMAN backup be sped up?

  1. #1
    Join Date
    Aug 2023
    Posts
    10

    How can a full RMAN backup be sped up?

    Is there any way to speed up the execution time of the full RMAN backup?

    With a tape device, can I use the RMAN parameter "CONFIGURE BACKUP OPTIMIZATION ON;"?
    It would appear not: http://pages.di.unipi.it/ghelli/dida...mconc1008.html

    To check the RMAN backup bottleneck, should I run 'backup validate' or are there other faster checks?
    If the time for the BACKUP VALIDATE to tape is about the same as the time for a real backup to tape, then reading from disk is the likely bottleneck.
    If the time for the BACKUP VALIDATE to tape is significantly less than the time for a real backup to tape, then writing to the output device is the likely bottleneck.
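    As a sketch of that diagnostic (channel parameters copied from the real backup script below, so the read path matches the production run), a validate pass could look like:

    ```
    run {
      allocate channel dev1 device type SBT parms 'BLKSIZE=4194304,SBT_LIBRARY=/u01/app/oracle/product/19.0.0/dbhome_1/lib/libosbws.so ENV=(OSB_WS_PFILE=/u01/app/oracle/product/19.0.0/dbhome_1/dbs/osbwsDBNAME.ora)';
      -- Reads and checks every datafile block but writes nothing to tape,
      -- so elapsed time here approximates the disk-read cost alone.
      backup validate database;
      release channel dev1;
    }
    ```

    Comparing this elapsed time against the full backup's elapsed time separates the read side from the write side, as described above.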

    My configuration

    Current time spent: 50 hours.
    Datafiles on ASM.
    Device type: sbt_tape.
    CPU count: 16
    Oracle instance size: 4.4 TB

    Multiplexing
    Default MAXOPENFILES: 8
    Default FILESPERSET: 64

    BLKSIZE = 4 MB (for SBT backups, the output buffer size can be increased with the BLKSIZE channel parameter; the default tape buffer is 256 KB).

    BACKUP_TAPE_IO_SLAVES=true
    LARGE_POOL_SIZE = number_of_channels * (16 MB + (4 * size_of_tape_buffer)) --> 4 * (16 MB + (4 * 4 MB)) = 128 MB
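    The sizing arithmetic above can be sanity-checked with a short script (the formula is the one quoted in this post; the channel count and buffer size are this configuration's values):

    ```python
    def large_pool_size_mb(num_channels: int, tape_buffer_mb: int) -> int:
        # LARGE_POOL_SIZE = number_of_channels * (16 MB + 4 * size_of_tape_buffer)
        return num_channels * (16 + 4 * tape_buffer_mb)

    # This configuration: 4 channels, BLKSIZE = 4 MB tape buffer -> 128 MB
    print(large_pool_size_mb(4, 4))  # 128
    ```

    The same function shows what a change would cost: doubling to 8 channels at the same 4 MB buffer would call for 256 MB of large pool.
    
    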


    RMAN parameters
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 30 DAYS;
    CONFIGURE BACKUP OPTIMIZATION OFF; # default
    CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE SBT_TAPE TO '%F'; # default
    CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 4 BACKUP TYPE TO BACKUPSET;
    CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1; # default
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/u01/app/oracle/product/19.0.0/dbhome_1/lib/libosbws.so,SBT_PARMS=(OSB_WS_PFILE=/u01/app/oracle/product/19.0.0/dbhome_1/dbs/osbwsDBNAME.ora)';
    CONFIGURE COMPRESSION ALGORITHM 'MEDIUM' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE;

    RMAN script
    run {
    allocate channel dev1 device type SBT parms 'BLKSIZE=4194304,SBT_LIBRARY=/u01/app/oracle/product/19.0.0/dbhome_1/lib/libosbws.so ENV=(OSB_WS_PFILE=/u01/app/oracle/product/19.0.0/dbhome_1/dbs/osbwsDBNAME.ora)' MAXPIECESIZE 10G;
    allocate channel dev2 device type SBT parms 'BLKSIZE=4194304,SBT_LIBRARY=/u01/app/oracle/product/19.0.0/dbhome_1/lib/libosbws.so ENV=(OSB_WS_PFILE=/u01/app/oracle/product/19.0.0/dbhome_1/dbs/osbwsDBNAME.ora)' MAXPIECESIZE 10G;
    allocate channel dev3 device type SBT parms 'BLKSIZE=4194304,SBT_LIBRARY=/u01/app/oracle/product/19.0.0/dbhome_1/lib/libosbws.so ENV=(OSB_WS_PFILE=/u01/app/oracle/product/19.0.0/dbhome_1/dbs/osbwsDBNAME.ora)' MAXPIECESIZE 10G;
    allocate channel dev4 device type SBT parms 'BLKSIZE=4194304,SBT_LIBRARY=/u01/app/oracle/product/19.0.0/dbhome_1/lib/libosbws.so ENV=(OSB_WS_PFILE=/u01/app/oracle/product/19.0.0/dbhome_1/dbs/osbwsDBNAME.ora)' MAXPIECESIZE 10G;
    backup as compressed backupset incremental level 0 database include current controlfile format = 'dbname_backup_lv0_%Y%M%D_%t_%U' plus archivelog;
    crosscheck backupset;
    delete expired backupset;
    delete noprompt obsolete;
    show retention policy;
    release channel dev1;
    release channel dev2;
    release channel dev3;
    release channel dev4;
    }
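    To see where the time goes while the script above runs, a query like this against the standard V$SESSION_LONGOPS view (column list trimmed for readability) tracks each channel's progress:

    ```
    SELECT sid, serial#, opname, sofar, totalwork,
           ROUND(sofar / totalwork * 100, 2) AS pct_done
      FROM v$session_longops
     WHERE opname LIKE 'RMAN%'
       AND totalwork > 0
       AND sofar <> totalwork;
    ```

    A channel whose pct_done barely moves between samples points at the slow side (read or write) identified by the BACKUP VALIDATE comparison above.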

    Best regards

  2. #2
    Join Date
    Feb 2024
    Location
    Dubai
    Posts
    1
    In my experience, I don't think you can speed up the execution time of the Full RMAN Backup, but have you found any other way to speed up the process? If so, then please tell me how. I'll be waiting for your answer; thanks!

  3. #3
    Join Date
    Aug 2023
    Posts
    10
    The problem was due to the fact that the database was in one AWS region and the bucket was in another AWS region.
    So
    1) I configured the S3 bucket in the same region as the oracle database.
    2) In addition, I configured the following aws parameters:
    aws configure set s3.max_concurrent_requests 50 --profile
    aws configure set s3.max_queue_size 10000 --profile
    aws configure set s3.multipart_threshold 64MB --profile
    aws configure set s3.multipart_chunksize 32MB --profile
    aws configure set s3.max_bandwidth 100MB/s --profile

    Execution time: 7 hours.
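    For anyone hitting the same issue: before re-running the backup, the region mismatch can be confirmed from the database host with the AWS CLI (the bucket name below is a placeholder, and the metadata check assumes the database runs on EC2):

    ```
    # Region of the target S3 bucket (returns null for us-east-1)
    aws s3api get-bucket-location --bucket my-backup-bucket

    # Region of the EC2 instance hosting the database
    curl -s http://169.254.169.254/latest/meta-data/placement/region
    ```

    If the two values differ, every backup piece crosses regions, which explains the kind of slowdown seen here.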
    Last edited by Yuri; 03-08-2024 at 03:39 AM.

  4. #4
    Join Date
    May 2024
    Posts
    1
    Quote Originally Posted by Yuri View Post
    The problem was due to the fact that the database was in one AWS region and the bucket was in another AWS region.
    So
    1) I configured the S3 bucket in the same region as the oracle database.
    2) In addition, I configured the following aws parameters:
    aws configure set s3.max_concurrent_requests 50 --profile
    aws configure set s3.max_queue_size 10000 --profile
    aws configure set s3.multipart_threshold 64MB --profile
    aws configure set s3.multipart_chunksize 32MB --profile
    aws configure set s3.max_bandwidth 100MB/s --profile

    Execution time: 7 hours.
    Agreed with you!
    Last edited by Pmishti; 06-04-2024 at 06:22 AM.
