-
I presume you already have RMAN in place. If so, and the window when the process runs is fairly predictable, invoke RMAN a couple of times during that window and delete the archived logs as you go.
I have a system that dumps a ton of logs every night, so my policy was to delete all logs older than one week. That job runs four times a day, which keeps the other databases in good shape too. Alternatively, you can delete the archived logs when you take your level 0 backup.
This should release some space for you.
Again, as you may know, running RMAN during peak hours is a bit resource-intensive too :(
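The week-old cleanup policy above maps to a single RMAN command. Here's a minimal cron-able sketch, assuming `rman` connects with OS authentication; the seven-day window comes from the post, and the fallback `echo` is just for dry runs:

```shell
#!/bin/sh
# Delete archived logs that completed more than seven days ago.
# DELETE NOPROMPT skips the interactive confirmation so cron can run it.
RMAN_CMD="DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-7';"

if command -v rman >/dev/null 2>&1; then
    rman target / <<EOF
${RMAN_CMD}
exit;
EOF
else
    # rman not on PATH (e.g. a dry run): just show what would be sent
    echo "would run: ${RMAN_CMD}"
fi
```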
Thanx,
Sam
Life is a journey, not a destination!
-
Here's my log flushing rman script:
-----
spool log to '/u1/app/oracle/backup/flush.log';
run {
allocate channel d1 device type disk format '/rman/logfiles/%d_%u_%p';
BACKUP ARCHIVELOG ALL DELETE INPUT;
release channel d1;
}
spool log off;
exit;
-----
I didn't know you could fire this off more than once. Is that really possible? We had a problem when two of these ran against the same database; I believe it was something like control file locking. We may have had a "backup current controlfile" included in that script. Maybe that was the problem?
I'm going to run it when x number of logs have accumulated (and it's not already running - jury's still out on this).
The rman script removes the logs from the archive destination after they're backed up to /rman/logfiles (an NFS-mounted filesystem).
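One way to guarantee the "not already running" condition is a lock in the shell wrapper, so two flushes can never overlap the way they did in the control-file incident above. A sketch assuming a POSIX shell; the lock path is an example:

```shell
#!/bin/sh
# mkdir is atomic, so it works as a simple mutex across cron invocations.
LOCKDIR=/tmp/rman_flush.lock

if mkdir "$LOCKDIR" 2>/dev/null; then
    trap 'rmdir "$LOCKDIR"' EXIT     # always release the lock on exit
    STATUS=started                   # the rman flush would be invoked here
else
    STATUS=skipped                   # a previous flush is still running
fi
echo "flush $STATUS"
```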
Thanks for your replies and ideas,
Ken
-
Yea, me too. Unfortunately I'm limited to the disks I have in that ASM diskgroup.
Another reason not to use ASM.
Tamil
-
Originally Posted by tamilselvan
Another reason, not to use ASM.
Hmm. Just curious: what are your other reasons not to use ASM?
-
If you have multiple disks, then modify the script:
spool log to '/u1/app/oracle/backup/flush.log';
run {
allocate channel d1 device type disk format '/rman/logfiles/%d_%u_%p';
allocate channel d2 device type disk format '/rman/logfiles/%d_%u_%p';
allocate channel d3 device type disk format '/rman/logfiles/%d_%u_%p';
BACKUP ARCHIVELOG ALL DELETE INPUT;
release channel d1;
release channel d2;
release channel d3;
}
spool log off;
exit;
This lets you allocate multiple channels and speed up your backup process. One caution, though: you only get a real gain if you can spread the channels across different controllers.
When I said "a couple of times" earlier, I meant sequentially, not in parallel. You can parallelize by opening multiple channels. If you have some kind of tape backup system like Tivoli or Veritas, you can send the logs straight to tape rather than backing them up to disk first and then to tape.
Thanx,
Sam
-
Thanks Sam.
I take it (# procs + 1) is a general rule for getting the most out of the machine?
This is a 4-processor server, and I'm concerned about slowing down transactions if I use more than one processor. Maybe I should write the script to use 1 channel during peak usage and 5 channels during off-hours.
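The peak/off-hours split could be driven by the clock in the wrapper script. A sketch; the 08:00–18:00 window is an assumption, while the 1 and 5 channel counts come from the post:

```shell
#!/bin/sh
# Pick an RMAN channel count based on the hour of day.
HOUR=$(date +%H)
HOUR=${HOUR#0}                      # strip leading zero so test(1) is happy

if [ "$HOUR" -ge 8 ] && [ "$HOUR" -lt 18 ]; then
    CHANNELS=1                      # peak hours: leave the other CPUs alone
else
    CHANNELS=5                      # off-hours: procs + 1
fi
echo "allocating $CHANNELS channel(s)"
```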
-Ken
-
I would say four is plenty. If you find the process chewing up resources, you can identify the RMAN session processes and renice them to a lower priority.
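Lowering the priority of the RMAN processes, as suggested, could look like this. A sketch assuming `pgrep` and `renice` are available; on a real box you would match the exact server-process names:

```shell
#!/bin/sh
# Find processes named exactly "rman" and drop their priority.
PIDS=$(pgrep -x rman)

if [ -n "$PIDS" ]; then
    renice -n 10 -p $PIDS           # positive increment = lower priority
    RESULT="reniced: $PIDS"
else
    RESULT="no rman processes found"
fi
echo "$RESULT"
```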
Sam
-
OK.
Here's the script I'm going to run (via cron/sh) to determine whether I need to flush. It's hard-coded to 10 logs (about 1 GB).
set serveroutput on
set feedback off
declare
    rman_channels_open number;
    num_logs           number;
    flush_logs         varchar2(10);
begin
    -- is an RMAN channel already connected?
    select count(*)
      into rman_channels_open
      from v$session
     where program like 'rman%'
       and client_info like 'rman channel%';

    if rman_channels_open = 0 then
        select count(*)
          into num_logs
          from v$archived_log
         where status = 'A';

        if num_logs > 10 then
            flush_logs := 'FLUSH';   -- flush
        else
            flush_logs := 'CYCLE';   -- not enough logs, flush next cycle
        end if;
    else
        flush_logs := 'RUNNING';     -- flush or backup already running
    end if;

    dbms_output.put_line(flush_logs);
end;
/
exit;
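A cron wrapper could capture the single word this block prints and branch on it. A hedged sketch; `check_flush.sql` (the PL/SQL saved to a file) and `flush.rcv` (the RMAN flush script) are assumed names:

```shell
#!/bin/sh
# Run the check through SQL*Plus and act on FLUSH / CYCLE / RUNNING.
if command -v sqlplus >/dev/null 2>&1; then
    DECISION=$(sqlplus -s "/ as sysdba" @check_flush.sql | tr -d '[:space:]')
else
    DECISION=CYCLE                  # no sqlplus here (dry run): wait
fi

case "$DECISION" in
    FLUSH)   echo "starting flush" ;;   # would run: rman target / @flush.rcv
    RUNNING) echo "flush or backup already running" ;;
    *)       echo "not enough logs, next cycle" ;;
esac
```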
-
It looks promising. Triggering at 10 archived logs may be overkill, but it depends on your archived log size. If each archive is only 5 MB, then 10 is overkill; on the other hand, if each is 256 MB, I'd say it's okay, though it would make me wonder why one would set the log size to 256 MB.
Good luck.
Sam
-
Shell script, possibly
I have a shell script that runs every 15 minutes and zips up the logs. It first checks whether they are actively in use, so you don't zip one that is currently being archived.
I can send it to anyone who would like it.
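The zip job described could be sketched like this, assuming `fuser` is available to detect a log still being written; the directory and `.arc` suffix are examples, not from the post:

```shell
#!/bin/sh
# Compress archived logs that no process currently has open.
ARCH_DIR=${ARCH_DIR:-/rman/logfiles}
COMPRESSED=0

for f in "$ARCH_DIR"/*.arc; do
    [ -f "$f" ] || continue                 # glob matched nothing
    if ! fuser "$f" >/dev/null 2>&1; then   # no one is writing it
        gzip "$f" && COMPRESSED=$((COMPRESSED + 1))
    fi
done
echo "compressed $COMPRESSED log(s)"
```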