I used to be a big proponent of gzipping archived logs. 9i changed my opinion, and compressed archivelog backups via RMAN pushed the idea even further away.
When gzipping archived logs, I'd be concerned about recovery involving an unzip step and about the CPU gzip burns to compress/decompress (even though you can gzip -1 and sacrifice a little compression for less CPU drain).
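For anyone who wants to compare, RMAN's native compression does the same job without leaving a .gz lying around to unzip at recovery time. A minimal sketch (assuming 10g or later, where compressed backupsets were introduced; destination, channels and retention are left out):

    RMAN> BACKUP AS COMPRESSED BACKUPSET ARCHIVELOG ALL;

Because the compression lives inside the backupset, a restore just reads the backup piece directly and there's no separate decompress step to script around.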
We got slammed with millions of updates in a 1 hr 15 min window again last night. It put this theory of on-demand archive log flushing via RMAN to the ultimate test.
And it turned out much better than expected!
At peak times we were creating a 100 MB archive log every 7.5 seconds.
Archived logs never occupied more than about 2.5 GB on the diskgroup at any given time. So much for needing to add more disks; this process just used the existing ones more efficiently.
The process archived and flushed a total of 43 GB during that 1 hr 15 min.
I/O was spread evenly across devices. ASM worked flawlessly.
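For anyone curious, the on-demand flush boils down to an RMAN job along these lines (a simplified sketch, not the exact script; scheduling, space-threshold checks and error handling are omitted):

    RMAN> BACKUP ARCHIVELOG ALL DELETE INPUT;

DELETE INPUT removes each archived log from the diskgroup as soon as it's safely in the backupset, which is what kept the footprint bounded at ~2.5 GB even while logs were being cut every few seconds.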
Would be nice to hear some feedback.
Life is a journey, not a destination!