I need help figuring out a practical solution for handling big bursts of archive logs.

OS: Red Hat AS 3

From time to time we have a process that writes 20 GB of archive logs in about half an hour. It floods the space reserved for these logs (17 GB) and hangs the database.

Adding disk space to the ASM diskgroup would obviously help, but I want to avoid the downtime of adding new raw disk devices.

I've thought of writing a scheduled job that checks the archiving rate and, if it hits a threshold, triggers an RMAN run to flush (back up and delete) the logs, provided one isn't already running.
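As a rough sketch of that idea, here's the kind of cron-driven shell script I have in mind. The threshold, lock-file path, and the exact sqlplus/rman invocations are assumptions on my part, not tested production code; `V$RECOVERY_FILE_DEST` is the real Oracle view for flash recovery area usage, but the query would need adjusting for a plain `log_archive_dest` setup.

```shell
#!/bin/sh
# Hypothetical sketch: monitor archive destination usage and flush logs
# via RMAN when it crosses a threshold. Threshold, lock file, and the
# sqlplus/rman details below are assumptions.

THRESHOLD=80                   # percent-used trigger point (assumption)
LOCKFILE=/tmp/arch_flush.lock  # crude guard so only one flush runs at a time

# Decision helper: succeed (exit 0) only when usage has reached the
# threshold and no flush is already in progress.
should_flush() {
    used=$1; threshold=$2; lockfile=$3
    [ "$used" -ge "$threshold" ] && [ ! -f "$lockfile" ]
}

# Only attempt the real check when the Oracle tools exist on this host.
if command -v sqlplus >/dev/null 2>&1; then
    USED=$(sqlplus -s / as sysdba <<EOF
set heading off feedback off
select round(space_used/space_limit*100) from v\$recovery_file_dest;
EOF
)
    if should_flush "$USED" "$THRESHOLD" "$LOCKFILE"; then
        touch "$LOCKFILE"
        # Back up the archive logs, then delete them to free space.
        rman target / <<EOF
backup archivelog all delete input;
EOF
        rm -f "$LOCKFILE"
    fi
fi
```

Run every few minutes from cron, this would react faster than a daily backup window, though it still races a burst that fills 17 GB in under the polling interval.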

Am I over-thinking this?