I'm not sure I explained my question properly, let me try it this way.
Let's compare two systems that each generate 3 GB of redo per hour.
One has three 100 MB redo logs and the other has three 1 GB redo logs.
If the transactions were evenly distributed over time, the first system would switch (and archive) a log file every 2 minutes, the second every 20 minutes.
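The intervals above are just the log size divided by the redo rate. A quick back-of-the-envelope check, assuming a perfectly even 3 GB/hr (taken here as 3000 MB/hr) redo rate:

```python
# Sanity-check the log-switch intervals quoted above, assuming redo
# arrives at a constant rate (real systems are bursty, of course).

def switch_interval_minutes(log_size_mb, redo_rate_mb_per_hr=3000):
    """Minutes between log switches if redo fills logs at a constant rate."""
    return log_size_mb / redo_rate_mb_per_hr * 60

print(switch_interval_minutes(100))   # 100 MB logs -> 2.0 minutes
print(switch_interval_minutes(1000))  # 1 GB logs   -> 20.0 minutes
```

The `switch_interval_minutes` helper is just illustrative arithmetic, not anything Oracle provides.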
What's the rationale for sand-bagging I/O? Would it be better to more evenly distribute it over time?
Huh? Define "sand-bagging I/O"?
It all depends on how the system is laid out...
Sure, I/O will be terrible if you share drives between the archive log file systems and the redo logs (raw or file system)... But if you don't, and you have dedicated devices for each, then what does it matter if I/O occasionally spikes for 10 seconds to generate an archive log?
It's better to generate fewer archive logs to reduce checkpoints, but in the end this must be balanced against many other factors (e.g. Data Guard, backups, etc.).
I guess all I'm saying is that a 2 MB log file is generally a bad idea, no matter what the system. It might be OK for, say, a read-only system, or a DW where all loads are direct path and very little redo is generated.