This DB is the backend to Oracle's AS/Portal; it's the MDR/Infrastructure database.

The tuning has always been about CPU reduction; the system has always been CPU bound. Twice Oracle recommended using less memory, and I/O wait has never been the issue it usually is inside a data warehouse. We successfully load tested 2,200 simulated concurrent users on the stack. We were all ready to head home on Friday and launch Sunday night when the application engineers re-deployed their application and the database went to 99% I/O wait. Nothing could be identified as the cause. Eventually we restored our backups from the night before, and to our astonishment the high I/O wait resumed immediately.

I got a Sev 1 TAR opened up and worked it through the night with support in India. They never turned anything up with the DB. Hours later the I/O wait dropped from 99% to a much lower level, but still higher than at any time in the past, and much too high for an idle DB connected to two AS servers.
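
For what it's worth, this is the kind of check I've been running to see what the instance is actually waiting on. A minimal sketch against the standard v$ views; the 'db file%' filter in the second query is just my assumption about where the time is going:

    -- Top wait events since instance startup (times in centiseconds).
    -- Mentally skip the idle events near the top (SQL*Net message from
    -- client, rdbms ipc message, pmon/smon timers, etc.)
    SELECT event, total_waits, time_waited, average_wait
      FROM v$system_event
     ORDER BY time_waited DESC;

    -- What sessions are waiting on right now
    SELECT sid, event, p1, p2, seconds_in_wait, state
      FROM v$session_wait
     WHERE event LIKE 'db file%';

On a supposedly idle instance, 'db file sequential read' / 'db file scattered read' dominating both queries would at least confirm the time really is going to physical I/O and not something else.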

I still think this is I/O subsystem degradation due to something like a broken mirror or a semi-failed controller. The hardware guys dismiss it immediately.
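
One DB-side way I've been trying to back up the hardware theory is to compare average read times per datafile: if one disk, a broken mirror, or a flaky controller is the problem, the files sitting on it should stand out. Rough sketch only (needs timed_statistics=true; the worst-offenders-first ordering is just my guess at what to look for):

    -- Average physical read time per datafile, in centiseconds
    SELECT d.name,
           f.phyrds,
           f.readtim,
           ROUND(f.readtim / GREATEST(f.phyrds, 1), 2) AS avg_read_cs
      FROM v$filestat f, v$datafile d
     WHERE f.file# = d.file#
     ORDER BY avg_read_cs DESC;

If a handful of files on the same device show read times an order of magnitude worse than the rest, that would be hard for the hardware guys to wave away; if everything is uniformly slow, it points more at the controller or the whole array.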

Any ideas? Statspack and Enterprise Manager have been utilized / uploaded / pondered, but to no avail.

I say hardware.