We are now running our databases on NetApp filers via NFS-mounted volumes. I am running Oracle 9.2.0.6 on Solaris 9 (64-bit).
We recently needed to do maintenance on the filers, which required rebooting them.
Out of curiosity, I left all the databases on the mounted volumes running. Of course, we sent out notification anyway that there would be an outage, but I wanted to see how the databases would respond to a reboot of the NetApp filers, so I left them running while the filers were rebooted.
During and after the reboot, ALL the databases continued to run without issues and have been up and running ever since without being restarted.
I don't see any errors in the alert logs either. And while the datafiles for the databases are on the filers, the alert logs are stored on local disk.
So I don't understand. Even if no users were using the system at that very moment, isn't Oracle doing physical reads and writes internally for itself (checkpoints, commits, dirty-buffer threshold writes, etc.)?
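Part of the answer may be that "physical write" is not the same as "the process issued a write." On most Unix systems a write returns as soon as the data lands in the OS page cache; nothing touches the storage until something forces a flush (fsync, sync, or the kernel's flush daemon). A minimal, non-Oracle illustration of that gap (plain shell, nothing Oracle-specific):

```shell
#!/bin/sh
# Illustration only: the write "succeeds" once the data reaches the
# OS page cache; the backing storage is not necessarily touched until
# a sync. If the storage were briefly unavailable between the write
# and the sync, the writer would never notice.
TMP=$(mktemp)
echo "buffered data" > "$TMP"   # returns once the data is cached
sync                            # now the kernel pushes it to storage
cat "$TMP"
rm -f "$TMP"
```

Oracle's own writers (DBWR, LGWR) do force their I/O out, so this only covers writes that happened to fall outside the reboot window; it does not make a longer outage safe.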
Lastly, could it be that the reboot simply happened too fast (it took only 40 seconds to reboot the filers) for Oracle to have attempted any physical reads or writes?
In our experience: Oracle can withstand an 'outage' of the NetApp filers for approximately 3 minutes, PROVIDED it does not attempt to perform a physical write; the I/O is buffered and retried. The majority of the time, though, filer outages have caused most of our databases to crash. Most of those came back up with normal crash recovery, but in a few isolated cases we had to restore from backups (in one case because a datafile had been truncated).
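The "buffer and retry" behavior depends heavily on how the volumes are mounted. On an NFS hard mount, the client retries a stalled RPC indefinitely, so a 40-second filer reboot just looks like a slow I/O; on a soft mount the I/O eventually returns an error to Oracle and the instance is likely to crash. A sketch of what a hard-mounted Oracle datafile volume might look like on Solaris (the hostname, paths, and option values below are illustrative assumptions; check your platform documentation and NetApp's recommendations for your release):

```shell
# Hypothetical /etc/vfstab entry for an Oracle datafile volume over NFS:
# device              device-fsck  mount-point   fstype  pass  boot  options
# filer1:/vol/oradata -            /u02/oradata  nfs     -     yes   rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3,forcedirectio

# Equivalent one-off mount on Solaris (same illustrative values):
mount -F nfs \
    -o rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3,forcedirectio \
    filer1:/vol/oradata /u02/oradata
```

The key option is `hard` (with `nointr`): it trades a hung process during a long outage for the guarantee that an I/O never silently fails partway, which matters far more for datafiles than responsiveness does.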
Your 'experiment' may have turned out fine this time, but next time you may find yourself restoring all of your databases.