You can also use shared disk to set up failover at the OS level by setting up a cluster, e.g. Veritas Cluster. (Oracle is installed on the shared disk and accessed from a single node; if that node goes down, failover occurs and the database is accessed from a second node.)
Originally posted by marist89 Not really sure what you are trying to do.
Sure you can share disk between hosts.
I thought that if two Oracle instances were going to hit the same datafiles (rather than dividing up the disk space), you had to have Parallel Server, RAC, or Grid to sort out the conflicts.
Sorry if I am not making much sense here; I rotated off Oracle onto SQL Server firefighting for the last 3 months. Oracle is stable, but they want me to move it to a new platform, and if I do that I want to build something interesting.
I have already proven that a read-only (except for new partition loads) data warehouse needs nothing special for two instances to read the same datafiles; you just use transportable tablespaces to inform the instance that did not perform the data load.
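For what it's worth, that transportable-tablespace handoff can be sketched roughly as below. This assumes the classic exp/imp utilities (Data Pump expdp/impdp can do the same from 10g on), and all tablespace, file, and password names are hypothetical:

```sql
-- On the instance that performed the load, freeze the tablespace:
ALTER TABLESPACE ts_2006m01 READ ONLY;

-- From the OS, export only the data dictionary metadata (no row data):
-- $ exp userid="'sys/password as sysdba'" transport_tablespace=y \
--       tablespaces=ts_2006m01 file=ts_2006m01_meta.dmp

-- On the second instance, which sees the same datafiles over the shared
-- disk, plug the tablespace in -- no datafile copy is needed:
-- $ imp userid="'sys/password as sysdba'" transport_tablespace=y \
--       datafiles='/shared/oradata/ts_2006m01.dbf' file=ts_2006m01_meta.dmp
```

Since the tablespace is read-only on both sides, neither instance ever writes to the shared datafiles, which is why this works without OPS/RAC.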
But this time it's OLTP. I'm wondering whether I have any options to share the datafiles themselves, and not just the disk system, in a read-write OLTP system.
Originally posted by marist89 Two instances same set of datafiles without RAC, no way.
I thought so. It does work for a DW if you only need read-only tablespaces, but then you're sort of forced to make each new partition live in its own tablespace, so it's awkward for frequent updates; and it seems like everyone now wants their DW updated in real time.
Thanks rad_jen, I forgot about Veritas. If it's a lot cheaper than RAC, we could get underway with the shared disk and still have failover capability.
Originally posted by BJE_DBA I thought so. But it does work for DW if you just need read-only tablespace,
Will you please show me some docs (on the net) explaining this functionality, i.e. two or more instances in an OS-clustered environment working on the same database without OPS or RAC? Very interesting.
Technical Lead (Databases)
Thomson Reuters (Markets)
When I ran a terabyte data warehouse for Qwest and one for AT&T, I presented the 'poor man's' approach to management but recommended RAC.
The 'poor man's' approach allowed no updates. We loaded our DW monthly, so 24 partitions on all the tables for a 2-year window was nothing. We were moving to daily processing, so I looked into it and was comfortable with 712 partitions/tablespaces based on the type of queries we typically ran against it. The load automation went like this: add a new partition in a new tablespace, bulk-load it, index it, exchange it (including indexes) for all such fact and dim tables, and analyze it.
Set tablespace to read-only
Export the data dictionary info as a transportable tablespace, and import it into the 2nd machine.
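The load cycle above can be sketched in SQL along these lines. All object names are hypothetical, and the fact table is assumed to be range-partitioned by date with a matching local index:

```sql
-- One tablespace per new partition, so it can later go read-only
-- and be transported on its own.
CREATE TABLESPACE ts_2006m01
  DATAFILE '/shared/oradata/ts_2006m01.dbf' SIZE 10G;

-- Staging table matching the fact table's shape, in the new tablespace.
CREATE TABLE fact_sales_stage TABLESPACE ts_2006m01
  AS SELECT * FROM fact_sales WHERE 1 = 0;

-- Direct-path bulk load, then build the index the exchange will adopt
-- (it must match the fact table's local index definition).
INSERT /*+ APPEND */ INTO fact_sales_stage
  SELECT * FROM ext_sales_feed;
COMMIT;
CREATE INDEX fact_sales_stage_ix ON fact_sales_stage (sale_date)
  TABLESPACE ts_2006m01;

-- Add the new partition, then swap the loaded segment in, indexes and all.
ALTER TABLE fact_sales ADD PARTITION p_2006m01
  VALUES LESS THAN (DATE '2006-02-01') TABLESPACE ts_2006m01;
ALTER TABLE fact_sales EXCHANGE PARTITION p_2006m01
  WITH TABLE fact_sales_stage INCLUDING INDEXES WITHOUT VALIDATION;

-- Analyze, then freeze the tablespace so it can be transported.
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FACT_SALES', partname => 'P_2006M01');
ALTER TABLESPACE ts_2006m01 READ ONLY;
```

After the ALTER TABLESPACE ... READ ONLY, the metadata export/import from the last two steps makes the new partition visible to the second instance over the shared disk.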