-
Spooling output to another server
Hello good people, I was wondering if anyone could help.
I'm a junior tester helping with some Disaster Recovery work, and I have to test whether data gets lost during the transfer to a second system when the plug is pulled on the primary system.
I am doing this by spooling the whole database into text files as a baseline, then spooling again after the plug has been pulled, and then comparing the two sets of files to see whether any data has been lost.
I was wondering if there is a way to spool output to another server with more disk space, as I need 200 GB worth of space and that simply isn't available on the database server.
So, to sum up: is there any way to spool output using SQL*Plus to another machine's directory?
All help is greatly appreciated, thank you.
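If the compare-two-spool-sets approach is used, running a plain diff on files that size is painful; a streaming byte-by-byte comparison avoids loading anything large into memory. A minimal sketch (the function name and chunk size are my own choices, not anything SQL*Plus provides):

```python
def files_match(path_a, path_b, chunk_size=1 << 20):
    """Stream two spool files and compare them chunk by chunk.

    Returns None if the files are byte-identical, otherwise the byte
    offset of the first chunk that differs. Reads at most one chunk
    (1 MB by default) of each file at a time, so it copes with very
    large spool files.
    """
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        offset = 0
        while True:
            a = fa.read(chunk_size)
            b = fb.read(chunk_size)
            if a != b:
                return offset  # first mismatching chunk starts here
            if not a:  # both files exhausted at the same point
                return None
            offset += len(a)
```

This only tells you *that* the files differ and roughly where, not which table the difference belongs to, which is one reason the per-table summaries suggested below are easier to work with.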
-
You haven't said which OS you're on (assuming Linux/UNIX), but you could create an NFS mount on your local machine pointing at the other server. Since this goes over the network, performance is something to watch.
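For illustration, the NFS setup might look roughly like this (hostnames, export path, and mount point are all placeholders; check your distro's docs for the exact export options):

```shell
# On the server with the big disk ("bigdisk"), export a directory:
# add a line like this to /etc/exports, then reload the export table:
#   /export/spool  dbserver(rw,sync,no_subtree_check)
exportfs -ra

# On the database server, mount that export locally:
mkdir -p /mnt/spool
mount -t nfs bigdisk:/export/spool /mnt/spool

# SQL*Plus can then spool onto the mounted path, e.g.:
#   SPOOL /mnt/spool/baseline.txt
```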
There is always a better way to do things.
-
Spooling 200 GB of data will take you hours and hours and hours (and add a few more hours), especially over a network via NFS.
You could always connect to the database via a client on the target machine and write locally there.
It just seems a very bad idea to start with. Can't you just grab a row count of all the tables? Much quicker, and it will probably do for your needs.
-
Do you really want to run diff on a 200 GB text file?
I usually 1) compare row counts, 2) compare sums on key number columns,
and 3) get the application users to certify the app and data (e.g. run the balance sheet on node A, then run the balance sheet on node B).
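Checks 1) and 2) boil down to comparing one small summary per table. A minimal sketch, assuming you have already pulled (row count, key-column sum) per table from each node into dictionaries (the function and variable names here are illustrative, not part of any Oracle tool):

```python
def compare_summaries(node_a, node_b):
    """Compare per-table summaries taken from two database nodes.

    Each argument maps table name -> (row_count, key_column_sum).
    Returns a list of human-readable mismatch descriptions; an empty
    list means the two nodes agree on every table checked.
    """
    problems = []
    for table in sorted(set(node_a) | set(node_b)):
        if table not in node_a:
            problems.append(f"{table}: missing on node A")
        elif table not in node_b:
            problems.append(f"{table}: missing on node B")
        elif node_a[table] != node_b[table]:
            problems.append(
                f"{table}: A has {node_a[table]}, B has {node_b[table]}"
            )
    return problems
```

Unlike a diff of raw spool files, a failed check here tells you directly which table to investigate.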
"False data can act only as a distraction. Therefore, I shall refuse to perceive you." - Bomb #20
-
Even spooling 200 GB on the local machine itself will be very time-consuming, so you're better off trying the other comparison approaches suggested here.
-
I think I should be fine, as the disk will be on a SAN connected by a data network.
This should be OK, right?
-
Since you are just testing it out, take a subset of the data and try it with that before attempting the full 200 GB.
-
Originally Posted by Linux_cat
I think I should be fine, as the disk will be on a SAN connected by a data network.
This should be OK, right?
No, you will be waiting forever.
-
Just curious: suppose your 200 GB spool does complete in a timely fashion (which it won't),
and you can run a diff on the output, and diff finds some differences.
What will you do then? How will you figure out which tables are different? Will you have to edit your 200 GB spool file and backtrack from the found difference? Or does finding any difference at all invalidate the whole DR strategy?
Just doesn't seem workable at all to me.
-
You know what, people, maybe it was a stupid idea; I'm still learning, sorry. I think I will do as tomcat suggested first: take a row count of all the tables and a sum of the primary key on all the tables, and output them into a file.
Then rollback/restore to the previous point, let them run the transactions again, this time with someone pulling the plug somewhere, then run the script after everything has switched over to the backup system. Finally, compare the two sets of row counts/primary key sums. If they are different, I will investigate further. Does this sound any better?
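The comparison step of that plan is small enough to script. A sketch, assuming each run spools one line per table in a `TABLE_NAME,ROW_COUNT,KEY_SUM` format (that format, and the function names, are my own assumptions; adjust the parsing to whatever your SELECT actually emits):

```python
def load_summary(path):
    """Parse a spool file of 'TABLE_NAME,ROW_COUNT,KEY_SUM' lines
    into a dict of table -> (row_count, key_sum)."""
    summary = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines SQL*Plus may emit
            table, count, key_sum = line.split(",")
            summary[table] = (int(count), int(key_sum))
    return summary

def diff_summaries(before_path, after_path):
    """Return tables whose (count, sum) changed between the two runs,
    mapping table -> (before_value, after_value)."""
    before = load_summary(before_path)
    after = load_summary(after_path)
    changed = {}
    for table in set(before) | set(after):
        if before.get(table) != after.get(table):
            changed[table] = (before.get(table), after.get(table))
    return changed
```

Any table that shows up in the result is a candidate for closer investigation after the failover.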