Hello, I have an application that has its data partitioned across schemas. By that I mean each schema has identically named tables and columns, but the data in each is unique.
A second application, which is not aware of this setup, needs to be able to query these tables as if they were one large table. One approach would be to make a view that performs the necessary (somewhat complex) query against each schema and UNIONs the results together. The catch is that the list of schemas changes periodically.
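For illustration, the view approach might look like the sketch below. The schema names (SUPPLIER_A, SUPPLIER_B), the table ORDERS, and its columns are made-up placeholders, not the actual application's objects:

```sql
-- Hypothetical consolidated view; schema/table/column names are
-- placeholders for whatever the real application uses.
CREATE OR REPLACE VIEW all_orders AS
SELECT 'SUPPLIER_A' AS source_schema, order_id, amount
  FROM supplier_a.orders
UNION ALL
SELECT 'SUPPLIER_B' AS source_schema, order_id, amount
  FROM supplier_b.orders;
```

UNION ALL (rather than UNION) avoids an unnecessary sort/de-duplication, since the rows in each schema are already distinct. Every new schema means appending another UNION ALL branch, which is exactly the maintenance problem described above.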
What methods are there to manage this? Having the owner of the application add a new UNION block every time a schema is added is not very maintainable. Writing a stored procedure to create the view is more reliable, but the owner is not too excited about having to run it whenever a change is made. A third option is to write a pipelined table function that reads the list of schemas from a configuration file (i.e. an external table), executes the appropriate query via dynamic SQL, and pipelines the rows back. This depends on the config file being kept in sync with the database, but editing text files fits the owner's procedures better.
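A rough sketch of the pipelined-function idea follows. The object/collection types, the SCHEMA_LIST external table, and the ORDERS table with its two columns are all assumptions made up for the example; DBMS_ASSERT.SCHEMA_NAME (available from 10g onward) is used to keep the dynamic SQL from being an injection hole:

```sql
-- Row and collection types for the pipelined function (names assumed).
CREATE TYPE order_row AS OBJECT (
  source_schema VARCHAR2(30),
  order_id      NUMBER,
  amount        NUMBER
);
/
CREATE TYPE order_tab AS TABLE OF order_row;
/
CREATE OR REPLACE FUNCTION all_orders_fn
  RETURN order_tab PIPELINED
IS
  TYPE cur_t IS REF CURSOR;
  c     cur_t;
  v_id  NUMBER;
  v_amt NUMBER;
BEGIN
  -- SCHEMA_LIST is the external table over the text config file.
  FOR s IN (SELECT schema_name FROM schema_list) LOOP
    -- Validate the name before concatenating it into dynamic SQL.
    OPEN c FOR 'SELECT order_id, amount FROM '
               || DBMS_ASSERT.SCHEMA_NAME(s.schema_name) || '.orders';
    LOOP
      FETCH c INTO v_id, v_amt;
      EXIT WHEN c%NOTFOUND;
      PIPE ROW (order_row(s.schema_name, v_id, v_amt));
    END LOOP;
    CLOSE c;
  END LOOP;
  RETURN;
END;
/
```

The second application would then query `SELECT * FROM TABLE(all_orders_fn)`, and adding a schema only requires a new line in the text file behind the external table.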
Are there any other ways this could be approached?
Last edited by jhmartin; 02-09-2007 at 01:48 PM.
Reason: Fix title
I'll raise my hand to that -- yes, it is ugly, although to be fair it isn't so much 'on the fly' as occurring infrequently, when certain significant business events happen (let's say bringing on a new supplier).
The proper thing to do would be to put all the data in one schema and have a field that identifies the supplier, but changing that system is outside my authority. I just need to read from it.
What about moving the data into one table and creating views in each schema, so each schema sees only its own data, based on a USERID (which would be an extra column in the table)?
Using appropriate INSTEAD OF triggers and views WITH CHECK OPTION should do the trick.
Creating a new schema would then mean a new view and a new userid, but no other changes.
The data dictionary views are organized in a similar way.
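A minimal sketch of that suggestion, with an assumed APP_OWNER schema and made-up table/column names: one consolidated table, a per-schema view filtered on USER, and an INSTEAD OF trigger that stamps the supplier on insert so the per-schema applications never see the extra column:

```sql
-- Consolidated table in a central schema (all names are assumptions).
CREATE TABLE app_owner.orders (
  supplier_id VARCHAR2(30) NOT NULL,
  order_id    NUMBER,
  amount      NUMBER
);

-- Each supplier schema gets a view showing only its own rows.
CREATE OR REPLACE VIEW supplier_a.orders AS
  SELECT order_id, amount
    FROM app_owner.orders
   WHERE supplier_id = USER;

-- INSTEAD OF trigger fills in the supplier on insert, so the
-- existing per-schema application needs no changes.
CREATE OR REPLACE TRIGGER supplier_a.orders_ins
  INSTEAD OF INSERT ON supplier_a.orders
  FOR EACH ROW
BEGIN
  INSERT INTO app_owner.orders (supplier_id, order_id, amount)
  VALUES (USER, :NEW.order_id, :NEW.amount);
END;
/
```

The second application then just queries `app_owner.orders` directly; the per-schema views exist only so the first application keeps working unmodified. As noted above, though, this requires restructuring the source system, which may be outside the asker's authority.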