-
Geographically distributed DB architecture
I need to design the database architecture for my client and want to find the best possible approach. This is the scenario:
Software will run in more than 200 locations, which we have called local sites, all connected to a database. There is also a central site with a BI application connected to the same database.
The volume of data is not an issue; the problem is the geographical distribution of the sites.
What is the right approach for this kind of architecture?
1. Go for the cloud option?
Get a powerful server and put the BI application, the database, and the software that will run at the local sites all on the same server?
OR
2. Get 2 servers: one for the central site (BI application) and one for the local sites' software and the database.
The local sites' software would be used as SaaS, with the BI application at the central site connecting to the database remotely?
OR
3. Get 5 servers: one for the central site with the database, and 4 servers for the local sites, distributed geographically.
In this case, use one database with 5 instances, one for each server? One instance for the central site and 4 for the local sites?
I am a bit confused. Any help or ideas would be really appreciated. Thanks.
-
Assume, for the sake of simplicity, that you have a single database with a Data Guard standby, and that you have 200 sites with some larger number of users using a three-tier web app to connect to that database. How much data will need to travel back and forth from each user to the web servers, and then to the database? If it were me, I would have a single database with a standby, plus a pool of application servers and a pool of web servers to handle the application. You can locate web and application servers in a few geographic areas, or one at each site, depending on your budget. But it can be really expensive to replicate data across a wide area, unless each site has its own data and doesn't need data from any other site.
-
Originally Posted by gandolf989
Assume, for the sake of simplicity, that you have a single database with a Data Guard standby, and that you have 200 sites with some larger number of users using a three-tier web app to connect to that database. How much data will need to travel back and forth from each user to the web servers, and then to the database? If it were me, I would have a single database with a standby, plus a pool of application servers and a pool of web servers to handle the application. You can locate web and application servers in a few geographic areas, or one at each site, depending on your budget. But it can be really expensive to replicate data across a wide area, unless each site has its own data and doesn't need data from any other site.
Actually, I am still in the phase of deciding on the database. The volume of data between the server and the database is not much. It's a non-profit organization with offices/sites in 200+ different locations, so the data sent to the database from these sites consists of the donations received each day and the paperwork done for issuing loans.
The central site with the BI application would need to read the data from all these sites to generate various reports.
So I guess the ideal solution is probably to have 5 app/web servers, each connected to around 50 sites, and a central site that gathers information from these 5 servers for reports. The question is whether to have one database with 5 instances, one for each server, or 5 different databases?
-
Originally Posted by saracooper
The question is whether to have one database with 5 instances, one for each server, or 5 different databases?
Wouldn't it be easier to have one database with one schema for all of the sites? You should probably have them log in as a distinct user for each site, create a single set of PL/SQL packages that gives them just the functionality they need, then grant each user execute privileges and create public synonyms for the packages. That way no one can accidentally delete or change the data.
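A minimal sketch of that setup might look like the following. All names here (APP_OWNER, SITE_001, the package and its procedure) are hypothetical, just to illustrate the privilege model: site users get no direct table access, only execute rights on a package owned by the application schema.

```sql
-- Hypothetical: one owning schema (APP_OWNER), one login user per site.
CREATE USER site_001 IDENTIFIED BY "change_me";
GRANT CREATE SESSION TO site_001;

-- A package owned by APP_OWNER exposing only the allowed operations
CREATE OR REPLACE PACKAGE app_owner.donations_api AS
  PROCEDURE record_donation(p_site_id IN NUMBER,
                            p_amount  IN NUMBER,
                            p_date    IN DATE DEFAULT SYSDATE);
END donations_api;
/

-- Each site user can execute the API, but cannot touch the tables directly
GRANT EXECUTE ON app_owner.donations_api TO site_001;

-- A public synonym so callers don't need to prefix the owner schema
CREATE PUBLIC SYNONYM donations_api FOR app_owner.donations_api;
```

Because the package runs with definer's rights by default, the site users never need INSERT/UPDATE/DELETE privileges on the underlying tables at all.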
-
Originally Posted by gandolf989
Wouldn't it be easier to have one database with one schema for all of the sites? You should probably have them log in as a distinct user for each site, create a single set of PL/SQL packages that gives them just the functionality they need, then grant each user execute privileges and create public synonyms for the packages. That way no one can accidentally delete or change the data.
My client seems keen on the idea of having 5 different servers (physical machines). Even in this case, do you think it is a good idea to have just one DB/schema and have the software on all 5 machines connect to the database on one server?
-
It is hard to say how much horsepower you will need for the database. If you have, say,
300-1200 connections uploading 1-20 MB of data a day and a couple of reporting processes
running reports several times a day, then yes, a single database can handle it. A quad-core
server with 8-16 GB of RAM should be plenty powerful. From an Oracle licensing perspective,
memory is cheap; CPU cores are not. Having many disks is also relatively cheap.
You can always do a version two where satellite database servers feed into
a central server, but you might not need that. In general, replication is expensive in terms
of performance and database cost.
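To put those numbers in perspective, here is a back-of-envelope calculation (using the hypothetical bounds from the figures above) in the form of a quick query against DUAL:

```sql
-- Daily upload volume, best and worst case, from the assumed ranges:
--   300 connections x 1 MB  (low end)
--   1200 connections x 20 MB (high end)
SELECT 300  * 1  AS min_mb_per_day,
       1200 * 20 AS max_mb_per_day
FROM dual;
-- min_mb_per_day = 300 MB, max_mb_per_day = 24000 MB (~24 GB)
```

Even the worst case, roughly 24 GB of inserts spread over a whole day, is a light load for a single modestly sized server, which is why one central database is plausible here.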
-
Originally Posted by gandolf989
It is hard to say how much horsepower you will need for the database. If you have, say,
300-1200 connections uploading 1-20 MB of data a day and a couple of reporting processes
running reports several times a day, then yes, a single database can handle it. A quad-core
server with 8-16 GB of RAM should be plenty powerful. From an Oracle licensing perspective,
memory is cheap; CPU cores are not. Having many disks is also relatively cheap.
You can always do a version two where satellite database servers feed into
a central server, but you might not need that. In general, replication is expensive in terms
of performance and database cost.
OK, thank you very much for your advice. It helps me a lot in making these decisions.