Scalability depends mostly on the hardware: the amount of memory and storage you can fit in, plus other factors such as clustering. Oracle can currently scale up to a couple of terabytes. It also depends on the type of application, e.g. DSS or OLTP.
On the other hand, as far as performance is concerned, you could have a very fast system, but if your application is not efficiently designed it can badly bog down performance. So performance depends mostly on design optimization. Enabling other features such as MTS (Multi-Threaded Server) can then scale your user concurrency to a good size.
You probably need to define the word 'scalability' itself in your context. What exactly are you looking for?
You have a product. Do you want it to cater to small as well as big customers? If so, 'small' and 'big' normally get converted into the volume of data that gets generated. Is this what you mean by scalability?
As the customer's data becomes more voluminous, the platforms change. Is this what you mean by scalability?
A larger data size will also mean a larger number of transactions in an OLTP application. So, is the capability of handling more transactions your definition of scalability?
There can be another angle of looking at the word scalability. Nowadays all application front ends tend to talk to the database through ODBC drivers or some such thing. Theoretically, it should hence be possible to make your front end independent of the back-end database, so the product will run on SQL Server as well as Oracle, because in some cases you may not be able to dictate the choice of database to your customers. Is this what you mean by scalability?
And as far as Oracle's inherent scalability is concerned, it is capable of hosting any practical database. As sambavan has pointed out, in quite a few cases Oracle's scalability is limited by the OS. A simple example: even though Oracle can access datafiles larger than 2 GB, it may not be possible if you are running a 32-bit OS.
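The 2 GB ceiling mentioned above typically comes from the file offset being a signed 32-bit integer on such systems, so the largest addressable byte is 2^31 - 1. A quick sanity check on the arithmetic (Python used here only because the thread contains no code of its own):

```python
# Largest file offset representable by a signed 32-bit integer
# (e.g. a 32-bit off_t without large-file support): 2^31 - 1 bytes.
max_offset = 2**31 - 1
max_gib = max_offset / 2**30  # convert bytes to GiB

print(max_offset)          # 2147483647
print(round(max_gib, 3))   # just under 2 GiB
```

With large-file support (64-bit offsets) the OS-imposed limit effectively disappears, which is why the same Oracle release can handle bigger datafiles on a 64-bit platform.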
Right now we are not considering changing platforms, as the databases run on Sun Netra T Solaris boxes. (If need be, we will consider that.)
We are also not worried about operating multiple databases with the same JDBC code.
The following things will happen to our database when we scale:
Data Stored increases
Number of updates (or transactions) per second increases.
Our main concern is the number of updates per second. What determines this? We just want to see how much an Oracle database can handle, at what point we would need additional hardware to support it, and so on. Where can I find such information?
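One way to get a first feel for updates/second on your own workload is simply to time a batch of representative updates. A minimal sketch of that measurement pattern, using Python's in-memory sqlite3 purely as a stand-in for Oracle so the example is runnable; the table, row counts, and update statement are illustrative, and only the timing pattern carries over to a real Oracle/JDBC test:

```python
import sqlite3
import time

# In-memory sqlite3 stands in for Oracle here; against Oracle you would
# run the same timing loop over a JDBC or OCI connection instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, 0)",
                 [(i,) for i in range(1000)])
conn.commit()

n_updates = 10_000
start = time.perf_counter()
for i in range(n_updates):
    conn.execute("UPDATE accounts SET balance = balance + 1 WHERE id = ?",
                 (i % 1000,))
conn.commit()  # one commit at the end; committing per statement is far slower
elapsed = time.perf_counter() - start

updates_per_sec = n_updates / elapsed
print(f"{updates_per_sec:.0f} updates/sec")
```

The absolute number from such a toy run means nothing; what matters is running the same loop on your actual schema and hardware, and watching how the rate moves as you change disks, memory, or commit frequency.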
Okay, if your concerns are limited to this, you need to study the hardware platform in more detail.
Ultimately, updates-per-second performance will depend on the disk I/O speed and the amount of load the CPUs are handling. I suggest going through the TPC-x benchmarks that the hardware vendors regularly publish, for both the server and the disks.
The decision as to when to add more hardware will depend solely on the performance you want. If 100,000 update transactions take 6 hours and that is acceptable to you, you do not need the hardware. But if you need them done in 2 hours, then yes, you need better disks and more horsepower.
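To put numbers on that example, the same 100,000-transaction batch implies very different sustained rates depending on the acceptable window (simple arithmetic, no Oracle specifics assumed):

```python
batch = 100_000  # update transactions in the batch

# Required sustained throughput for each acceptable completion window.
rate_6h = batch / (6 * 3600)   # ~4.6 updates/sec
rate_2h = batch / (2 * 3600)   # ~13.9 updates/sec

print(f"{rate_6h:.1f} updates/sec over 6 h")
print(f"{rate_2h:.1f} updates/sec over 2 h")
```

Tripling the deadline pressure triples the required throughput, which is exactly the kind of gap that translates into faster disks and more CPU horsepower.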
You may check the hardware vendors' sites for the benchmarks. Your criteria all depend solely on the hardware configuration: the memory, the bus size, the disk speed, and primarily the TLB (Translation Lookaside Buffer) size.
We primarily tend to forget to look at the bus size and the TLB, which can be the major choke points on a high-performance system.