How much can I expect the SGA and data dictionary of a database containing 137,000 objects (tables & indexes) to grow if it doubles in size? Currently this database's SGA is around 600MB, with the data dictionary ranging from 400 to 500MB. Will there be a one-to-one increase in size (i.e., go from 600MB to 1.2GB) or something smaller?
My suggestion would be: don't blindly increase your SGA. Instead, run Statspack, collect statistics, and then decide whether or not to increase your SGA.
Life is a journey, not a destination!
Running Statspack would be a good idea. And if you are not well versed with Statspack, you can also check the following using the v$ tables:
1. Buffer cache hit ratio ( > 90% would be nice )
2. Dictionary cache hit ratio / row cache hit ratio ( > 90% )
3. Library cache hit ratio ( > 90% )
And you can also have a look at v$waitstat, where you can see waits on different segment types.
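For reference, the three ratios above can be pulled straight from the standard dynamic performance views; a sketch (run as a privileged user):

```sql
-- 1. Buffer cache hit ratio, from v$sysstat
select 1 - (phy.value / (cur.value + con.value)) as buffer_cache_hit_ratio
  from v$sysstat phy, v$sysstat cur, v$sysstat con
 where phy.name = 'physical reads'
   and cur.name = 'db block gets'
   and con.name = 'consistent gets';

-- 2. Dictionary (row) cache hit ratio, from v$rowcache
select 1 - (sum(getmisses) / sum(gets)) as dictionary_cache_hit_ratio
  from v$rowcache;

-- 3. Library cache hit ratio, from v$librarycache
select sum(pinhits) / sum(pins) as library_cache_hit_ratio
  from v$librarycache;
```

Each result is a fraction, so anything above 0.90 corresponds to the "> 90%" targets listed above.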
Sometimes a bad query can cause the whole mess, so you need to identify the poorly running SQL and tune it ASAP. Then you can get a realistic view of the database's statistics. (I bet you may need to bounce the DB after tuning the poorly running SQL.)
Just increasing the SGA may not compensate for poorly running SQL.
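One common way to spot the worst offenders is to query v$sqlarea; a sketch (the disk-reads-per-execution threshold is an arbitrary assumption, adjust to taste):

```sql
-- Candidate "poor" SQL: high disk reads per execution
select sql_text, executions, disk_reads, buffer_gets
  from v$sqlarea
 where executions > 0
   and disk_reads / executions > 1000
 order by disk_reads desc;
```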
Thanks for responding...
I'm aware I can run Statspack or utlbstat/utlestat (this is a 7.3.4 database) after the fact. I'd like to know if I can determine what doubling the number of objects in the database will mean for the SGA and data dictionary prior to increasing the number of objects.
Let me ask this differently... Let's say I need to create a database for an application and I don't have any benchmark numbers from the application developer to initially configure the database. How do I determine the correct parameter settings for the various SGA- and data-dictionary-related init file entries?
I find it hard to believe that this is purely a process of hit-or-miss, or plug in a value and monitor with Statspack or utlbstat/utlestat.
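For what it's worth, the handful of init.ora parameters that drive SGA sizing on 7.3.4 look roughly like this (the values below are purely illustrative assumptions, not recommendations):

```text
# Illustrative 7.3.4 init.ora fragment -- values are assumptions, not advice
db_block_size     = 8192       # fixed at database creation
db_block_buffers  = 20000      # buffer cache = db_block_buffers * db_block_size
shared_pool_size  = 150000000  # library cache AND dictionary (row) cache live here
log_buffer        = 1048576    # redo log buffer
```

Note that since Oracle 7 the dictionary (row) cache is carved out of the shared pool automatically, so shared_pool_size is the main knob affected by dictionary growth.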
A little estimation is required at first; testing should be performed on the development platform, because you'd be testing prior to implementation....righhhhtttt.
OCP 8i, 9i DBA
You are correct...it should be performed on a development platform first. I'm not sure how much of an option that is for me. I need to take a database that currently contains 137,000 objects (tables and indexes) and grow it by another 97,500 objects.
At this point, I'm trying to determine the resource impact to the database as well as the server. My development and test environments for this application don't share similar servers or resources with production. That's why I'm trying to do a little "what if" simulating.
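One rough "what if" you can run against the production instance itself is to measure the current dictionary cache footprint and scale it linearly to the target object count (linear growth is an assumption here, not a guarantee):

```sql
-- Dictionary cache memory currently allocated in the shared pool
select name, bytes
  from v$sgastat
 where name = 'dictionary cache';

-- Naive linear scale-up: 137,000 objects today, 234,500 after growth
select bytes * (234500 / 137000) as est_bytes_after_growth
  from v$sgastat
 where name = 'dictionary cache';
```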
Any help is greatly appreciated....
I cannot imagine you have such a big database. Is it really that big?
How much time does it take to do even a simple selection:
select * from tab
I don't think a database like this can be maintained :-)
Currently the database has great response time. All our stats are > 90%. I'm just trying to determine the impact of creating additional objects for both the database and the server.
Are you ONLY counting TABLES and INDEXES where the owner is not 'SYS' or 'SYSTEM'?
I must say that is a lot of database objects. SYS and SYSTEM objects (ALL_OBJECTS) take up about 23,000. Still an additional 75,000..... are you sure that's just INDEXES and TABLES... man....!!!
As the SYS user, run the following
select count(*) from all_objects
where owner not in ('SYS', 'SYSTEM', 'PUBLIC', 'OUTLN', 'AURORA$JIS$UTILITY$')
and object_type in ('TABLE', 'INDEX');
I'm just curious....
OCP 8i, 9i DBA
I ran your select statement...there are 136,295 objects currently in the database. My number of 137,000 was rounded up based on the tablespace configurations for the owner/schema. This database supports an application called Baan.