-
How to calculate table size
Today I attended an interview for an Oracle DBA position. The interviewer asked: how do you calculate the size of a table? Say a table has 5 fields with ordinary datatypes (integer and varchar) and one million records. What will the table size or capacity be, in MB or GB?
Is there a formula to calculate the table data size, and how do you check a table's size?
There were also too many questions on performance tuning.
I'm afraid I lost today's interview. Please, could anybody send me links to interview questions on Oracle performance tuning and general DBA topics? I would be very grateful.
My mail id is bsabdulkareem@yahoo.com.
Regards,
kareem
-
analyze table, then:
avg_row_len (in bytes, from dba_tables) * num_rows
Assistance is Futile...
-
sum(bytes) from dba_extents ...
-
This sounds like a predictive question, rather than a measurement one.
My answer: There's a formula in the documentation to help with this, but the best method is to load a representative sample of data and then analyze the table (as above) to measure the usage.
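To make that concrete, here is a rough back-of-the-envelope version of the extrapolation (the overhead factor and the example numbers are illustrative assumptions, not measurements from any real system):

```python
# Rough sketch of the "load a sample, analyze, extrapolate" approach.
# All figures here are hypothetical, not taken from a real database.

def estimate_table_bytes(avg_row_len, target_rows, overhead_factor=1.25):
    """Estimate segment size from ANALYZE output.

    avg_row_len     -- AVG_ROW_LEN from DBA_TABLES after analyzing the sample
    target_rows     -- expected number of rows in production
    overhead_factor -- assumed fudge factor for block headers, PCTFREE free
                       space, and allocated-but-unused extent space
    """
    return int(avg_row_len * target_rows * overhead_factor)

# e.g. the sample analyzes at 244 bytes/row; size for one million rows:
print(estimate_table_bytes(244, 1_000_000))  # 305000000 bytes, ~290 MB
```

The overhead factor is the part you really need the representative sample for; it varies with block size, PCTFREE, and extent sizing, which is why measuring beats the formula.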
-
Hmmmm, everyone get out their picture pages!
Code:
SQL> select avg_row_len from dba_tables
2 where table_name='TRACK'
3 and owner='MPI2';
AVG_ROW_LEN
-----------
244
SQL> select count(*) from mpi2.track;
COUNT(*)
----------
462404
462404
* 244
-------
112826576
SQL> select sum(bytes) from dba_extents
2 where owner='MPI2'
3 and segment_name='TRACK';
SUM(BYTES)
----------
142606336
errrr ummm??
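In case anyone wants the arithmetic on those two figures (the gap is presumably block headers, PCTFREE free space, and allocated-but-not-yet-used extents):

```python
# The two figures from the queries above.
estimated = 244 * 462404   # avg_row_len * row count, from dba_tables
allocated = 142606336      # sum(bytes) from dba_extents

print(estimated)                         # 112826576, matching the multiplication above
print(round(allocated / estimated, 2))   # ~1.26: roughly 26% overhead in the segment
```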
Oracle it's not just a database it's a lifestyle!
--------------
BTW....You need to get a girlfriend whose last name isn't .jpg
-
Do people really ask questions like this in an interview?
We are no longer in an age where DASD is super expensive, and we never break out the slide rule and compass to size a table. It usually falls into two categories: small or large.
K.I.S.S.
I remember when this place was cool.
-
Originally posted by Mr.Hanky
Do people really ask questions like this in an interview?
We are no longer in an age where DASD is super expensive, and we never break out the slide rule and compass to size a table. It usually falls into two categories: small or large.
K.I.S.S.
Quite so, yes. It used to matter when there was a 121(?) extent limit on tables, but like you say, it's not a big issue now. Even if you're sizing for hardware reasons, you're more likely to find that the critical measurement is I/Os per second or sustained bandwidth rather than disk volume.
-
Originally posted by slimdave
Quite so, yes. It used to matter when there was a 121(?) extent limit on tables, but like you say, it's not a big issue now. Even if you're sizing for hardware reasons, you're more likely to find that the critical measurement is I/Os per second or sustained bandwidth rather than disk volume.
I am surprised; I thought if anything you might want to be a little more careful in a DW environment. I could see getting out the micrometer in that case, especially if the data is stagnant and you are dealing with HUGE amounts of data.
We are in an age of cheap DASD, locally managed tablespaces, limitless extents, etc. That is why this "question" is so baffling to me; it sounds like someone with no real-world experience with Oracle.
I remember when this place was cool.