I'm trying to estimate the SIZE parameter, which can be specified during cluster definition. I want to define a cluster with only one attribute, name VARCHAR2(40), and add a table to this cluster. The average row length in this table is 29 bytes.
The disk block size is 4096 bytes, so the estimated number of bytes required
by an average cluster key and its associated rows is:
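For reference, the usual back-of-the-envelope calculation is space for an average cluster key plus all of the rows that share it. This is only a sketch: the rows-per-key figure below is an assumed placeholder, not something given above.

```python
# Rough sketch of a cluster SIZE estimate: bytes for an average cluster
# key plus the rows associated with it. ROWS_PER_KEY is an assumption.

BLOCK_SIZE = 4096        # disk block size in bytes
AVG_ROW_LEN = 29         # average row length from the question
AVG_KEY_LEN = 40         # worst case for a VARCHAR2(40) key
ROWS_PER_KEY = 10        # assumed average number of rows per key

size_estimate = AVG_KEY_LEN + ROWS_PER_KEY * AVG_ROW_LEN
print(size_estimate)                # -> 330 bytes per cluster key
print(BLOCK_SIZE // size_estimate)  # -> 12 cluster keys per block
```

With these assumed numbers, roughly a dozen cluster keys and their rows would fit in one 4K block; your real rows-per-key average will change that.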
Assuming that you aren't talking about Real Application Clusters, but clustered tables: I have not actually set up clustered tables myself. However, I worked for someone who had a database using clustered tables.
My understanding is that you use clustering to preallocate storage for a table, or a set of tables with a common key, before you load any data. So you have to guess how many records you are prepared to hold, and you should probably guess high, given the large cost of resizing clustered tables.
If this is an online transaction processing (OLTP) system, then you will be able to do one lookup and get all of the data associated with one record in the database. That lookup runs in O(1), i.e. constant, time, because it is based on a hash key.
So if you think that the average size is going to be 4096 bytes, then you want to make sure that every row will fit within that size. There may be a way for Oracle to handle more than 4K of data, but you won't maintain O(1) lookups if that happens very often.
I would use the longest likely row for SIZE and make sure that I had enough slack for double the number of records. With this type of storage you are better off wasting space and over-allocating than running short.
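As a sketch of that over-allocation approach (all of the inputs here are hypothetical placeholders, not values from the question):

```python
# Sketch of the sizing rule above: budget for the longest likely row,
# then leave room for double the expected record count.
# All inputs are assumed placeholder values.

BLOCK_SIZE = 4096
MAX_ROW_LEN = 64          # longest likely row, not the 29-byte average
EXPECTED_ROWS = 100_000   # how many records you expect to store
SLACK_FACTOR = 2          # double it, per the advice above

rows_to_plan_for = EXPECTED_ROWS * SLACK_FACTOR
bytes_needed = rows_to_plan_for * MAX_ROW_LEN
blocks_needed = -(-bytes_needed // BLOCK_SIZE)   # ceiling division
print(rows_to_plan_for)   # -> 200000 rows budgeted
print(blocks_needed)      # -> 3125 blocks
```

The point is simply that the wasted space from generous sizing is cheap compared to running out of preallocated room.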
There are many limitations to using clustered storage, and having to know up front the number of records you are prepared to handle is one of them. Instead of this solution, you might look into using partitioning.