CREATE CLUSTER - SIZE parameter
DBAsupport.com Forums - Powered by vBulletin

Thread: CREATE CLUSTER - SIZE parameter

  1. #1
    Join Date
    Apr 2009
    Posts
    1

    CREATE CLUSTER - SIZE parameter

    Hello,

    I'm trying to estimate the SIZE parameter, which can be specified in a cluster definition. I want to define a cluster with a single attribute, name VARCHAR2(40), and add a table to this cluster. The average row length in this table is 29 bytes.

    The disk block size is 4096 bytes, so the estimated number of bytes required
    by an average cluster key and its associated rows is:

    floor(disk_block / avg_row) = floor(4096 / 29) = floor(141.24) = 141 bytes.

    Note that for each cluster key (name) we will have only one associated
    row.

    Am I on the right track, or is this complete nonsense?
    If I'm wrong, how should I estimate the SIZE parameter?
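
    For reference, the setup I describe would look roughly like this (a sketch; the object names and the SIZE value are illustrative, not settled):

    ```sql
    -- Sketch only: names and SIZE value are illustrative.
    CREATE CLUSTER name_cluster (name VARCHAR2(40))
      SIZE 100;                      -- estimated bytes per cluster key + its rows

    CREATE TABLE person (
      name       VARCHAR2(40),
      other_data VARCHAR2(100)
    ) CLUSTER name_cluster (name);

    -- An index cluster also needs a cluster index before any DML is allowed:
    CREATE INDEX name_cluster_idx ON CLUSTER name_cluster;
    ```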


    thank you,
    student

  2. #2
    Join Date
    Nov 2000
    Location
    Pittsburgh, PA
    Posts
    3,999
    Assuming you aren't talking about Real Application Clusters, but clustered tables: I have not actually set up clustered tables myself, but I worked for someone who had a database using them.

    http://www.stanford.edu/dept/itss/do...ments_5001.htm

    My understanding is that you use clustering to preallocate storage for one table, or a set of tables with a common key, before you bring in any data. So you make a guess at how many records you are prepared to hold, and you probably guess high, given the large cost of resizing clustered tables.
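
    A rough sketch of what SIZE controls, as I understand it (figures are illustrative, and block overhead is ignored): SIZE is the estimated number of bytes needed for one cluster key and its associated rows, and Oracle uses it to decide how many cluster keys to place in each block.

    ```sql
    -- Illustrative arithmetic only; real blocks lose some bytes to overhead.
    -- With a 4096-byte block and SIZE 100, roughly:
    SELECT FLOOR(4096 / 100) AS approx_keys_per_block FROM dual;
    -- so Oracle would plan for about 40 cluster keys per block.
    ```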

    If this is an online transaction processing (OLTP) system, then you will be able to do one lookup and get all of the data associated with one record in the database. This lookup runs in O(1), i.e. constant, time, based on a hash key lookup.

    So if you think the average size is going to be 4096 bytes, then you want to make sure that every row will fit within that size. There may be a way for Oracle to handle more than 4K of data, but you won't maintain O(1) lookups if that happens very often.

    I would use the longest likely row for SIZE and make sure I had enough slack for double the number of records. With this type of storage you are better off wasting space and over-allocating than running short.
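
    Following that advice, a conservative hash cluster definition might look like this (a sketch; every number here is an illustrative assumption, not from the thread):

    ```sql
    -- Sketch: longest likely row ~80 bytes, key up to 40 bytes, plus some
    -- per-key overhead, so round SIZE up to 128; expecting ~10,000 keys,
    -- double that for headroom.
    CREATE CLUSTER name_cluster (name VARCHAR2(40))
      SIZE 128          -- bytes reserved per cluster key and its row(s)
      HASHKEYS 20000;   -- 2x the expected key count, for headroom
    ```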

    There are many limitations to using clustered storage; needing to know in advance the number of records you are prepared to handle is one of them. Instead of this solution, you might look into partitioning.
