
Thread: VLDB planning

  1. #1
    Join Date
    Feb 2001
    Location
    Adelaide, Australia
    Posts
    159
    Hi all,

    Looking for some advice as I am currently building what will become a two terabyte database.

    Initially I am sizing for 250 GB.

    I am trying to decide how best to size my datafiles.

    The database is an archive store for a live system and hence will not be running in archivelog mode.
    Therefore there is no reason to split the datafiles for the purpose of streaming hot backups to tape.
    Should I go with single datafiles for each tablespace (initially around 60 GB for the data/indexes)?

    Does anyone have a similar experience they could share with me?

    Any advice would be most appreciated.

    Kindest regards,

  2. #2
    Join Date
    Oct 2000
    Location
    Germany
    Posts
    1,185
    What is your strategy in the case of media failure?
    David Knight
    OCP DBA 8i, 9i, 10g

  3. #3
    Join Date
    Sep 2001
    Location
    Makati, Philippines
    Posts
    857
    I just hope that you have enabled the large file option in your file system.
    But wouldn't a 60GB datafile exceed your O.S. maximum file size limit?

  4. #4
    Join Date
    Oct 2000
    Posts
    467
    The OS limit is 1TB by default; this can be set during kernel setup. But creating a 60GB datafile might risk corruption. The largest we use is 1GB.
    Vinit

  5. #5
    Join Date
    Nov 2000
    Location
    greenwich.ct.us
    Posts
    9,092
    Usually, I recommend that you go with multiple files around 2G. In your case, however, you would have too many 2G files to manage effectively.

    For performance reasons, I would still have multiple smaller files. I would look at how many devices are available for data and split the datafiles into (devices * devices) pieces.

    For example, say you have 8 devices (could be RAID or single disk) and you are reserving 3 for redo, rollback, system, temp, etc. (for math's sake). That leaves 5 available for data. If you spread your data (250G) across all 5 devices, you would have about 50G on each device. If you created 50G files, you might run into some hot spots that you can't move because the datafile is unmanageable. I would take each 50G of data and split it 5 ways, into 10G files. That way, if you have a hot file you can relocate it to another device to alleviate I/O contention.
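
    To make that concrete, here is a minimal sketch of such a layout; the mount points, tablespace name, and sizes are assumptions for illustration only, not a prescription:

        -- One tablespace striped as five ~10G datafiles across five
        -- assumed data devices (/u03 .. /u07).
        CREATE TABLESPACE archive_data
            DATAFILE '/u03/oradata/arch/archive_data01.dbf' SIZE 10000M,
                     '/u04/oradata/arch/archive_data02.dbf' SIZE 10000M,
                     '/u05/oradata/arch/archive_data03.dbf' SIZE 10000M,
                     '/u06/oradata/arch/archive_data04.dbf' SIZE 10000M,
                     '/u07/oradata/arch/archive_data05.dbf' SIZE 10000M;

        -- If one file becomes a hot spot, copy it to a quieter device
        -- at the OS level, then repoint the tablespace at the copy:
        ALTER TABLESPACE archive_data OFFLINE;
        -- (copy the file with OS tools here)
        ALTER TABLESPACE archive_data
            RENAME DATAFILE '/u03/oradata/arch/archive_data01.dbf'
                         TO '/u07/oradata/arch/archive_data01.dbf';
        ALTER TABLESPACE archive_data ONLINE;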
    Jeff Hunter

  6. #6
    Join Date
    Feb 2001
    Location
    Adelaide, Australia
    Posts
    159
    So you would have, say, 500 1GB files then? At what size has it been shown that they begin to corrupt?

    David, as for the backup method, that is under discussion also.

    I was thinking block level incremental backups would be appropriate (using RMAN).
    Again, if you have any advice, it would be most appreciated.

    Would a (cumulative) export be ridiculous?
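
    For what it's worth, a minimal sketch of what that might look like in RMAN; the level numbers and schedule are assumptions, and since the database runs in NOARCHIVELOG mode, these backups would have to be taken with the database mounted, not open:

        # hypothetical command file, run as: rman target / cmdfile=...
        # periodic level 0 baseline
        BACKUP INCREMENTAL LEVEL 0 DATABASE;
        # nightly cumulative incremental against that baseline
        BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;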

  7. #7
    Join Date
    Feb 2001
    Location
    Adelaide, Australia
    Posts
    159
    Thanks Jeff,

    I think the 5GB file option is the winner.

    Any advice on backups?

    Rgds


  8. #8
    Join Date
    Oct 2000
    Location
    Germany
    Posts
    1,185
    You mentioned that it is an archive store.

    Does that mean it will be used for reporting? Is there data that could be separated into read-only tablespaces?
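
    (Marking a tablespace read only, and back, is a one-statement change; the tablespace name below is only an example:)

        ALTER TABLESPACE archive_2001_q1 READ ONLY;   -- back it up once, then leave it
        ALTER TABLESPACE archive_2001_q1 READ WRITE;  -- if the data must change again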

    David Knight
    OCP DBA 8i, 9i, 10g

  9. #9
    Join Date
    Feb 2001
    Location
    Adelaide, Australia
    Posts
    159
    David,

    It is an archive store, and will be used for historical reporting purposes.

    However, data will be added from the live system continuously throughout the day, and old data (say, anything over 6 months) will be deleted at night.
    This is done via the application, not replication.

    So read-only tablespaces are not appropriate.
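
    (For the nightly purge, the application presumably runs something along these lines; the table and column names here are made up:)

        DELETE FROM archive_txn
         WHERE entry_date < ADD_MONTHS(SYSDATE, -6);  -- anything over 6 months old
        COMMIT;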


    TIA




  10. #10
    Join Date
    Nov 2000
    Location
    greenwich.ct.us
    Posts
    9,092
    You need to look at your Mean Time To Recovery (MTTR) when deciding about backups. If you have a long MTTR, then you can use BLI backups. If you have a short MTTR, you will probably have to take full hot backups. Of course, that means turning archiving on, which has its own consequences for space and I/O. Personally, I only like to go with BLI backups when I have limited tape capacity and a long MTTR.

    A lot also depends on your backup hardware. Can you back up 250G in a reasonable time period, say 4 hours?

    I also like what dknight is hinting at: maybe a mix of read-only and regular tablespaces. Back up the read-only tablespaces when they change and back up the regular tablespaces on a regular basis.
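
    As a rough sketch of that mixed approach in RMAN (the tablespace name is assumed; SKIP READONLY tells RMAN to leave read-only tablespaces out of the regular run):

        # regular tablespaces, on the normal schedule
        BACKUP DATABASE SKIP READONLY;
        # a read-only tablespace, once, whenever it changes
        BACKUP TABLESPACE archive_2001_q1;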
    Jeff Hunter
