DBAsupport.com Forums

Thread: recommended datafile size

  1. #1
    Join Date
    Oct 2001
    Posts
    15

    recommended datafile size

    I've inherited an Oracle 8.1.7 database running Solaris 2.6 on a Sun server. Every tablespace has a single datafile, currently 3-4 GB each, that automatically extends when more space is needed. Is there any downside to having datafiles larger than 2 GB?
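
    For reference, I've been checking the current layout with a query along these lines (just the standard DBA_DATA_FILES dictionary view; the arithmetic converts bytes to MB):

        -- show each datafile's size and whether (and how far) it can autoextend
        SELECT tablespace_name, file_name,
               bytes/1024/1024    AS size_mb,
               autoextensible,
               maxbytes/1024/1024 AS max_mb
          FROM dba_data_files
         ORDER BY bytes DESC;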

  2. #2
    Join Date
    Feb 2000
    Location
    Singapore
    Posts
    1,758
    It depends. If you want to distribute I/O across several disks, create multiple datafiles, as in the example below. If I/O is not a problem, leave it as it is.
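
    For instance, something like this adds a second datafile on another disk (the tablespace name and path are illustrative; adjust them to your own layout):

        -- add a second datafile on a different disk to spread I/O
        ALTER TABLESPACE users
          ADD DATAFILE '/u02/oradata/PROD/users02.dbf' SIZE 2000M;
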
    Sanjay G.
    Oracle Certified Professional 8i, 9i.

    "The degree of normality in a database is inversely proportional to that of its DBA"

  3. #3
    Join Date
    Nov 2002
    Location
    New Delhi, INDIA
    Posts
    1,796
    Other than manageability problems, I don't see any downsides.
    Amar
    "There is a difference between knowing the path and walking the path."

    Amar's Blog  Get Firefox!

  4. #4
    Join Date
    Apr 2001
    Location
    Brisbane, Queensland, Australia
    Posts
    1,203
    Definitely a manageability concern (oh, and an OS max file size concern). I keep them at either 2GB or 4GB. It's easier to find room for a 2/4GB file when moving (or recovering) it than for a 20GB file. If you have to restore and recover a single datafile, smaller is definitely preferred.
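
    For example, restoring and recovering just the one damaged file from an RMAN backup looks something like this (file number 7 and the disk channel are illustrative; the rest of the database can stay open):

        run {
          allocate channel c1 type disk;
          sql 'alter database datafile 7 offline';
          restore datafile 7;
          recover datafile 7;
          sql 'alter database datafile 7 online';
          release channel c1;
        }
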
    OCP 8i, 9i DBA
    Brisbane Australia

  5. #5
    Join Date
    Jan 2001
    Posts
    3,134
    2GB seems to be the unwritten rule. It makes manageability a lot easier, as everyone has said. Nothing worse than looking for an 8GB mount point for one datafile.
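
    If you want to hold files to that rule, you can also cap autoextend so a file can't silently outgrow its mount point (path is illustrative):

        -- cap an autoextending file at 2GB
        ALTER DATABASE DATAFILE '/u01/oradata/PROD/data01.dbf'
          AUTOEXTEND ON NEXT 100M MAXSIZE 2000M;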

    MH

    PS: I suppose having your eyes ripped out with a rusty fork would be worse than looking for an 8GB mount point.
    I remember when this place was cool.

  6. #6
    Join Date
    Dec 2002
    Location
    Bangalore ( India )
    Posts
    2,434
    Of course, there are some downsides to having bigger datafiles.

    1) As everybody said, manageability becomes a concern.

    2) Many will not agree with this point, and it has been debated with no concrete outcome.

    The downside is that the OS will always try to allocate a contiguous run of blocks on the hard disk, and if you create a datafile of 2GB or more, the OS may not be able to allocate it contiguously; instead it gets allocated in small scattered pieces.

    This can have an indirect impact on the overall performance of the DB, since it is easier to read data stored in contiguous blocks on disk than data dispersed across scattered blocks.

    So, from the perspective of physical reads, it is always beneficial to have smaller datafiles; see the sketch below.

    However small the extra time to read scattered blocks rather than contiguous ones may be, it becomes significant in a very large DB, or in a DB with a very high volume of active transactions.
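
    If you follow that reasoning, the implication is to pre-allocate several fixed-size files up front rather than letting one file autoextend (tablespace name and paths are illustrative, and whether the OS really lays each file out contiguously is ultimately up to the filesystem):

        -- pre-create fixed 2GB files instead of one big autoextending file
        CREATE TABLESPACE data01
          DATAFILE '/u01/oradata/PROD/data01a.dbf' SIZE 2000M AUTOEXTEND OFF,
                   '/u02/oradata/PROD/data01b.dbf' SIZE 2000M AUTOEXTEND OFF;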

    Abhay.
    funky...

    "I Dont Want To Follow A Path, I would Rather Go Where There Is No Path And Leave A Trail."

    "Ego is the worst thing many have, try to overcome it & you will be the best, if not good, person on this earth"

  7. #7
    Join Date
    Jun 2000
    Location
    Madrid, Spain
    Posts
    7,447
    Originally posted by abhaysk
    The downside is that the OS will always try to allocate a contiguous run of blocks on the hard disk, and if you create a datafile of 2GB or more, the OS may not be able to allocate it contiguously; instead it gets allocated in small scattered pieces.

    This can have an indirect impact on the overall performance of the DB, since it is easier to read data stored in contiguous blocks on disk than data dispersed across scattered blocks.

    That is only true if you are using plain disks and your queries all use full table scans.

    Fortunately, so far I have not seen a database running on plain disks (except my PC). If the database is running on RAID 0+1 or RAID 5, who cares about randomness; access is random by nature in that configuration (so it can fully use the SCSI or Fibre channels).

  8. #8
    Join Date
    Dec 2002
    Location
    Bangalore ( India )
    Posts
    2,434
    Originally posted by pando
    If the database is running on RAID 0+1 or RAID 5, who cares about randomness; access is random by nature in that configuration (so it can fully use the SCSI or Fibre channels)
    Could you explain this in more detail? I mean, how do scans work on RAID hard disks, and why would random access not really matter much?

    Well, please don't yell at me with STFW and the like. I would rather search Google, but I would be confused about which of the results to trust.

    Any link on that would be good.

    Abhay.
    funky...

    "I Dont Want To Follow A Path, I would Rather Go Where There Is No Path And Leave A Trail."

    "Ego is the worst thing many have, try to overcome it & you will be the best, if not good, person on this earth"

  9. #9
    Join Date
    Jun 2000
    Location
    Madrid, Spain
    Posts
    7,447
    Well, just tell me: how do you ensure contiguous space in a 100-disk RAID 0+1 or RAID 5 array? Contiguous space on which disk?

  10. #10
    Join Date
    Feb 2000
    Location
    Singapore
    Posts
    1,758
    Abhay,
    Even if you have smaller datafiles (say 2GB or less), you have no control over how the blocks/sectors are filled on the hard disk. If there are other files/data on the disk, your new file's blocks will be scattered around the disk.
    The Windows disk defragmenter is a simple example that gives you a visual idea of how data is laid out on a disk.

    Frankly, I wouldn't be much concerned about disk-level fragmentation.
    Sanjay G.
    Oracle Certified Professional 8i, 9i.

    "The degree of normality in a database is inversely proportional to that of its DBA"
