I am just going through resizing a few database objects and came across some large datafiles, which reminded me of a thread on here about datafile sizes, but I can't remember the content of that thread.
Is a 10 gig datafile on a Sun Solaris machine too big? If it is too big, why? And if I should change it, how big should I make the datafiles?
Thanks in advance.
If the OS is 64-bit and you have enabled the large file option at the OS level, then it's OK to have a large dbf. Otherwise, it is recommended that no datafile be larger than 2047M. My OS is 32-bit without the large file option, and I do not let the size of any file exceed 1995M. With a 64-bit OS and the large file option, you can have a file as large as 1 terabyte.
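To make those limits concrete, here is a minimal sketch in Python that checks a proposed datafile size against the two caps quoted above (roughly 2047M without large-file support, up to 1 terabyte with it). The function names are my own, not anything from Oracle:

```python
# Sanity-check a datafile size against the platform limits quoted above.
# The 2047M and 1T figures come from the post; everything else is illustrative.
MB = 1024 * 1024

def max_datafile_bytes(large_file_support: bool) -> int:
    """Largest safe datafile size for this platform, per the limits above."""
    if large_file_support:
        return 1024 * 1024 * MB   # roughly 1 terabyte
    return 2047 * MB              # classic 2G filesystem limit

def fits(size_mb: int, large_file_support: bool) -> bool:
    """True if a datafile of size_mb megabytes stays under the cap."""
    return size_mb * MB <= max_datafile_bytes(large_file_support)

# A 10G (10240M) datafile only fits when large-file support is enabled.
print(fits(10240, large_file_support=True))   # True
print(fits(10240, large_file_support=False))  # False
```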
(Assuming you have set up your OS to support 10G files)
Is your tablespace spread out between multiple 10G files? If so, then no, 10G is probably not too big.
Is all the data for your tablespace in one datafile? If so, then yes, 10G is probably too big.
You really have to look at your I/O distribution to answer that question. If that one file is getting pounded and no other file is getting accessed, then you will have disk contention.
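As a rough illustration of that check, the sketch below flags a datafile that accounts for a disproportionate share of total I/O. The counts here are made up; in a real Oracle database you would pull actual read/write figures from V$FILESTAT:

```python
# Hypothetical per-datafile I/O counts (reads + writes). In Oracle you
# would query V$FILESTAT for real numbers; these figures are invented.
io_counts = {
    "data01.dbf": 98000,
    "data02.dbf": 1200,
    "data03.dbf": 900,
    "data04.dbf": 1100,
}

total = sum(io_counts.values())

# Flag any file doing more than half of all I/O -- one file getting
# "pounded" while the others sit idle is a sign of disk contention.
hot_files = [name for name, count in io_counts.items() if count / total > 0.5]
print(hot_files)  # ['data01.dbf']
```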
Halo, thanks for that, it's pretty much what I thought.
Marist, I was going to go through the structure of the datafiles, but I should probably just say that the machine only has 5 physical disks: a system disk and 4 Oracle disks. With so few disks, is it still better to split the datafiles on the same physical disk? Would that help I/O contention?
No, it would not help with I/O. You are better off splitting your datafiles among the different drives.
Which file system you are using is also one of the factors affecting I/O.
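A minimal round-robin sketch of that advice: spread a set of datafiles evenly across the 4 Oracle disks mentioned above. The mount points and file names are hypothetical:

```python
# Spread datafiles across the 4 Oracle disks in round-robin order.
# Mount points and datafile names are hypothetical examples.
disks = ["/u01", "/u02", "/u03", "/u04"]

def place(datafiles):
    """Assign each datafile to a disk, cycling through the disks in order."""
    return {f: disks[i % len(disks)] for i, f in enumerate(datafiles)}

# e.g. five 2G pieces that together replace one 10G datafile
files = [f"users{n:02d}.dbf" for n in range(1, 6)]
for name, disk in place(files).items():
    print(name, "->", disk)
```

With five files on four disks, one disk ends up holding two pieces; the point is simply that no single spindle carries all of the tablespace's I/O.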
Sun Certified Sys. Admin
By splitting the 10G datafile into multiple 2G datafiles you might bypass some of the overhead of managing the large file. However, the performance increase would be minimal.