It depends on the concurrency the tablespace will experience. If the files will be stored on the same disk, there is little difference between using a large datafile or several small datafiles. However, IMHO, fewer datafiles are easier to maintain than many.
If you plan to partition data, it is advisable to separate the partitions on different disks to get some performance gain.
An ounce of action is worth a ton of theory.
Originally posted by alapps If the O/S is 32bit, 4GB files are NOT an option unless a third party volume management software is used...
This is not correct for Windows NT: NTFS file sizes can go beyond 2 GB. For example, we had a 12 GB datafile on NT.
On 32-bit Linux you can use the ext3 filesystem, which includes large-file support.
On BeOS (too good for this IT world) file size was managed by a 64-bit pointer!
I am German :-) and I like to do everything properly, so I use file sizes of 64M, 256M, 512M, 1G, and 2G.
I do not like having a jungle in my db file structure.
But this has no effect on db performance.
Originally posted by Orca777 . . . for Windows NT, NTFS file sizes can go beyond 2 GB. For example, we had a 12 GB datafile on NT.
You may find peripheral considerations for not having too big a size: e.g. PKZIP fails at 4 GB (I use it to refresh my standby across the network). In the future I would use 2 GB as a limit, but then mine is only a 10 GB db.
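The 2 GB limit above can be sketched as a simple sizing calculation. This is a hypothetical Python helper for illustration only, not any Oracle or OS utility; the function name and cap are assumptions:

```python
# Hypothetical helper illustrating a per-datafile size cap (e.g. 2 GB on a
# 32-bit O/S or for PKZIP compatibility). Not an Oracle tool.

GB = 1024 ** 3

def split_into_datafiles(total_bytes, cap_bytes=2 * GB):
    """Return a list of datafile sizes, none larger than cap_bytes."""
    sizes = []
    remaining = total_bytes
    while remaining > 0:
        chunk = min(remaining, cap_bytes)  # last file may be smaller
        sizes.append(chunk)
        remaining -= chunk
    return sizes

# A 10 GB database with a 2 GB cap needs five datafiles:
print(len(split_into_datafiles(10 * GB)))  # prints 5
```

Fewer, larger files mean less to administer (as noted at the top of the thread), while a smaller cap keeps each file within the limits of 32-bit tools.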
"The power of instruction is seldom of much efficacy except in those happy dispositions where it is almost superfluous" - Gibbon, quoted by R.P.Feynman