Ok guys and gals,
I have the following disk setup:
Service Processor 0
LUN 0: 2 × 32 GB drives in RAID 1
LUN 2: 2 × 32 GB drives in RAID 1
LUN 4: 5 × 32 GB drives in RAID 5
LUN 6: 5 × 32 GB drives in RAID 5
Service Processor 1
LUN 1: 2 × 32 GB drives in RAID 1
LUN 3: 2 × 32 GB drives in RAID 1
LUN 5: 5 × 32 GB drives in RAID 5
LUN 7: 5 × 32 GB drives in RAID 5
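For reference, the usable capacity of this layout can be tallied with a quick sketch (assuming 32 GB per drive and the standard RAID 1/RAID 5 overheads; the numbers below are derived, not quoted from the array):

```python
# Usable capacity per LUN: RAID 1 keeps half the raw capacity (mirroring),
# RAID 5 loses one drive's worth of capacity to parity per group.
DRIVE_GB = 32

def usable_gb(drives, raid_level):
    if raid_level == 1:              # mirrored pair: half the raw capacity
        return drives * DRIVE_GB // 2
    if raid_level == 5:              # one drive's worth of parity per group
        return (drives - 1) * DRIVE_GB
    raise ValueError(raid_level)

# LUNs per service processor: two RAID 1 pairs, two 5-disk RAID 5 groups
per_sp = (usable_gb(2, 1) + usable_gb(2, 1)
          + usable_gb(5, 5) + usable_gb(5, 5))
print(per_sp)       # usable GB on one SP
print(per_sp * 2)   # both SPs combined
```

So each SP presents 320 GB usable, 640 GB across the array.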
My filesystem is set up so that:
LUN 0: Unix directories and swap
LUN 2: /u01 (Oracle executables, control file, etc.)
LUN 1: /u02 (control file, online redo, rollback segments)
LUN 3: /u03 (control file)
LUNs 4-7: four pieces bound together into one huge partition, which is /u04.
SYSTEM, USERS, TOOLS, TEMP, FACT, DATA, and INDX are all here.
At some point, hopefully by February, I will get another terabyte Clariion array and will be able to move everything off this array and reconfigure, but until then I'm stuck with this arrangement.
Can anyone suggest how you might spread the data files out to minimize I/O problems, and include the reasoning behind it?
When you use RAID 5 (the /u04 partition), the data files themselves are spread across multiple physical disks. However, the TEMP data file need not be in /u04; it can go in /u01.
Also, the second members of the redo log groups can be configured in /u03.
Given the chance to reconfigure these disks completely, what would you suggest? I am getting a new array in February and will have a chance to redo everything.
I'm working on a solution and will post it for comments when I've finished figuring it all out. Basically, this is a data warehouse. The fact tables obviously take up the majority of the space (214 GB or so); everything else is pretty small in comparison. I could probably go totally RAID 0+1.
RAID 0+1 would be a good idea. The SYSTEM, USERS, and TOOLS tablespaces probably have low I/O to begin with. I would move them to a RAID 1 disk (to /u03, maybe!), leaving the DATA, INDEX, and FACT tablespaces strictly on RAID 5 disks, since those are the most critical tablespaces.
I would not recommend RAID 0+1 for a data warehouse system, because half of the available disks are used for mirroring. It is a waste.
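To put numbers on that trade-off, here is a sketch comparing usable capacity if all 28 data drives (leaving the 2 hot spares aside) were placed in RAID 0+1 versus 5-disk RAID 5 groups. The regrouping into five full RAID 5 groups is an assumption for the comparison, not the current layout:

```python
DRIVE_GB = 32
DATA_DRIVES = 28

# RAID 0+1: stripe across half the drives, mirror onto the other half.
raid01_usable = DATA_DRIVES // 2 * DRIVE_GB

# RAID 5 in 5-disk groups (the group size used in the current layout):
# each group gives up one drive's worth of capacity to parity.
groups, leftover = divmod(DATA_DRIVES, 5)
raid5_usable = groups * 4 * DRIVE_GB
# The 3 leftover drives could form one more small RAID 5 group (2 × 32 GB),
# left out here to keep the comparison to full-size groups.

print(raid01_usable, raid5_usable)
```

RAID 0+1 yields 448 GB usable against 640 GB for the RAID 5 grouping, which is the "half the disks go to mirroring" objection in concrete terms.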
Wjramsey, could you post how many physical disks and SCSI controllers the box has? That would help me reconfigure your system.
There are currently 2 service processors in the system. There are 2 fibre channel controllers per storage processor for a total of four controllers, but only 2 are used (the others are for failover). Each SP has 4 LUNs assigned to it in the following configuration.
Service Processor 0:
LUN 0: 2 drives in a RAID 1 configuration
LUN 2: 2 drives in a RAID 1 configuration
LUN 4: 5 drives in a RAID 5 configuration
LUN 6: 5 drives in a RAID 5 configuration
For a total of 14 drives
Service Processor 1:
LUN 1: 2 drives in a RAID 1 configuration
LUN 3: 2 drives in a RAID 1 configuration
LUN 5: 5 drives in a RAID 5 configuration
LUN 7: 5 drives in a RAID 5 configuration
For a total of 14 drives
That makes 28 × 32 GB drives, plus 2 hot spares, for 30 drives in total.
The disks were originally set up with a 64 KB stripe, but the array supports 4, 16, 64, 128, and 256 KB stripes.
You can have between 3 and 16 drives in either a RAID 0 or a RAID 5 configuration.
There is one fibre channel arbitrated loop per storage processor, with each loop having 100 MB/s of throughput, for 200 MB/s total.
The rotational speed of the drives is, of course, 10,000 RPM. There is a 1 MB data buffer. The buffer-to-media transfer rates fall between 21.1 and 36.8 MB/s. Access times are 5.7 ms read and 6.5 ms write; rotational latency is 2.99 ms.
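Those figures are self-consistent; for instance, the average rotational latency follows directly from the spindle speed, since on average the head waits half a revolution:

```python
RPM = 10000
rev_time_ms = 60_000 / RPM         # one full revolution in milliseconds
avg_latency_ms = rev_time_ms / 2   # average wait: half a revolution
print(round(avg_latency_ms, 2))    # 3.0 ms, matching the quoted 2.99 ms
```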
Let me know if you need any further information. I know it wouldn't be totally RAID 10; TEMP doesn't need redundancy, and there are similar issues. I'm just wondering what combination of RAID levels to put out there to optimize.
Is that enough information *laugh*
I don't even know that much information about myself, let alone my hard drive configuration... you know your stuff. Aside from the serial numbers on the drives, what other information could you possibly supply?
Let me work it out; I will post my answer very soon.
However, I have a question:
You said, "The buffer to media transfer rates fall between 21.1 and 36.8 MB/s."
What about the disk transfer rate? Is it the same, 21.1 to 36.8 MB/sec?
The following configuration may help you get the best performance from the available disks:
Assumptions:
1. Degree of striping and mirroring as listed below
2. Number of available disks = 30
3. Data warehouse application with many users that mostly require unique scans on its tables or indexes
4. The 2 spare disks are also used for striping
5. 2 controllers, each with 100 MB/s of bandwidth
6. A 128 KB or 256 KB stripe would be better than 64 KB
Volume  Disks  Striping        Mirroring  Used for
U01     2      No              Yes        Unix executables, swap
U02     2      No              Yes        Oracle executables, redo log members A
U03     2      No              Yes        Redo log members B
U04     5      Yes (degree 4)  No         Index-1 tablespace
U05     9      Yes (degree 8)  No         Data-1 tablespace
U06     5      Yes (degree 4)  No         SYSTEM and temporary tablespaces
U07     5      Yes (degree 4)  No         Rollback tablespace
Note 1: Reason for using 9 disks in U05:
Since, as you said, the minimum buffer-to-media transfer rate is around 21 MB/s, you could go as high as 10 disks in one stripe (10 disks × 21 MB/s = 210 MB/s). The more the data is distributed, the more parallel I/O can be engaged for reads.
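One caveat worth checking against that arithmetic: the aggregate disk bandwidth of a wide stripe can exceed what a single loop carries. A sketch, assuming the quoted 21 MB/s minimum per-drive rate and 100 MB/s per fibre channel loop:

```python
PER_DRIVE_MBS = 21   # minimum buffer-to-media rate quoted above
LOOP_MBS = 100       # per fibre channel arbitrated loop

drives_in_stripe = 10
aggregate = drives_in_stripe * PER_DRIVE_MBS
print(aggregate)                  # 210 MB/s offered by the disks

# A single loop tops out at 100 MB/s, so for a purely sequential scan
# roughly 4-5 drives already saturate one loop; beyond that, the loop
# rather than the disks becomes the bottleneck.
print(LOOP_MBS / PER_DRIVE_MBS)
```

So the 9-disk stripe helps most when reads are spread across both loops or are seek-bound rather than bandwidth-bound.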
If your application is a data warehouse with few users that require many range/full table scans, then RAID 3 is a better option than RAID 5.
If you need more disk space for your data, you can use the U07 volume and create the rollback tablespace in U06 instead.
If your database is in NOARCHIVELOG mode, then you do not need U03; use those 2 disks as hot spares. However, if the database is in ARCHIVELOG mode, then you must have second members of the redo logs.
Why do you need a lower degree of striping for the index than for the data?
Indexes require a lower degree of striping because the data in indexes is always smaller than the data in the tables.
Whatever the RAID level, never create the INDEX tablespace in the DATA volume. That will definitely hurt performance.