Physical Reads on DWH DB
I'm working on a DWH project, and our IO (physical reads MB/s) seems pretty bad.
We are using a Sun Fire 6800 (20 CPUs) connected via two fibre channels to a SAN (sorry, I don't yet have much more information about the system).
I checked the physical read values in v$sysstat at 10-second intervals and only got an average of something like 200 MB/s. I got this result while executing simple parallel full table scans (FTS) on the DB.
So I was wondering what kind of IO (MB/s) you get on your systems when you execute simple FTS against large tables, and what kind of server/disk system you are using.
I would be grateful if some of you could give me some examples so that I get an idea of whether the IO performance of our server is OK or if I could expect much more from it.
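For reference, the measurement described above can be done with two snapshots of v$sysstat. A minimal sketch (note that the 'physical reads' statistic is counted in database blocks, not bytes, so you must multiply by your block size; 8 KB is assumed here):

```sql
-- First snapshot: cumulative blocks read since instance startup.
SELECT value AS blocks_read
FROM   v$sysstat
WHERE  name = 'physical reads';

-- Take a second snapshot ~10 seconds later while the FTS runs, then:
--   MB/s = (blocks_read_2 - blocks_read_1) * 8192 / 1048576 / 10
-- (replace 8192 with your actual db_block_size)
```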
Do you use RAID 1+0?
How big is the table?
How many data files are allocated to the tablespace?
Are the table's extents spread across all the datafiles?
- Yes, RAID 1+0 is used.
- The segment I used for this test was a monthly partition with a size of 5 GB.
- There are datafiles allocated to that tablespace. All are located on the same RAID 1+0 disk unit, which uses something like 20 disks.
- Yes, I guess that with an extent size of 16 MB (LMT) the extents of that segment are spread over all files. However, since the files are all stored on the same RAID 1+0 unit, I don't think it makes any difference whether that segment is spread over all datafiles or not.
The best way to get IO details about your system is a level 12 trace (event 10046).
Set db_file_multiblock_read_count to the highest value.
Alter session set events '10046 trace name context forever, level 12';
Now do a full table scan on a big table.
In the trace file, look at the p3 value of the wait events; it shows how many blocks were read in a single IO.
Benchmark it as a single process as well as in parallel with 6-8 threads.
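Putting the steps above together, a minimal test session could look like this (a sketch; BIG_TABLE is a placeholder name, and the maximum usable multiblock read count depends on your platform and block size):

```sql
-- Session-level setup for the benchmark described above.
ALTER SESSION SET db_file_multiblock_read_count = 128;

-- Enable the level 12 SQL trace (includes waits and binds).
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

-- Drive the full table scan; BIG_TABLE is a placeholder.
SELECT /*+ FULL(t) */ COUNT(*) FROM big_table t;

-- Turn tracing off; the trace file is written to user_dump_dest.
ALTER SESSION SET EVENTS '10046 trace name context off';
```

In the resulting trace file, the 'db file scattered read' wait lines carry p3 = blocks per read.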
Thanks for your feedback.
Finally our system administrator agreed with me. He was also not able to get more than 200 MB/s at the OS level. It seems that the source of the problem is the fibre channel controller on the server. Our administrator will contact some people from Sun to check how the problem can be fixed.
So, because the problem doesn't come from the DB side, I consider it "solved".
However I got another question:
I have a non-partitioned table using a tablespace which has all datafiles on the same RAID 1+0 system (the same system as described before, with db_file_multiblock_read_count = 64). I run a FTS against that table when no other process is running on the server. Why is this FTS about 4 times faster when I execute it with parallel degree 4 than when it's not executed in parallel? Is there any reason why the disk system should return the data faster when the work is spread over 4 processes rather than handled by only 1?
Last edited by mike9; 01-22-2004 at 03:31 AM.
When you have a non-parallel scan, Oracle is just asking the o/s ...
"get me this set of blocks, now get me this set, now get me this set ..."
With parallelism enabled, there are four Oracle processes asking for different sets of blocks at the same time. If the i/o subsystem scales, as it would with RAID 1+0, then the o/s can meet the demands of the four Oracle processes simultaneously.
So it's really all a matter of getting Oracle to extract the maximum utility from the i/o system by making multiple demands of it simultaneously on behalf of a single user session.
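To illustrate the comparison, the serial and degree-4 scans mentioned above can be forced with hints (a sketch; big_table is a placeholder name):

```sql
-- Serial full table scan: one process issues all the multiblock reads,
-- one after another, so the disks spend time idle between requests.
SELECT /*+ FULL(t) NOPARALLEL(t) */ COUNT(*) FROM big_table t;

-- Parallel degree 4: four slave processes each scan a different range
-- of blocks, keeping more IO requests in flight at once.
SELECT /*+ FULL(t) PARALLEL(t, 4) */ COUNT(*) FROM big_table t;
```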
"db_file_multiblock_read_count = 64 "
I think Oracle would revert to 32 on Solaris.
Verify this one more time.
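One way to verify it (a sketch using standard dynamic views):

```sql
-- What the instance thinks it is using:
SELECT name, value
FROM   v$parameter
WHERE  name = 'db_file_multiblock_read_count';
```

The truly effective read size shows up as p3 (blocks per read) on the 'db file scattered read' waits in a 10046 trace, since the OS or storage layer may silently cap the request size below the parameter setting.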
For your other question (about parallelism) Slimdave answered it.