DBAsupport.com Forums

Thread: Shared server memory usage

  1. #1
    Join Date
    Dec 2001
    Posts
    337

    Shared server memory usage

    Hi all,

    We have Oracle 10.1.0.3 installed on a Red Hat Linux box. The server has 4GB of physical memory. Oracle has been configured to use shared servers (it wasn't configured by me; I would have used dedicated servers). We have 25 shared servers. Now when I run top I see:

    Code:
    195 processes: 190 sleeping, 3 running, 2 zombie, 0 stopped
    CPU states:  cpu    user    nice  system    irq  softirq  iowait    idle
               total    4.1%    0.0%   65.2%   0.0%     0.7%   29.0%    0.7%
               cpu00    3.7%    0.0%   64.6%   0.0%     1.5%   28.5%    1.5%
               cpu01    4.5%    0.0%   65.9%   0.0%     0.0%   29.5%    0.0%
    Mem:  3921888k av, 3899804k used,   22084k free,       0k shrd,    7960k buff
                       2963208k actv,  566312k in_d,   55244k in_c
    Swap: 2044072k av,    5168k used, 2038904k free                 3408592k cached

      PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
    29453 oracle    16   0 1110M 1.1G 1107M R    28.6 28.9  12:38   0 oracle
        8 root      16   0     0    0     0 SW   28.2  0.0 218:07   0 kscand
        7 root      16   0     0    0     0 SW    3.0  0.0 212:15   1 kswapd
    29455 oracle    15   0 1041M 1.0G 1030M R     2.2 27.0   4:19   0 oracle
    29449 oracle    15   0 1136M 1.1G 1124M D     1.8 29.5  33:03   0 oracle
    25029 root      20   0  1244 1244   880 R     1.5  0.0   0:00   0 top
    29451 oracle    15   0 1124M 1.1G 1112M D     1.1 29.2  35:58   1 oracle
    29457 oracle    15   0 1022M 1.0G 1019M D     0.7 26.6   4:06   0 oracle
    As can be seen, most of the memory is being used by Oracle processes; these were traced to shared server processes (some using 1.1G!). Besides changing to dedicated mode, what areas need to be looked at in order to reduce this overhead?
    SGA_TARGET is set to 1.4GB, and PGA_AGGREGATE_TARGET to 486MB.

    Any advice will be much appreciated,

    Thanks in advance,
    Chucks
    Last edited by davey23uk; 05-02-2007 at 12:01 PM.

  2. #2
    Join Date
    Nov 2005
    Posts
    32
    The process size that a utility like top shows is not the actual size of the heap (data). It is a combination of (a) the SGA size, (b) the oracle executable being used by the process, and (c) the actual heap (data). On Sun boxes, you can verify the actual size of the heap portion of the process with the pmap command. For example:

    On a Solaris 10 box, here's the output from top:

    PID USERNAME LWP PRI NICE SIZE RES STATE TIME CPU COMMAND
    9177 ora9i 1 1 0 1597M 1569M sleep 45:04 0.03% oracle

    As you can see, the process size is listed by top at almost 1.6GB.

    Doing pmap :

    $ pmap -x 9177

    9177: ora_pmon_QADB
    Address Kbytes RSS Anon Locked Mode Mapped File
    00010000 49744 48896 - - r-x-- oracle <<- Executable
    030B2000 496 344 56 - rwx-- oracle <<- Executable
    0312E000 16 8 8 - rwx-- oracle <<- Executable
    03132000 1512 632 632 - rwx-- [ heap ] <<- Private Data
    20000000 1568768 1568768 - 1568768 rwxsR [ ism shmid=0x55 ] <<- SGA
    FE720000 240 216 - - r-x-- libresolv.so.2
    FE76C000 16 16 - - rwx-- libresolv.so.2
    FE780000 2392 2096 - - r-x-- libvas.so.4.2.0
    ..............
    ..............

    From pmap, the actual private heap / data size (the [ heap ] line) is only about 1.5 MB.

    Translating the SGA segment id from the pmap output (ism shmid=0x55): 0x55 in hex is 85 in decimal.

    $ ipcs -m

    Shared Memory:
    m 85 0xdc126148 --rw-r----- ora9i software

    ipcs -a should tell you how much Oracle has allocated for the SGA.

    As you can see, the process size from top is not a good indication of the heap portion of the process.

    You are better off relying on the UGA and PGA statistics from v$sesstat to see if you are using excessive SQL work area memory.

    Through 9i, automatic tuning of work areas was disabled when you used shared servers, since most of the run-time work area allocations were handled within the SGA for shared server sessions. Starting with 10g, run-time work area allocations are handled within the PGA, so large sort operations, hash joins, or windowing operations on result sets can potentially cause your private heap size to grow.
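
    A quick way to see what automatic PGA management has handed out instance-wide (just a sketch; the statistic names below are from v$pgastat):

    Code:
    select name, round(value/1024/1024) mb
    from v$pgastat
    where name in ('total PGA allocated', 'total PGA inuse', 'maximum PGA allocated');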

    Good luck.......

    http://www.dbaxchange.com

  3. #3
    Join Date
    Dec 2001
    Posts
    337
    Hi,

    Thank you very much for the very informative feedback. I can understand that, because of the changes in 10g, large sorts etc. can take place within the PGA. However, can you point me in the direction of what exactly to look for when I check v$sesstat?

    I also noticed that there are 25 shared servers allocated, and all of these start up with the instance. Is there a gain in reducing the number of shared servers? Along with this, there is only one dispatcher allocated. Surely this needs to increase if shared servers were to remain at 25?

    I am new to shared servers, so any advice will be greatly appreciated.

    Thanks,
    Chucks

  4. #4
    Join Date
    Nov 2005
    Posts
    32
    UGA and PGA checks:

    (1) You can do this to check UGA and PGA allocations for a given session:

    select a.name, b.value, b.sid, c.username
    from v$statname a, v$sesstat b, v$session c
    where a.statistic# = b.statistic#
    and b.sid = c.sid
    and b.sid = &sidvalue
    and a.name like '%ga%memory%';

    or

    (2) To just check PGA allocations for a given session:

    select p.pga_used_mem, p.pga_alloc_mem, p.pga_freeable_mem, p.pga_max_mem
    from v$process p, v$session s
    where s.sid = &sidvalue
    and p.addr = s.paddr;

    Also keep in mind that database sessions that do direct path reads due to sort I/O (when a sort does not fit in memory), use parallel query slaves, or perform I/O to LOB segments (which are not cached in the buffer cache) all add to the PGA's growth, which in turn can inflate the process size.

    "I also noticed that there are 25 shared servers allocated and all of these start up with the instance, is there a gain in reducing the number to shared servers?"

    I guess it depends on the load / concurrent connections to your database. You can always control the number of shared server processes started initially vs. the total allowed through the shared_servers and max_shared_servers parameters.
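
    For example, something along these lines (a sketch only; the values are illustrative and assume you're running from an spfile, not a recommendation for your load):

    Code:
    alter system set shared_servers = 5 scope=both;
    alter system set max_shared_servers = 25 scope=both;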

    This is oracle's recommendation for the number of dispatchers:

    "The value of MAX_DISPATCHERS should at least equal the maximum number of concurrent sessions divided by the number of connections for each dispatcher. For most systems, a value of 250 connections for each dispatcher provides good performance. "

    You might also want to take a look at 10g's Automatic Shared Server Configuration feature (metalink note 265931.1).

    Good luck......

    http://www.dbaxchange.com

  5. #5
    Join Date
    Dec 2001
    Posts
    337
    Hi there,

    Thank you very much for the feedback. I have gone deeper into the database to investigate this high CPU usage. I ran several AWR reports and found that the execute to parse ratio is only 3%. I checked out several queries that are being run (and are consistently run throughout the day) that most likely sap all the CPU. One such query had a cost of 141913! It was performing 4 full table scans (one of the tables has 37 million rows!), and the cost was highest on the hash joins the query was trying to perform. Any advice on reducing this (generally)?

    I also noticed that stats were not up to date on any of the tables (e.g. the stats for the table with 37 million rows dated back 3 weeks). As a first point of action I have suggested an analyze on the schema.

    It seems like this inefficient SQL code is the reason for the high CPU usage.

    I also have a high number of waits on db file sequential read (from the AWR report):

    Code:
    Event                    Waits      Time (s)  % Total Call Time  Wait Class
    ------------------------ ---------- --------- ------------------ ----------
    db file sequential read  3,292,788  8,545     37.10              User I/O

    Is this due to inefficient SQL as well, given the high amount of I/O produced by these queries?


    Thanks in advance,
    Chucks

  6. #6
    Join Date
    Nov 2005
    Posts
    32
    Having a low execute to parse ratio depends on the type of application being run against the database. If you have an OLTP-type application that parses a statement and re-executes it numerous times within the session, this ratio will tend to be high. On the other hand, if you have a batch processing system that parses, executes, and does not re-execute the same statement, this ratio will likely be low. I would concentrate on soft parse % (which gives an idea of the amount of hard parsing) and non-parse CPU %, which tells you how much of the CPU time went to work other than parsing (and, by subtraction, how much parsing actually cost you).

    High CPU usage might be indicative of a different problem; in some cases, the process might actually be waiting on I/O (a full scan) to complete.

    Something must be causing Oracle to favor hash joins. It could be that the tables the optimizer is doing full scans on (including the 37 million row table) have parallelism enabled, or don't have a large number of blocks, or the outer cardinality of the join is high.

    Nested Loop Join cost = outer access cost + (inner access cost * outer cardinality)

    Vs.

    Hash join cost = (outer access cost * # of hash partitions) + inner access cost

    You can always enable the 10053 event for the session to see why Oracle arrived at the plan it did.
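
    For example (a sketch; the trace file lands in user_dump_dest):

    Code:
    alter session set events '10053 trace name context forever, level 1';
    -- run the problem statement here, then turn the trace off:
    alter session set events '10053 trace name context off';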

    Hopefully you've enabled monitoring on the tables so that stats are collected by Oracle only when a certain percentage of the data has changed. Overly frequent collection of stats can cause plan instability.
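
    Something along these lines for the stats collection (a sketch against your BOF2 schema):

    Code:
    -- in 10g table monitoring is on automatically when statistics_level=typical;
    -- GATHER STALE then collects stats only where roughly 10% of the data has changed
    begin
      dbms_stats.gather_schema_stats(
        ownname => 'BOF2',
        options => 'GATHER STALE',
        cascade => true);
    end;
    /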

    Narrow down the actual objects where the sequential reads are happening. You can query active session history (ASH) to get the specific details. 99% of the time this will point to tweaking the execution plans so that the optimizer picks the right indexes for the SQL statement.
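
    For example (a sketch; v$active_session_history requires the Diagnostics Pack license):

    Code:
    select o.owner, o.object_name, count(*) samples
    from v$active_session_history ash, dba_objects o
    where o.object_id = ash.current_obj#
    and ash.event = 'db file sequential read'
    group by o.owner, o.object_name
    order by samples desc;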

    Good Luck......

    http://www.dbaxchange.com

  7. #7
    Join Date
    Dec 2001
    Posts
    337
    Hi there,

    I have been looking at the soft parse figures and they point to 99.97%. This is near enough the target (which probably means there is not much hard parsing going on?). Non-parse CPU % is 96.15, which is quite high; this indicates Oracle uses the CPU mostly for statement execution and not for parsing. Hence, you were right in what you said: 'High CPU usage might be indicative of a different problem; in some cases, the process might actually be waiting on the I/O (full scan) to complete'.


    This is one of the problem SQL statements (which is part of a search utility), along with its execution path:

    SELECT "CAMERAGROUP"."GROUPNAME", "BCAPTURE"."CAPTUREDATE", "CAMERAGROUP"."URN", "CAMERA"."URN", "CAMERA"."SHORTNAME", "BCAPTURE"."URN", "BCAPTURE"."VRM", "CAMERAGROUP"."DELETED", "CAMERAGROUPCAMERA"."CAMERAGROUPURN", "CAMERA"."FEEDIDENTIFIER", "CAMERA"."SOURCEIDENTIFIER", "CAMERA"."CAMERAID", "CAMERA"."DELETED", "HOTLISTMATCH"."URN", "HOTLISTMATCH"."VRM"
    FROM "BOF2"."CAMERAGROUP" "CAMERAGROUP", "BOF2"."CAMERAGROUPCAMERA" "CAMERAGROUPCAMERA", "BOF2"."CAMERA" "CAMERA", "BOF2"."BCAPTURE" "BCAPTURE", "BOF2"."HOTLISTMATCH" "HOTLISTMATCH"
    WHERE ("CAMERAGROUP"."URN"="CAMERAGROUPCAMERA"."CAMERAGROUPURN" (+))
    AND ("CAMERAGROUPCAMERA"."CAMERAURN"="CAMERA"."URN" (+))
    AND ((("CAMERA"."FEEDIDENTIFIER"="BCAPTURE"."FEEDIDENTIFIER" (+))
    AND ("CAMERA"."SOURCEIDENTIFIER"="BCAPTURE"."SOURCEIDENTIFIER" (+)))
    AND ("CAMERA"."CAMERAID"="BCAPTURE"."CAMERAIDENTIFIER" (+)))
    AND ("BCAPTURE"."URN"="HOTLISTMATCH"."CAPTUREID" (+))
    AND ("BCAPTURE"."CAPTUREDATE") >=TO_DATE ('19-03-2007','dd-mm-yyyy');

    15771400 rows selected.


    Execution Plan
    ----------------------------------------------------------
    0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=143079 Card=222252 Bytes=31559784)
    1    0   HASH JOIN (OUTER) (Cost=143079 Card=222252 Bytes=31559784)
    2    1     HASH JOIN (Cost=107774 Card=222252 Bytes=27114744)
    3    2       HASH JOIN (OUTER) (Cost=10 Card=237 Bytes=20856)
    4    3         HASH JOIN (OUTER) (Cost=5 Card=237 Bytes=10191)
    5    4           TABLE ACCESS (FULL) OF 'CAMERAGROUP' (TABLE) (Cost=3 Card=31 Bytes=1054)
    6    4           INDEX (FULL SCAN) OF 'SYS_C004035' (INDEX (UNIQUE)) (Cost=1 Card=237 Bytes=2133)
    7    3         TABLE ACCESS (FULL) OF 'CAMERA' (TABLE) (Cost=5 Card=278 Bytes=12510)
    8    2       TABLE ACCESS (FULL) OF 'BCAPTURE' (TABLE) (Cost=107685 Card=4726370 Bytes=160696580)
    9    1     TABLE ACCESS (FULL) OF 'HOTLISTMATCH' (TABLE) (Cost=32315 Card=1016632 Bytes=20332640)





    Statistics
    ----------------------------------------------------------
    1764 recursive calls
    0 db block gets
    666466 consistent gets
    855697 physical reads
    0 redo size
    1140230272 bytes sent via SQL*Net to client
    11566198 bytes received via SQL*Net from client
    1051428 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    15771400 rows processed

    As can be seen, it's generating a high cost, particularly when performing the hash joins (one hash join takes up most of the cost!). From what you say, is this pulling the huge table into memory, and is that what results in the high cost?

    I am going to analyze the data using dbms_stats. I am thinking the frequency should be once a week.

    Also, for shared servers, is there a parameter setting whereby I can say that n processes will be able to use a shared server?

    Thanks very much for your feedback.
    Chucks

  8. #8
    Join Date
    Dec 2001
    Posts
    337
    Hi,

    Just an update: I ran an analyze on the schema and re-ran the query.

    The execution plan is below:

    Execution Plan
    ----------------------------------------------------------
    0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=152723 Card=364221 Bytes=51355161)
    1    0   HASH JOIN (RIGHT OUTER) (Cost=152723 Card=364221 Bytes=51355161)
    2    1     TABLE ACCESS (FULL) OF 'HOTLISTMATCH' (TABLE) (Cost=32989 Card=1031856 Bytes=20637120)
    3    1     HASH JOIN (Cost=115838 Card=364221 Bytes=44070741)
    4    3       HASH JOIN (OUTER) (Cost=10 Card=237 Bytes=20619)
    5    4         HASH JOIN (OUTER) (Cost=5 Card=237 Bytes=9954)
    6    5           TABLE ACCESS (FULL) OF 'CAMERAGROUP' (TABLE) (Cost=3 Card=33 Bytes=1089)
    7    5           INDEX (FULL SCAN) OF 'SYS_C004035' (INDEX (UNIQUE)) (Cost=1 Card=237 Bytes=2133)
    8    4         TABLE ACCESS (FULL) OF 'CAMERA' (TABLE) (Cost=5 Card=283 Bytes=12735)
    9    3       TABLE ACCESS (FULL) OF 'BCAPTURE' (TABLE) (Cost=115711 Card=7050830 Bytes=239728220)





    Statistics
    ----------------------------------------------------------
    1751 recursive calls
    0 db block gets
    667153 consistent gets
    798674 physical reads
    0 redo size
    1156230391 bytes sent via SQL*Net to client
    11611496 bytes received via SQL*Net from client
    1055546 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    15833169 rows processed

    As can be seen, the cost has gone up, and the execution plan has changed as well!! The query took 4 minutes longer to complete. Although there were fewer physical reads, the number of rows processed increased. Is the higher cost / longer time due to the fact that gathering stats brought everything up to date, so Oracle is now accounting for the data volume that had grown since the last valid analyze? Any advice on why Oracle has decided to take a different and slower route will be much appreciated.

    Thanks in advance,
    Chucks
    Last edited by Chucks_k; 05-08-2007 at 09:40 AM.

  9. #9
    Join Date
    Nov 2005
    Posts
    32
    A hash join builds hash tables in memory (...and on disk), which are a collection of hash buckets, or partitions. The number of buckets in memory used to be dictated by hash_area_size, but with the introduction of pga_aggregate_target, Oracle automatically controls how much hash memory is allocated within a session's work area. On an idle system it can be generous, to accommodate more hash partitions, but Oracle always chooses the smaller of the tables or result sets to build the hash partitions in memory, and it ensures that only the smaller buckets are kept in memory while the rest go to the temp area on disk.
    So yes, if Oracle's automatic allocation granted the session a large hash area, your process size can grow, since there can potentially be more hash buckets in memory. Based on the limited information in the explain plan output from autotrace, it looks like Oracle is building hash tables on these:

    (1) cameragroup (33 rows)
    (2) the result rowset of cameragroup and sys_c004035 (237 rows)
    (3) the result rowset of the join between camera and (2) above (237 rows)
    (4) the result rowset of the join between bcapture and (3) above (364221 rows)

    I would be curious to see what the behaviour of the query (execution plan) is with workarea_size_policy set to manual for your session and hash_area_size set to a very low number (100 or 200K).
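
    Something like this for the session (a sketch; hash_area_size is in bytes):

    Code:
    alter session set workarea_size_policy = manual;
    alter session set hash_area_size = 204800;  -- roughly 200K
    -- then re-run the autotrace / explain plan for the query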

    Coming to your query below, with the number of hash joins and full scans involved, what surprises me is that I don't see any parallel query processes being used. Are they not enabled, or did the database already have the maximum number of parallel query processes in use when you ran the query?
    Also, at a high level, which columns within the bcapture and hotlistmatch tables are defined as NOT NULL and have unique values in them? Are there indexes on those columns, and are those columns being used in the query?
    The num_rows and blocks values for the tables involved in the query, and output from an explain plan using dbms_xplan.display, would also help.

    Good luck......

    http://www.dbaxchange.com

  10. #10
    Join Date
    Dec 2001
    Posts
    337
    Hi there,

    Thanks for your response. I do not think parallel query has been enabled. I know one can enable it at table level (sketched below). In this scenario, is it best to enable it on tables with full table scans and a high number of rows? And to what degree? I am also wondering if there is an overhead associated with this, considering that we are already utilizing near-maximum memory.
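
    For reference, the table-level syntax I am looking at is along these lines (just a sketch; the degree is an arbitrary example, not a tested setting):

    Code:
    alter table bof2.bcapture parallel 4;
    -- and to revert:
    alter table bof2.bcapture noparallel;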

    This is the explain plan (using dbms_xplan) for that particular query:

    SQL> explain plan for
    SELECT "CAMERAGROUP"."GROUPNAME", "BCAPTURE"."CAPTUREDATE", "CAMERAGROUP"."URN", "CAMERA"."URN", "CAMERA"."SHORTNAME", "BCAPTURE"."URN", "BCAPTURE"."VRM", "CAMERAGROUP"."DELETED", "CAMERAGROUPCAMERA"."CAMERAGROUPURN", "CAMERA"."FEEDIDENTIFIER", "CAMERA"."SOURCEIDENTIFIER", "CAMERA"."CAMERAID", "CAMERA"."DELETED", "HOTLISTMATCH"."URN", "HOTLISTMATCH"."VRM"
    FROM "BOF2"."CAMERAGROUP" "CAMERAGROUP", "BOF2"."CAMERAGROUPCAMERA" "CAMERAGROUPCAMERA", "BOF2"."CAMERA" "CAMERA", "BOF2"."BCAPTURE" "BCAPTURE", "BOF2"."HOTLISTMATCH" "HOTLISTMATCH"
    WHERE ("CAMERAGROUP"."URN"="CAMERAGROUPCAMERA"."CAMERAGROUPURN" (+))
    AND ("CAMERAGROUPCAMERA"."CAMERAURN"="CAMERA"."URN" (+))
    AND ((("CAMERA"."FEEDIDENTIFIER"="BCAPTURE"."FEEDIDENTIFIER" (+))
    AND ("CAMERA"."SOURCEIDENTIFIER"="BCAPTURE"."SOURCEIDENTIFIER" (+)))
    AND ("CAMERA"."CAMERAID"="BCAPTURE"."CAMERAIDENTIFIER" (+)))
    AND ("BCAPTURE"."URN"="HOTLISTMATCH"."CAPTUREID" (+))
    AND ("BCAPTURE"."CAPTUREDATE") >=TO_DATE ('19-03-2007','dd-mm-yyyy');

    Explained.

    SQL> select * from table(dbms_xplan.display);

    PLAN_TABLE_OUTPUT
    -----------------------------------------------------------------------------------------------
    Plan hash value: 3661953572

    -----------------------------------------------------------------------------------------------
    | Id  | Operation              | Name         | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    -----------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT       |              |   364K|    48M|       |   152K  (2)| 00:30:33 |
    |*  1 |  HASH JOIN RIGHT OUTER |              |   364K|    48M|    31M|   152K  (2)| 00:30:33 |
    |   2 |   TABLE ACCESS FULL    | HOTLISTMATCH |  1031K|    19M|       | 32989   (1)| 00:06:36 |
    |*  3 |   HASH JOIN            |              |   364K|    42M|       |   115K  (3)| 00:23:11 |
    |*  4 |    HASH JOIN OUTER     |              |   237 | 20619 |       |    10  (10)| 00:00:01 |
    |*  5 |     HASH JOIN OUTER    |              |   237 |  9954 |       |     5  (20)| 00:00:01 |
    |   6 |      TABLE ACCESS FULL | CAMERAGROUP  |    33 |  1089 |       |     3   (0)| 00:00:01 |
    |   7 |      INDEX FULL SCAN   | SYS_C004035  |   237 |  2133 |       |     1   (0)| 00:00:01 |
    |   8 |     TABLE ACCESS FULL  | CAMERA       |   283 | 12735 |       |     5   (0)| 00:00:01 |
    |*  9 |    TABLE ACCESS FULL   | BCAPTURE     |  7050K|   228M|       |   115K  (3)| 00:23:09 |
    -----------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

    1 - access("BCAPTURE"."URN"="HOTLISTMATCH"."CAPTUREID"(+))
    3 - access("CAMERA"."FEEDIDENTIFIER"="BCAPTURE"."FEEDIDENTIFIER" AND
               "CAMERA"."SOURCEIDENTIFIER"="BCAPTURE"."SOURCEIDENTIFIER" AND
               "CAMERA"."CAMERAID"="BCAPTURE"."CAMERAIDENTIFIER")
    4 - access("CAMERAGROUPCAMERA"."CAMERAURN"="CAMERA"."URN"(+))
    5 - access("CAMERAGROUP"."URN"="CAMERAGROUPCAMERA"."CAMERAGROUPURN"(+))
    9 - filter("BCAPTURE"."CAPTUREDATE">=TO_DATE('2007-03-19 00:00:00', 'yyyy-mm-dd hh24:mi:ss'))

    28 rows selected.

    SQL> select count(*) from BCAPTURE;

    COUNT(*)
    ----------
    38408590

    SQL> select count(*) from hotlistmatch;

    COUNT(*)
    ----------
    1034996

    SQL> select num_rows,blocks,degree from user_tables where table_name='BCAPTURE';

    NUM_ROWS BLOCKS DEGREE
    ---------- ---------- ----------
    38287440 517657 1

    SQL> select num_rows,blocks,degree from user_tables where table_name='HOTLISTMATCH';

    NUM_ROWS BLOCKS DEGREE
    ---------- ---------- ----------
    1031856 149892 1

    As you can see, the bcapture table has grown by another 200k rows since I ran the last analyze yesterday. The explain plan shows the huge amount of CPU used for the full table scans and also the time taken (how accurate is this? when I ran the query yesterday it took about 6-7 minutes in total). I have been speaking to the developers and they have mentioned that they have optimised most of these queries and switched to dedicated server mode for the next version of the application. However, to upgrade to it they need to run a verification check on the current dataset. Because of the high CPU usage this is taking too long and is bound to fall over. Now the goal is to reduce the CPU load, and apart from not running these heavy searches I have these ideas:

    1/ Set shared_servers to 5 and max_shared_servers to 25. This would mean only 5 are started when the db is started. Reducing overhead?
    2/ Set cursor_sharing to SIMILAR. They have some queries in the current version using literals.
    3/ Explore adding indexes to the bcapture table. Investigating now. (A rough sketch of 2 and 3 follows below.)
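
    For ideas 2 and 3, something like this (a sketch only; the index name and column are my guesses based on the plan's filter on CAPTUREDATE, not a tested recommendation):

    Code:
    alter system set cursor_sharing = similar scope=both;

    create index bof2.bcapture_capdate_ix on bof2.bcapture (capturedate);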

    Any other ideas/advice will be highly appreciated.

    Thanks for all your help so far.

    Chucks
