-
Confirmation from Gamyers
To: Gamyers;
Thank goodness I am not crazy. I really appreciate your testing, and the fact that you ran the test in 10g is "interesting". It is sort of hard to believe that this problem exists in such a short piece of code and was not caught by Oracle.
Thanks again.
Eric
-
v$sql.rows_processed: Extra processing involved
I received confirmation from gamyers that this behavior had been reproduced. Looking at those comments, I found that in the remote database the rows_processed is 100% correct.
In a small test it may not be obvious, but I really believe that extra processing is taking place. This morning I ran a query against 2,000,000 rows and the elapsed time was 50 minutes. Without the commit statement the elapsed time was 10 minutes.
I think this is a very substantial difference, and I will pester Oracle to get working on my service request.
Eric
-
I was able to run tkprof, and the trace file indicates that this problem really exists. Any ideas out there?
Code:
call       count        cpu     elapsed  disk  query  current        rows
-------  -------  ---------  ----------  ----  -----  -------  ----------
Parse          6     400.00      756.78     1      4        1           0
Execute    10005  43,400.00  192,205.80     0      0   90,003           0
Fetch      10006  32,300.00   56,990.45     2     11   10,276  50,015,004
-------  -------  ---------  ----------  ----  -----  -------  ----------
total      20017  76,100.00  249,953.03     3     15  100,280  50,015,004
Last edited by gandolf989; 02-01-2006 at 11:13 AM.
-
What is the explain plan???
-
Here's a complete test setup script (excepting the username and password for the DB link).
Code:
drop table over_link
/
create table over_link ( id number ,val varchar2(40) )
/
insert into over_link
select rownum, 'V'||to_char ( rownum )
from all_objects where rownum < 101
/
create database link ll connect to ..... identified by ..... using 'XE';
create or replace procedure test_link is
    n      number := 0;
    c_code varchar2(40);
    -- rows fetched over the database link
    cursor c_1 is
        select val from over_link@ll;
    -- SQL*Net traffic received from the remote database for this session
    cursor c_stat is
        select name || ' = ' || value descr
          from v$sesstat s, v$statname n
         where n.statistic# = s.statistic#
           and name = 'bytes received via SQL*Net from dblink'
           and s.sid = (select distinct sid from v$mystat);
    -- cursor statistics for the remote query in v$sql
    cursor c_rp is
        select sql_text, parse_calls, fetches, executions, rows_processed
          from v$sql
         where upper(sql_text) like '%OVER_LINK%@%'
           and upper(sql_text) not like '%V$SQL%';
begin
    open c_1;
    loop
        fetch c_1 into c_code;
        exit when c_1%notfound;
        n := n + 1;
        -- every 10 rows, report the stats and commit
        if (n/10) = trunc(n/10) then
            for c in c_stat loop
                dbms_output.put_line(n || ':' || c.descr);
            end loop;
            for c in c_rp loop
                dbms_output.put_line(n || ':' || c.rows_processed);
            end loop;
            commit;
        end if;
    end loop;
    close c_1;
end test_link;
/
Here's the script for a test run. You won't want to do the flushing of the shared pool on a serious database. A small sandbox install of XE is perfect for this.
If you can't flush the shared pool, the ROWS_PROCESSED results won't be as clear on subsequent runs as the old stats will still be there.
Code:
alter system flush shared_pool;
conn .../....m@xe
set serveroutput on size 1000000
exec test_link;
The results I got were
Code:
10:bytes received via SQL*Net from dblink = 4449
10:10
20:bytes received via SQL*Net from dblink = 5821
20:30
30:bytes received via SQL*Net from dblink = 7171
30:60
40:bytes received via SQL*Net from dblink = 8521
40:100
50:bytes received via SQL*Net from dblink = 9871
50:150
60:bytes received via SQL*Net from dblink = 11221
60:210
70:bytes received via SQL*Net from dblink = 12571
70:280
80:bytes received via SQL*Net from dblink = 13921
80:360
90:bytes received via SQL*Net from dblink = 15271
90:450
100:bytes received via SQL*Net from dblink = 16622
100:550
Note: after every commit, it seems to need to 'go back to the start'. For example, after committing at row 10, when it gets to row 20 it has counted 30 (i.e. the original 10 plus 20 'new'), and at the next checkpoint it has the thirty it has already counted, plus 30 'new'.
However we are NOT getting those rows resent over the DB link.
I don't think you are getting the 'wrong' result. It probably isn't performing as well as it could, but that is probably more because row-by-row processing is slow. At the very least it should be using bulk selects.
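The 'go back to the start' pattern is easy to check arithmetically. Here is a small Python sketch (my own, not from the thread) that reproduces the ROWS_PROCESSED figures in the output above, assuming the statistic re-counts all previously fetched rows after each commit:

```python
# Model of how rows_processed appears to accumulate in the test above:
# after each commit the remote cursor's rows are re-counted from the start,
# so at checkpoint k (every 10 rows) the statistic is 10 + 20 + ... + 10*k.

def expected_rows_processed(checkpoint_rows, total_rows):
    """Cumulative rows_processed at each commit checkpoint."""
    counts = []
    running = 0
    for k in range(1, total_rows // checkpoint_rows + 1):
        running += k * checkpoint_rows   # re-count everything fetched so far
        counts.append((k * checkpoint_rows, running))
    return counts

for n, rp in expected_rows_processed(10, 100):
    print(f"{n}:{rp}")
# prints 10:10, 20:30, 30:60 ... 100:550, matching the test output
```

The computed sequence (10, 30, 60, 100, 150, 210, 280, 360, 450, 550) matches the ROWS_PROCESSED lines of the run exactly, which supports the re-counting explanation.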
-
Gamyers and SQL*net
Gamyers:
Thanks for your time and information. I did not run any test that looked at the traffic over SQL*Net. I think that you are correct about the internal counting getting messed up. My son figured out that the value in rows_processed can be calculated by the following: ( ( rows in table + 1 ) * ( rows in table / 2 ) ) + rows in table. So, 10000 rows gives us ( 10001 * 5000 ) + 10000.
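That formula is just the triangular-number sum 1 + 2 + ... + N, plus N itself, which is what the re-counting explanation would predict if a commit were issued after every single row. A quick Python check (my own sketch, not from the thread; the formula as written assumes an even N):

```python
# The formula from the post: rows_processed for a table of N rows,
# assuming the statistic re-counts all prior rows after a per-row commit.
def predicted_rows_processed(n_rows):
    return ((n_rows + 1) * (n_rows // 2)) + n_rows

N = 10000
print(predicted_rows_processed(N))   # 50015000
print(sum(range(1, N + 1)) + N)      # 50015000: same value, as 1+2+...+N plus N
```

For N = 10000 this gives 50,015,000, within a handful of rows of the tkprof total shown earlier (50,015,004).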
As far as the explain plan, I will have to research how to generate that when using a procedure.
Eric