Error in v$sql.rows_processed. 1,000,000 rows per second
I am running Oracle 9i on a Unix box (32-bit).
I have a small PL/SQL procedure that uses a database link in the cursor and a COMMIT in the loop body. When I monitor v$sql.rows_processed, the value increases by about 1,000,000 rows per second until it reaches the maximum value of a signed 32-bit integer (pls_integer), 2,147,483,647, and then wraps around to the negative end of the range. The count then climbs back up through zero toward +2,147,483,647, and the cycle continues for as long as the cursor is fetching from the queried table (users).
I am looking for the reason this takes place, and I have four questions. What is the actual number of rows I am processing (I do not want a Cartesian situation)? Does this also take place in 10g? Is there a workaround? Should I be concerned about the accuracy of the v$ views in general?
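The cycle described above (max value, then negative, then back up through zero) is exactly what a 32-bit signed counter does when it keeps incrementing past its range. A minimal sketch in Python (not Oracle code; `wrap_int32` is just an illustration of two's-complement wraparound):

```python
# Illustration only: a 32-bit signed counter, like the one apparently
# backing v$sql.rows_processed, wraps negative past 2**31 - 1.

INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def wrap_int32(n: int) -> int:
    """Reduce an arbitrary integer to the 32-bit two's-complement range."""
    return (n + 2**31) % 2**32 - 2**31

print(wrap_int32(INT32_MAX))      # 2147483647  (still in range)
print(wrap_int32(INT32_MAX + 1))  # -2147483648 (wraps to the minimum)
print(wrap_int32(2**32))          # 0           (a full cycle passes zero)
```

Note that the exact wrap value is -2,147,483,648, one less than the -2,147,483,647 reported from eyeballing the view, which is consistent with sampling a fast-moving counter.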
I posted an SR (TAR 5015075.993) with Oracle on Jan 3 but they have not offered any help.
---------------- The procedural code ------------------
CREATE OR REPLACE PROCEDURE update_monthly_age IS
   sql_string VARCHAR2 ( 200 ) := '';
   uid NUMBER; uage NUMBER; utab INTEGER; dtab INTEGER;
   CURSOR c1 IS SELECT userid , age FROM users@pidm_ltus25d;
BEGIN
   OPEN c1;
   LOOP
      FETCH c1 INTO uid , uage;
      EXIT WHEN c1%NOTFOUND;
      uage := uage + 1;
      sql_string := 'update monthly_age set months = ' || uage
                 || ' where userid = ' || uid;
      utab := DBMS_SQL.open_cursor;
      DBMS_SQL.parse ( utab, sql_string, DBMS_SQL.native );
      dtab := DBMS_SQL.EXECUTE ( utab );
      DBMS_SQL.close_cursor ( utab );
      COMMIT; -- the commit inside the fetch loop is what triggers the symptom
   END LOOP;
   CLOSE c1;
END;
---------------- Oracle Version Information --------------
Oracle9i Enterprise Edition Release 9.2.0.x - Production
PL/SQL Release 9.2.0.x - Production
CORE 9.2.0.x Production
TNS for Solaris: Version 9.2.0.x - Production
NLSRTL Version 9.2.0.x - Production
Last edited by gandolf989; 01-16-2006 at 12:25 PM.
Thanks for your suggestion. In my mind there is no doubt that I am NOT processing that many rows. I have not run the query through tkprof, one reason being I do not know how to pass an entire procedure into tkprof.
If I change the state name from OHIO to TEXAS and remove the commit statement it behaves as expected. I will try the TKPROF suggestion.
One thought: you SELECT userid , age from users@pidm_ltus25d.
If the userid value selected is actually a string containing the text 'userid', then your Dynamic SQL would be something like :
update monthly_age set months = 1234 where userid = userid;
which would update every record in the table.
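The pitfall being described is language-neutral: when SQL is built by string concatenation, a "value" that is actually the text of a column name collapses the predicate into a tautology. A hypothetical sketch (the `build_update` helper is mine, not from the thread):

```python
# Hypothetical illustration of the concatenation pitfall: if the value
# spliced into the WHERE clause is the literal text 'userid', the
# generated predicate compares the column to itself and matches all rows.

def build_update(months, userid_value):
    return ("update monthly_age set months = " + str(months)
            + " where userid = " + str(userid_value))

print(build_update(1234, 42))        # ... where userid = 42      -> one row
print(build_update(1234, "userid"))  # ... where userid = userid  -> every row!
```

Bind variables (DBMS_SQL.bind_variable in the original PL/SQL) avoid this whole class of bug, besides saving the re-parse on every iteration.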
Gandolf989, DaPi, Gamyers; checked on your suggestions
To Gandolf989, DaPi, Gamyers: I looked at your suggestions and put my comments below. This is still a problem. Could anyone try this in one of your databases?
-- in the remote database
create table well ( entity varchar2(20) , country varchar2(12), province_state varchar2(12));
insert into well select to_char ( object_id ) , 'USA' , '42' from dba_objects where rownum < 10001;
To Gandolf989: This morning I reran this job in three different databases that were taken down for backups over the weekend. To look at your idea of a Cartesian join/situation, I changed the query against v$sql to just "select rows_processed , substr ( sql_text , 1 , 75 ) from v$sql". I am still seeing this absurd number of records processed; in a 10-minute test, v$sql said that I processed almost 4 billion rows. I have not tried TKPROF yet.
To DaPi: You are totally correct regarding your comments on the missing cursor declaration and the construction of the update statements. The original code is 200-300 lines and really would not make sense to post; this was the smallest example I could put together. In honesty, this problem takes place WITHOUT the update statement and the sql_string assignment, but I put them in to give the example some meaning. (Please see my new example below; I know the cursor is different, but that does not seem to matter.) If I move the COMMIT outside the loop over the cursor get_well_numbers, everything works correctly. I need the COMMIT statement because my real code processes from 0 to 50,000 rows in the cursor. My apologies on TEXAS and OHIO; that was from an earlier example that I did not post.
CREATE OR REPLACE PROCEDURE gather_well_codes IS
   n NUMBER := 1;
   c_code well.entity%TYPE;
   TYPE wcode IS TABLE OF well.entity%TYPE INDEX BY BINARY_INTEGER;
   -- Fetch entity for wells in Texas (state code=42)
   CURSOR get_well_numbers IS
      SELECT entity FROM well@pidm_ltus25d
       WHERE country = 'USA'
         AND province_state = '42';
BEGIN
   n := 0;
   OPEN get_well_numbers;
   LOOP
      FETCH get_well_numbers INTO c_code;
      EXIT WHEN get_well_numbers%NOTFOUND;
      n := n + 1;
      COMMIT; -- commit inside the loop triggers the symptom
   END LOOP;
   CLOSE get_well_numbers;
END;
To Gamyers: I agree with your suggestion for improving the code. Please read what I mentioned to DaPi, just above the code example in this reply.
Thanks for your ideas. This is such a simple piece of code it is driving me nuts.
Last edited by gandolf989; 01-16-2006 at 12:23 PM.
To Gandolf: Those are two tables in the "remote" database.
What is well@pidm_ltus25d and what is its primary key?
This table contains information about oil and gas wells that are drilled in the U.S. The WELL table has a primary key on the single column, well_identifier.
What is users@pidm_ltus25d and what is its primary key?
Users was just an example table for this posting and is not even used in the last extract of code that I posted. It was the table (username varchar2(30), userid number(8), monthly_age number(8)).
From Oracle: I just got a reply from one of their PL/SQL analysts stating that the problem is a Cartesian join. I do not see any possible way that join could be in my code, especially since it works when I move the COMMIT statement, and I am waiting for Oracle to confirm that the error is in the 9.2 engine. I have also asked them to check 10g. I am not sure Oracle will provide an answer. If you want and have time, I can tar up an ASCII file that has the create table statements, the script to populate the table, and my code.
In case you want them, here is the script to create the tables.
-- In the remote database create the WELL table
drop table well;

create table well ( well_identifier varchar2(20) ,
entity varchar2(40) , operator_name varchar2(20) ,
country varchar2(12) , province_state varchar2(12) );

insert into well
select to_char ( object_id )||'0000' , 'ENT'||to_char ( object_id ) ,
'EXXON OIL AND GAS' , 'USA' , '42'
from all_objects where rownum < 10001;
-- user_usa is a private database link to the remote database
-- In the source database create the WELL table as well and create the
-- procedure. The local WELL table is only needed for the c_code
-- variable's %TYPE; that could instead just be "c_code varchar2(20) := '';"
create or replace procedure gather_well_codes is
   n      number := 1;
   c_code well.entity%type;
   cursor get_well_numbers is
      select entity from well@user_usa
      where country = 'USA'
      and province_state = '42';
begin
   n := 0;
   open get_well_numbers;
   loop
      fetch get_well_numbers into c_code;
      exit when get_well_numbers%notfound;
      n := n + 1;
      commit; -- commit inside the loop reproduces the problem
   end loop;
   close get_well_numbers;
end;
/
Thanks Eric Milton
Hey, I can reproduce it in 9.2.0.x and XE (which is a baby version of 10gR2) for your test case.
SELECT entity from well@user_usa where country = 'USA' and province_state = '42'
gives a rows_processed of 50015000.
I just used a loopback database link (ie connect back to the same database and user).
The 'real' SQL on the 'remote' link gives 10000 rows processed.
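Pure speculation about the counter's internals, but the 50,015,000 figure is suspiciously close to a triangular number: it is exactly what you would get if each of the 10,000 fetch/commit iterations re-added the running row count plus one. A quick arithmetic check (this interpretation is my assumption, not something confirmed in the thread):

```python
# Speculative check: 50,015,000 equals the sum of (i + 1) for i = 1..10000,
# i.e. the total if every fetch/commit iteration bumped rows_processed by
# the cumulative row count so far plus one.

ROWS = 10_000
total = sum(i + 1 for i in range(1, ROWS + 1))
print(total)  # 50015000, matching the observed rows_processed
```

If that pattern holds, the counter is being re-credited with already-fetched rows after each commit over the link, which would explain why moving the COMMIT outside the loop makes the number sane.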
SELECT "A1"."ENTITY" FROM "WELL" "A1" WHERE "A1"."COUNTRY"='USA' AND "A1"."PROVINCE_STATE"='42'
Adding a count check just before the close of the cursor shows 10000 processed.
I don't think there's any problem in the code, and it's safe to assume that rows_processed is crud. Fetches seem correct, and buffer_gets is more reasonable too.
If you want to monitor progress, look into the DBMS_APPLICATION_INFO package. You can set the module, action and client_info columns in v$session, and use one of them to show the current get_well_numbers%ROWCOUNT value.
PS. They are talking rubbish about cartesian join.