-
Cache large table !!
Can someone tell me if the following procedure can have any adverse effect on my production DB?
I have 10 GB of free RAM on the production server. One of my stored procedures runs once a year, and it runs for 3 days. This procedure updates a lot of tables and does a lot of other things. In order to improve the performance I have exhausted all efforts as far as indexing, hints, etc. Since I have all this extra memory sitting idle on the server, I am thinking of caching one of the tables which is used throughout this stored procedure. This table is about 1.5 GB; if I cache it, could there be any adverse effect on the overall performance of the database? Also, is it better to use the recycle_pool or the keep_pool to cache this large table? In some of the articles I read, the advice for large tables is always to use the recycle_pool.
Also, can someone recommend how best to use the extra memory if this server is running only the database and nothing else.
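For reference, assigning a table to the keep pool takes two steps: size the keep cache, then set the table's buffer pool storage attribute. A sketch along these lines (the table name and size here are placeholders, and `db_keep_cache_size` assumes Oracle 9i or later; on 8i the parameter was `buffer_pool_keep`):
Code:
-- Size the keep pool first; it holds no blocks until it is sized.
ALTER SYSTEM SET db_keep_cache_size = 2G SCOPE=BOTH;

-- Direct the table's blocks into the keep pool (your_table is a placeholder).
ALTER TABLE your_table STORAGE (BUFFER_POOL KEEP);
Blocks only migrate into the keep pool as they are read, so this does not "pin" the table so much as shelter its blocks from being aged out by other activity.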
-
Proc is running 3 days.......
Then something is really wrong...
Your company needs a solid developer and a perf tuning expert to solve this problem.
Regarding caching the big table: unless you reduce the number of LIOs, caching a big table is not going to change the run time drastically.
Are the physical IOs on the big table really that high?
Do you have tkprof output?
Tamil
-
Hmmm, I'd not be surprised to find that it is just one or two queries that are responsible for the vast majority of that time -- have you isolated which ones might be problematic?
Also, have you checked that bind variables are being used, and that rows are processed in bulk, not individually?
-
Check whether the updates are eating up your resources.
Abhay.
-
Originally posted by abhaysk
Check whether the updates are eating up your resources.
Abhay.
It is the updates which are taking all the time, pretty much updating the entire table row by row. Two of the tables being updated have 47 million and 100 million records in them. It runs once a year, and I cannot test these scripts on a test DB either.
There is one table which has about 1.5 million records and is used in this stored procedure, and it does a full table scan on it. I am thinking of pinning this table in memory (keep_pool).
No, there are no bind variables used in this SP.
I don't have tkprof output for this one, but I am planning to capture it when I run it this year.
GETTING TO THE POINT.
Does anyone see any adverse effect on the database if I pin a huge table in either the keep_pool or the recycle_pool? I agree that pinning a big table will not buy a lot... but the management wants to use the extra memory on the server for this.
-
I AM NOT EXPECTING ANYONE TO LOOK DEEP INTO THE CODE, BUT THE FOLLOWING IS ONE OF THE STORED PROCEDURES USED IN THIS PACKAGE THAT RUNS FOR 3 DAYS, AND IT IS THE ONE THAT TAKES THE MOST TIME. JUST FYI...
AS
PROCEDURE RESET_ADVOCACY_DATA IS
    v_records_updated        integer;
    v_total_records_updated  integer;
    starttime                varchar2(50);
    endtime                  varchar2(50);
    CURSOR GET_S_CONTACT_REC IS
        select row_id
          from S_CONTACT
         where (x_const_sub_type != 'Legislator' or x_const_sub_type is null);
    S_CONTACT_REC GET_S_CONTACT_REC%ROWTYPE;
BEGIN
    COMMIT;
    starttime := to_char(sysdate,'MM/DD/YYYY HH:MM:SS');
    OPEN GET_S_CONTACT_REC;
    v_records_updated := 0;
    v_total_records_updated := 0;
    LOOP
        FETCH GET_S_CONTACT_REC INTO S_CONTACT_REC;
        EXIT WHEN GET_S_CONTACT_REC%NOTFOUND;
        update S_CONTACT
           set x_cl_cong_dist   = 'No Match',
               x_cl_st_hse_dist = 'No Match',
               x_cl_st_sen_dist = 'No Match',
               x_cl_state       = 'No Match'
         where S_CONTACT.row_id = S_CONTACT_REC.row_id;
        v_records_updated       := v_records_updated + SQL%ROWCOUNT;
        v_total_records_updated := v_total_records_updated + SQL%ROWCOUNT;
        IF v_records_updated >= 5000 THEN
            COMMIT;
            v_records_updated := 0;
        END IF;
    END LOOP;
    COMMIT;
    CLOSE GET_S_CONTACT_REC;
    endtime := to_char(sysdate,'MM/DD/YYYY HH:MM:SS');
    DBMS_OUTPUT.PUT_LINE ('StartTime ' || starttime);
    DBMS_OUTPUT.PUT_LINE ('EndTime ' || endtime);
    DBMS_OUTPUT.PUT_LINE ('Records Updated ' || v_total_records_updated);
EXCEPTION
    WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE ('Error Number : ' || SQLCODE || 'Error Description : ' || SQLERRM);
        ROLLBACK;
END RESET_ADVOCACY_DATA;
PROCEDURE SET_ADVOCACY_DATA IS
    --NOTE !!!!: Before you run SET_ADVOCACY_DATA, please remember to create the table temp_lds_addr_ids
    -- as per the below sql
    --   create table temp_lds_addr_ids nologging as
    --   select zipcode, per_id, row_id
    --     from s_addr_per a
    --    where rownum < 1
    -- Also, create the below indexes
    --   create index temp_lds_addr_ids_1 on temp_lds_addr_ids(row_id)
    -- variable to check if temp_lds_addr_ids has any records
    numrecs integer;
    --counter variables
    v_records_updated       integer;
    v_total_records_updated integer;
    --time variables
    currenttime varchar2(50);
    starttime   varchar2(50);
    endtime     varchar2(50);
    --data variables
    vConDist   varchar2(30);
    vStHseDist varchar2(30);
    vStSenDist varchar2(30);
    vState     varchar2(30);
    -- 10/15/2003 YAB changed cursor to look at temp table instead of s_addr_per
    CURSOR GET_S_ADDR_PER_REC IS
        select zipcode, per_id, row_id
          from temp_lds_addr_ids;
    S_ADDR_PER_REC GET_S_ADDR_PER_REC%ROWTYPE;
BEGIN
    COMMIT;
    starttime := to_char(sysdate,'MM/DD/YYYY HH:MM:SS');
    select count(*) into numrecs from temp_lds_addr_ids;
    if numrecs = 0 then
        execute immediate 'truncate table temp_lds_addr_ids';
        commit;
        currenttime := to_char(sysdate,'MM/DD/YYYY HH:MM:SS');
        DBMS_OUTPUT.PUT_LINE ('Inserting records into temp_lds_addr_ids. Time: ' || currenttime);
        insert /*+ append */ into temp_lds_addr_ids
        select /*+ parallel (a, 4) */ zipcode, per_id, row_id
          from s_addr_per a
         where addr_type_cd = 'Home' AND LENGTH(zipcode) >= 9 AND
               ((substr(zipcode,1,5) BETWEEN '00000' AND '99999') AND
                (substr(zipcode, -4,4) BETWEEN '0000' AND '9999'));
        commit;
        currenttime := to_char(sysdate,'MM/DD/YYYY HH:MM:SS');
        DBMS_OUTPUT.PUT_LINE ('temp_lds_addr_ids build completed. Time: ' || currenttime);
    end if;
    OPEN GET_S_ADDR_PER_REC;
    v_records_updated := 0;
    v_total_records_updated := 0;
    LOOP
        FETCH GET_S_ADDR_PER_REC INTO S_ADDR_PER_REC;
        EXIT WHEN GET_S_ADDR_PER_REC%NOTFOUND;
        BEGIN
            select cd, hd, sd, st
              into vConDist, vStHseDist, vStSenDist, vState
              from cx_zip_district
             where zip5 = substr(S_ADDR_PER_REC.ZIPCODE,1,5) and
                   zip4start <= SUBSTR(S_ADDR_PER_REC.ZIPCODE,-4,4) and
                   zip4end >= SUBSTR(S_ADDR_PER_REC.ZIPCODE,-4,4);
        EXCEPTION
            WHEN NO_DATA_FOUND THEN
                BEGIN
                    vConDist   := 'No Match';
                    vStHseDist := 'No Match';
                    vStSenDist := 'No Match';
                    vState     := 'No Match';
                END;
        END;
        BEGIN
            update s_contact
               set x_cl_cong_dist   = vConDist,
                   x_cl_st_hse_dist = vStHseDist,
                   x_cl_st_sen_dist = vStSenDist,
                   x_cl_state       = vState
             where row_id = S_ADDR_PER_REC.per_id and con_cd = 'Individual';
            delete from temp_lds_addr_ids where row_id = S_ADDR_PER_REC.row_id;
            v_records_updated       := v_records_updated + SQL%ROWCOUNT;
            v_total_records_updated := v_total_records_updated + SQL%ROWCOUNT;
        END;
        IF v_records_updated >= 5000 THEN
            COMMIT;
            v_records_updated := 0;
        END IF;
    END LOOP;
    COMMIT;
    CLOSE GET_S_ADDR_PER_REC;
    endtime := to_char(sysdate,'MM/DD/YYYY HH:MM:SS');
    DBMS_OUTPUT.PUT_LINE ('StartTime ' || starttime);
    DBMS_OUTPUT.PUT_LINE ('EndTime ' || endtime);
    DBMS_OUTPUT.PUT_LINE ('Records Updated ' || v_total_records_updated);
EXCEPTION
    WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE (S_ADDR_PER_REC.ZIPCODE || 'Error Number : ' || SQLCODE || ' Error Description : ' || SQLERRM);
        ROLLBACK;
END SET_ADVOCACY_DATA;
END advocacy_pkg;
-
-
Originally posted by dbasupuser
Does anyone see any adverse effect on the database if I pin a huge table in either the keep_pool or the recycle_pool? I agree that pinning a big table will not buy a lot... but the management wants to use the extra memory on the server for this.
What's the point? If you are continually scanning this table it's probably in cache anyway provided you have a big enough cache. Your management is clueless.
Jeff Hunter
-
I bet this code won't take three days to run once you debug and tweak it for your environment.
Code:
CREATE OR REPLACE PACKAGE BODY mypackage
AS
    PROCEDURE reset_advocacy_data
    IS
        v_records_updated  BINARY_INTEGER := 0;
        starttime          VARCHAR2( 50 )
            := TO_CHAR( SYSDATE, 'MM/DD/YYYY HH24:MI:SS' );
        endtime            VARCHAR2( 50 );
        TYPE t_rec IS TABLE OF s_contact.row_id%TYPE
            INDEX BY BINARY_INTEGER;
        v_rec t_rec;
    BEGIN
        DBMS_OUTPUT.ENABLE( 1000000 );
        SELECT row_id
          BULK COLLECT INTO v_rec
          FROM s_contact
         WHERE ( x_const_sub_type != 'Legislator' OR
                 x_const_sub_type IS NULL );
        FORALL i IN v_rec.FIRST..v_rec.LAST
            UPDATE s_contact
               SET x_cl_cong_dist   = 'No Match',
                   x_cl_st_hse_dist = 'No Match',
                   x_cl_st_sen_dist = 'No Match',
                   x_cl_state       = 'No Match'
             WHERE s_contact.row_id = v_rec( i );
        v_records_updated := SQL%ROWCOUNT; -- total rows across the whole FORALL
        COMMIT;
        endtime := TO_CHAR( SYSDATE, 'MM/DD/YYYY HH24:MI:SS' );
        DBMS_OUTPUT.put_line( 'StartTime ' || starttime );
        DBMS_OUTPUT.put_line( 'EndTime ' || endtime );
        DBMS_OUTPUT.put_line( 'Records Updated ' || v_records_updated );
    END reset_advocacy_data;
    PROCEDURE set_advocacy_data
    IS
        v_records_updated  BINARY_INTEGER := 0;
        currenttime        VARCHAR2( 50 );
        starttime          VARCHAR2( 50 );
        endtime            VARCHAR2( 50 );
        CURSOR c_numrecs IS
            SELECT pkey -- insert pkey here
              FROM temp_lds_addr_ids;
        v_pkey c_numrecs%ROWTYPE;
        TYPE t_rec IS RECORD
            ( zipcode temp_lds_addr_ids.zipcode%TYPE,
              per_id  temp_lds_addr_ids.per_id%TYPE,
              row_id  temp_lds_addr_ids.row_id%TYPE,
              cd      cx_zip_district.cd%TYPE,
              hd      cx_zip_district.hd%TYPE,
              sd      cx_zip_district.sd%TYPE,
              st      cx_zip_district.st%TYPE );
        TYPE r_rec IS TABLE OF t_rec
            INDEX BY BINARY_INTEGER;
        v_rec r_rec;
    BEGIN
        DBMS_OUTPUT.ENABLE( 1000000 );
        starttime := TO_CHAR( SYSDATE, 'MM/DD/YYYY HH24:MI:SS' );
        OPEN c_numrecs;
        FETCH c_numrecs INTO v_pkey;
        IF c_numrecs%NOTFOUND
        THEN
            EXECUTE IMMEDIATE 'TRUNCATE TABLE temp_lds_addr_ids';
            COMMIT;
            currenttime := TO_CHAR( SYSDATE, 'MM/DD/YYYY HH24:MI:SS' );
            DBMS_OUTPUT.put_line
                ( 'Inserting records into temp_lds_addr_ids. Time: ' || currenttime );
            INSERT /*+ APPEND */
              INTO temp_lds_addr_ids
            SELECT /*+ PARALLEL (a, 4) */
                   zipcode, per_id, row_id
              FROM s_addr_per a
             WHERE addr_type_cd = 'Home' AND
                   LENGTH( zipcode ) >= 9 AND
                   ( ( SUBSTR( zipcode, 1, 5 ) BETWEEN '00000' AND '99999' ) AND
                     ( SUBSTR( zipcode, -4, 4 ) BETWEEN '0000' AND '9999' ) );
            COMMIT;
            currenttime := TO_CHAR( SYSDATE, 'MM/DD/YYYY HH24:MI:SS' );
            DBMS_OUTPUT.put_line
                ( 'temp_lds_addr_ids build completed. Time: ' || currenttime );
        END IF;
        CLOSE c_numrecs;
        SELECT t.zipcode, t.per_id, t.row_id,
               NVL( zip.cd, 'No Match' ) cd,
               NVL( zip.hd, 'No Match' ) hd,
               NVL( zip.sd, 'No Match' ) sd,
               NVL( zip.st, 'No Match' ) st
          BULK COLLECT INTO v_rec
          FROM temp_lds_addr_ids t
          LEFT OUTER JOIN cx_zip_district zip
            ON zip.zip5 = SUBSTR( t.zipcode, 1, 5 ) AND
               zip.zip4start <= SUBSTR( t.zipcode, -4, 4 ) AND
               zip.zip4end >= SUBSTR( t.zipcode, -4, 4 );
        FOR i IN v_rec.FIRST..v_rec.LAST
        LOOP
            UPDATE s_contact
               SET x_cl_cong_dist   = v_rec( i ).cd,
                   x_cl_st_hse_dist = v_rec( i ).hd,
                   x_cl_st_sen_dist = v_rec( i ).sd,
                   x_cl_state       = v_rec( i ).st
             WHERE row_id = v_rec( i ).per_id AND
                   con_cd = 'Individual';
            v_records_updated := v_records_updated + SQL%ROWCOUNT;
        END LOOP;
        FORALL i IN v_rec.FIRST..v_rec.LAST
            DELETE FROM temp_lds_addr_ids
             WHERE row_id = v_rec( i ).row_id;
        COMMIT;
        endtime := TO_CHAR( SYSDATE, 'MM/DD/YYYY HH24:MI:SS' );
        DBMS_OUTPUT.put_line( 'StartTime ' || starttime );
        DBMS_OUTPUT.put_line( 'EndTime ' || endtime );
        DBMS_OUTPUT.put_line( 'Records Updated ' || v_records_updated );
    END set_advocacy_data;
END advocacy_pkg;
END advocacy_pkg;
-
For this code ...
Code:
CURSOR GET_S_CONTACT_REC IS
    select row_id
      from S_CONTACT
     where (x_const_sub_type != 'Legislator' or x_const_sub_type is null);
S_CONTACT_REC GET_S_CONTACT_REC%ROWTYPE;
BEGIN
    COMMIT;
    starttime := to_char(sysdate,'MM/DD/YYYY HH:MM:SS');
    OPEN GET_S_CONTACT_REC;
    v_records_updated := 0;
    v_total_records_updated := 0;
    LOOP
        FETCH GET_S_CONTACT_REC INTO S_CONTACT_REC;
        EXIT WHEN GET_S_CONTACT_REC%NOTFOUND;
        update S_CONTACT
           set x_cl_cong_dist   = 'No Match',
               x_cl_st_hse_dist = 'No Match',
               x_cl_st_sen_dist = 'No Match',
               x_cl_state       = 'No Match'
         where S_CONTACT.row_id = S_CONTACT_REC.row_id;
        v_records_updated       := v_records_updated + SQL%ROWCOUNT;
        v_total_records_updated := v_total_records_updated + SQL%ROWCOUNT;
        IF v_records_updated >= 5000 THEN
            COMMIT;
            v_records_updated := 0;
        END IF;
    END LOOP;
    COMMIT;
    CLOSE GET_S_CONTACT_REC;
... you could try ...
Code:
update S_CONTACT
set x_cl_cong_dist = 'No Match',
x_cl_st_hse_dist = 'No Match',
x_cl_st_sen_dist = 'No Match',
x_cl_state = 'No Match'
where (x_const_sub_type != 'Legislator' or x_const_sub_type is null) and
(x_cl_cong_dist != 'No Match' or
x_cl_st_hse_dist != 'No Match' or
x_cl_st_sen_dist != 'No Match' or
x_cl_state != 'No Match')
Apart from the commits, it's identical, I think. (One caveat: rows where those four columns are NULL won't match the != predicates, so add `or x_cl_cong_dist is null` style checks if NULL values also need resetting.) If rollback space is an issue then just ...
Code:
update S_CONTACT
set x_cl_cong_dist = 'No Match',
x_cl_st_hse_dist = 'No Match',
x_cl_st_sen_dist = 'No Match',
x_cl_state = 'No Match'
where (x_const_sub_type != 'Legislator' or x_const_sub_type is null) and
(x_cl_cong_dist != 'No Match' or
x_cl_st_hse_dist != 'No Match' or
x_cl_st_sen_dist != 'No Match' or
x_cl_state != 'No Match') and
rownum < 5001; -- this number could probably be increased easily
Commit;
... until no updates are performed
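That "repeat until no updates" pattern can be sketched as a small anonymous block. This is an illustration only, not tested against this schema; table and column names are taken from the posts above:
Code:
BEGIN
    LOOP
        update S_CONTACT
           set x_cl_cong_dist   = 'No Match',
               x_cl_st_hse_dist = 'No Match',
               x_cl_st_sen_dist = 'No Match',
               x_cl_state       = 'No Match'
         where (x_const_sub_type != 'Legislator' or x_const_sub_type is null) and
               (x_cl_cong_dist != 'No Match' or
                x_cl_st_hse_dist != 'No Match' or
                x_cl_st_sen_dist != 'No Match' or
                x_cl_state != 'No Match') and
               rownum < 5001;
        EXIT WHEN SQL%ROWCOUNT = 0;  -- nothing left to reset
        COMMIT;                      -- commit each batch to cap rollback usage
    END LOOP;
    COMMIT;
END;
/
Note the update's own WHERE clause excludes already-reset rows, which is what lets the loop make progress and eventually exit.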