-
We have two databases on different PCs.
The second database was created from a hot backup of the first one.
The problem is that the second database runs slower than the first (selects, inserts and updates take more time). The performance comes in waves: sometimes it works fine, sometimes a single insert or select takes up to 30 seconds.
All database settings are the same.
The PCs are identical except for the disks: the first one has two (a RAID 5 array and a plain disk), the second only the RAID 5 array.
On the first PC the software and the datafiles are located on separate disks.
The database is very small and the buffer cache hit ratio is 99.5%.
I will appreciate any advice very much.
Best wishes!
Dmitri
-
Hi,
Not a solution to your problem, but I just wanted to know how you recreated the database from a hot backup. I haven't tried restoring from a hot backup to date, and I searched the forums for an example but couldn't locate one...
Thanks & Regards
S.Prabhakar
-
Originally posted by sprabhakar
Hi,
Not a solution to your problem, but I just wanted to know how you recreated the database from a hot backup. I haven't tried restoring from a hot backup to date, and I searched the forums for an example but couldn't locate one...
Thanks & Regards
S.Prabhakar
Code:
export ORACLE_SID=trax
svrmgrl
connect internal;
shutdown immediate;
REM Put the online redo log group members back in place
!cp /oradata/backup/hot/redo1g01.log /home/oracle/netshare/redo1g02.log
!cp /oradata/backup/hot/redo2g01.log /home/oracle/netshare/redo2g02.log
!cp /oradata/backup/hot/redo3g01.log /home/oracle/netshare/redo3g02.log
!cp /oradata/backup/hot/redo1g01.log /home/oracle/backup/trax/redo1g03.log
!cp /oradata/backup/hot/redo2g01.log /home/oracle/backup/trax/redo2g03.log
!cp /oradata/backup/hot/redo3g01.log /home/oracle/backup/trax/redo3g03.log
REM Restore the datafiles from the hot backup
!cp /oradata/backup/hot/hosted_data01.dbf /home/oracle/oradata/trax/hosted_data01.dbf
!cp /oradata/backup/hot/admin01.dbf /home/oracle/oradata/trax/admin01.dbf
!cp /oradata/backup/hot/builder_indx01.dbf /home/oracle/oradata/trax/builder_indx01.dbf
!cp /oradata/backup/hot/builder01.dbf /home/oracle/oradata/trax/builder01.dbf
!cp /oradata/backup/hot/trax_indx01.dbf /home/oracle/oradata/trax/trax_indx01.dbf
!cp /oradata/backup/hot/users01.dbf /home/oracle/oradata/trax/users01.dbf
!cp /oradata/backup/hot/rbs01.dbf /home/oracle/oradata/trax/rbs01.dbf
!cp /oradata/backup/hot/bill_indx01.dbf /home/oracle/oradata/trax/bill_indx01.dbf
!cp /oradata/backup/hot/bill01.dbf /home/oracle/oradata/trax/bill01.dbf
!cp /oradata/backup/hot/profile_indx01.dbf /home/oracle/oradata/trax/profile_indx01.dbf
!cp /oradata/backup/hot/profile01.dbf /home/oracle/oradata/trax/profile01.dbf
!cp /oradata/backup/hot/trax01.dbf /home/oracle/oradata/trax/trax01.dbf
!cp /oradata/backup/hot/system01.dbf /home/oracle/oradata/trax/system01.dbf
!cp /oradata/backup/hot/trax_lob01.dbf /home/oracle/oradata/trax/trax_lob01.dbf
!cp /oradata/backup/hot/tv3_data01.dbf /home/oracle/oradata/trax/tv3_data01.dbf
!cp /oradata/backup/hot/tv3_indx01.dbf /home/oracle/oradata/trax/tv3_indx01.dbf
REM Restore the controlfiles
!cp /oradata/backup/hot/control01.ctl /home/oracle/oradata/trax/control01.ctl
!cp /oradata/backup/hot/control02.ctl /home/oracle/oradata/trax/control02.ctl
!cp /oradata/backup/hot/control03.ctl /home/oracle/oradata/trax/control03.ctl
REM Restore the archived redo logs needed for recovery
!cp /oradata/backup/hot/arch/arch_1_975.arc /home/oracle/backup/arch/
!cp /oradata/backup/hot/arch/arch_1_976.arc /home/oracle/backup/arch/
!cp /oradata/backup/hot/arch/arch_1_977.arc /home/oracle/backup/arch/
!cp /oradata/backup/hot/arch/arch_1_978.arc /home/oracle/backup/arch/
!cp /oradata/backup/hot/arch/arch_1_979.arc /home/oracle/backup/arch/
!cp /oradata/backup/hot/arch/arch_1_980.arc /home/oracle/backup/arch/
!cp /oradata/backup/hot/arch/arch_1_981.arc /home/oracle/backup/arch/
!cp /oradata/backup/hot/arch/arch_1_982.arc /home/oracle/backup/arch/
!cp /oradata/backup/hot/arch/arch_1_983.arc /home/oracle/backup/arch/
!cp /oradata/backup/hot/arch/arch_1_984.arc /home/oracle/backup/arch/
REM Mount and roll forward through the archived redo
startup mount;
set autorecovery on;
recover database until cancel using backup controlfile;
alter database open resetlogs;
exit;
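For completeness, the hot backup those files came from would have been taken along these lines. This is a sketch only: the tablespace name and paths are illustrative, and each tablespace in the database has to go through its own begin/end backup cycle.

```sql
-- Put the tablespace in backup mode, copy its datafiles, end backup mode.
-- Repeat for every tablespace in the database.
ALTER TABLESPACE users BEGIN BACKUP;
!cp /home/oracle/oradata/trax/users01.dbf /oradata/backup/hot/users01.dbf
ALTER TABLESPACE users END BACKUP;

-- Force a log switch so the redo generated during the backup gets archived.
ALTER SYSTEM SWITCH LOGFILE;

-- Take a controlfile copy for use during the restore.
ALTER DATABASE BACKUP CONTROLFILE TO '/oradata/backup/hot/control.bkp';
```

The archived logs written between the first BEGIN BACKUP and the final log switch are the ones the restore script above replays.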
Best wishes!
Dmitri
-
Hi kgb,
The solution you have provided would work if your instance is intact and you have only corrupted or lost your data/log files.
Two weeks back my production machine's OS was corrupted and we had to reinstall Oracle. In that scenario, could you please explain how to use the hot backup plus archive logs to restore the database?
Thanks
S.Prabhakar
-
Dmitri, analyze your tables and indexes and see if there is any improvement in performance...
select 'analyze table '||tname|| ' compute statistics;' from tab;
Sandy
"Greatest Rewards come only with Greatest Commitments!"
-
While creating the second database, you could cross-check that the following points are taken care of:
kernel parameters, initialisation parameters (buffer settings, block size, swap size, SGA, etc.), table sizing, index sizing, and locally managed tablespaces set up the same way as in the earlier database.
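One quick way to compare the two instances on the points above is to dump every non-default init parameter on each box and diff the two listings. A sketch (the spool path is illustrative):

```sql
-- Run on both instances, then diff the two spool files.
SPOOL /tmp/params.lst
SELECT name, value
  FROM v$parameter
 WHERE isdefault = 'FALSE'
 ORDER BY name;
SPOOL OFF
```

Anything that shows up in one listing but not the other is a candidate explanation for a performance difference.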
siva prakash
DBA
-
I think analyzing tables would be a good idea, as Sandy suggests, but not by using:
select 'analyze table '||tname|| ' compute statistics;' from tab;
1st reason: it should be from tabS, not tab.
2nd reason: computing statistics can take quite long; estimating is often much better.
3rd reason: to analyze all the tables in a schema, I find it much simpler and easier to use:
exec dbms_utility.analyze_schema('USER','ESTIMATE',30000)
or
exec dbms_utility.analyze_schema('USER','COMPUTE')
depending on how you wish to analyze your tables.
-
Analyzed, right after creation.
No effect.
The buffer cache is 120 MB; the actual dump file size is 9 MB.
Buffer cache hit ratio is 99.99%.
Today I found that a commit sometimes causes the problem: it took 16 seconds. I can reproduce the problem only by copying a 1 GB file across the disk.
But it also happens without copying big files, and the problem is not regular. Yesterday I got around 15 'transaction timed out' errors (more than 30 seconds); today, none.
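When a commit stalls like that, it is usually worth checking whether sessions are waiting on redo writes. A sketch (event names as they appear in Oracle 8i/9i; a large time spent on 'log file sync' points at slow redo writes, which fits the RAID 5 write penalty on the second box):

```sql
-- Cumulative wait statistics since instance startup.
SELECT event, total_waits, time_waited
  FROM v$system_event
 WHERE event IN ('log file sync',
                 'log file parallel write',
                 'db file sequential read')
 ORDER BY time_waited DESC;
```

Snapshot the query before and after a slow period; the event whose time_waited grows the most during the stall is the bottleneck.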
Originally posted by sandycrab
Dmitri, analyze your tables and indexes and see if there is any improvement in performance...
select 'analyze table '||tname|| ' compute statistics;' from tab;
Sandy
Best wishes!
Dmitri
-
Install the Oracle software on the new machine; it is better to keep the same directory structure. In the previous script I did not copy the online redo logs, because I already had them.
Put all files in the same locations as they were on the old machine. Put the archived redo logs into the archive log destination directory. Do not forget about the init.ora file and the password file. IF YOU RUN ON UNIX, MAKE THE PATH LIKE:
log_archive_dest_1 = "location=/home/oracle/backup/arch/ MANDATORY"
It is important to put the '/' at the end!
Start the recovery as in the previous message. During recovery Oracle reads the archived redo logs and applies their changes to the datafiles.
Originally posted by sprabhakar
Hi kgb,
The solution you have provided would work if your instance is intact and you have only corrupted or lost your data/log files.
Two weeks back my production machine's OS was corrupted and we had to reinstall Oracle. In that scenario, could you please explain how to use the hot backup plus archive logs to restore the database?
Thanks
S.Prabhakar
Best wishes!
Dmitri
-
Not too many differences in the kernel parameters, RedHat 6.2.
init.ora is the same. No swapping.
All object structures are the same, because it is an exact copy.
Originally posted by prakashs43
While creating the second database, you could cross-check that the following points are taken care of:
kernel parameters, initialisation parameters (buffer settings, block size, swap size, SGA, etc.), table sizing, index sizing, and locally managed tablespaces set up the same way as in the earlier database.
Best wishes!
Dmitri