I have been having a terrible performance problem with one of my databases, and it seems as if log switches occur very frequently (within seconds). I am beginning to think that my log_buffer is too small. As it is now, it is set to 8192 and my log_checkpoint_interval is set to 10000. How much of a performance problem does a small log_buffer cause? How large can I increase it? Is the log_checkpoint_interval too small? How can I determine what is setting off the log switch (log_checkpoint_interval or log_buffer size)? I am thinking of increasing it to something like 163840.
Thanks in advance for your suggestions.
Log switches occur either when the current log fills or when the logfile is manually switched. Having log_checkpoint_interval too small would affect how often the database checkpoints; it would not force a log switch. A small log_buffer might cause LGWR to work harder than it has to, but it would not force a checkpoint either.
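If you want to see how often the database is actually switching logs, one quick sanity check is to count switches per hour from the log history (this uses the standard v$log_history view; the date format here is just an example):

```sql
-- One row in v$log_history per completed log switch;
-- group by hour to see the switch rate over time.
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS switches
FROM   v$log_history
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY 1;
```

If you see dozens of switches per hour (or per minute), your logs are simply too small for the redo rate.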
Instead of increasing the log buffer, it is advisable to increase the logfile size to reduce the log switches, if you feel there is a performance problem.
As Jeff mentioned, "Log switches occur either when the current log fills or the logfile is manually switched." No init.ora parameter influences the log switch itself. Resizing the logs is the best solution for the problem.
We went over this a little while ago. If you are interested, follow this link.
Thanks for your help so far. I noticed that the logfiles are in the system tablespace. Will I be able to increase the size of the existing ones? I know I can add new ones. Thanks once again. I think I can figure it out.
"I noticed that the logfiles are in the system tablespace"
What do you mean by the above?
Logfiles are not in any tablespace; they are OS files, just like your datafiles.
Whoa, whoa, whoa. Log files don't live in a tablespace; they are individual OS entities (files or raw devices).
To increase the size of the logfiles, add new groups with a greater size and then drop the old groups.
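A sketch of that procedure (the paths, group numbers, and sizes below are just examples; adjust them to your own layout):

```sql
-- Add new, larger redo log groups (example paths and sizes)
ALTER DATABASE ADD LOGFILE GROUP 4 ('/db01/oradata/redo04.log') SIZE 100M;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/db02/oradata/redo05.log') SIZE 100M;

-- Switch out of the old groups and checkpoint so they go INACTIVE
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;

-- Only drop an old group once v$log shows its status as INACTIVE
ALTER DATABASE DROP LOGFILE GROUP 1;
```

Note that dropping a group does not delete the OS files; remove those by hand afterwards.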
I saw that no one mentioned the log buffer...
The log buffer is flushed on every commit (no matter who issued it), every 3 seconds, or when it becomes 1/3 full.
The log buffer should be able to hold 30% of the average transaction in the database times the number of concurrent active sessions. Most companies I have looked at vary from 16MB to 128MB; you should really not go much higher than that.
To reduce the frequency of log switches, increase the size of the redo logs. You can do this by creating new ones and dropping the old ones. You should also alternate the devices on which the redo logs are stored. This will reduce contention while the database is creating the archive log. For example:
Log1 is on /db01
Log2 is on /db02
Log3 is on /db01
Log4 is on /db02
and they should also be separated from your datafiles and from the archive log destination directory.
If you are still experiencing performance issues, then perhaps you should look at the stripe size of those devices and increase it (or remove striping from that device entirely). The log writer writes serially, and does not take advantage of the random-access performance gain of striping.
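To tell whether the log buffer itself is actually a bottleneck, one thing worth checking is the redo statistics in v$sysstat (standard statistic names, though they can vary slightly between versions):

```sql
-- 'redo buffer allocation retries' counts how often a session had to
-- wait for space in the log buffer; it should be a tiny fraction of
-- 'redo entries'. 'redo log space requests' counts waits for a log
-- switch to complete, which points at the logs rather than the buffer.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('redo entries',
                'redo buffer allocation retries',
                'redo log space requests');
```

High allocation retries suggest the buffer is too small; high log space requests suggest the logs themselves are too small or too slow.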
The log buffer should be able to hold 30% of the average transaction in the database times the number of concurrent active sessions.
You must be joking.
Most companies I have looked at vary from 16MB to 128MB... you should really not go much more than this
You must be joking too. Unless there are hidden parameters we don't know about here, no matter how big your log buffer is, it will flush every 1MB. You can change the 1/3 flushing algorithm by changing the hidden parameter _log_io_size (I think that is the name, not 100% sure); you can change it to 1/2, 1/4, whatever you like.
The only place I have seen a 100MB log buffer was at one of our clients, where it was obviously totally useless, causing nothing but trouble with hundreds of thousands of log file sync wait events.