DBAsupport.com Forums - Powered by vBulletin
Results 21 to 30 of 36

Thread: Interesting problem, anybody have it before?

  1. #21
    Join Date
    Nov 2001
    Posts
    335
    jmodic,

    Very well put! Finally somebody stopped this BS.
    One, who thinks that the other one who thinks that know and does not know, does not know either!

  2. #22
    Join Date
    Apr 2002
    Location
    Shenzhen, China
    Posts
    327
    I think a Rollback Segment or Undo Tablespace could be loaded, or even created (if Oracle provided such a feature), in physical memory, because the Redo Logfile stores all the before and after images. But you would not be able to use the Flashback feature in Oracle 9i.

    Another question: if you delete from a table with NOLOGGING and the instance crashes before the commit, will you get the data back after instance recovery?
    Oracle Certified Master - September, 2003, the Second OCM in China
    *** LOOKING for PART TIME JOB***
    Data Warehouse & Business Intelligence Expert
    MCSE, CCNA, SCJP, SCSA from 1998

  3. #23
    Join Date
    Apr 2001
    Location
    Czechia
    Posts
    712
    Originally posted by Calvin_Qiu
    Another question: if you delete from a table with NOLOGGING and the instance crashes before the commit, will you get the data back after instance recovery?
    NOLOGGING has nothing to do with the DELETE statement. It's useful only with direct-load insert and the direct mode of SQL*Loader. All other DML is fully logged.
    Thus, the table behaves exactly like any other table and the transaction will be rolled back.
    Ales

  4. #24
    Join Date
    Jun 2001
    Location
    Helsinki, Finland
    Posts
    3,938
    Originally posted by Calvin_Qiu
    Another question: if you delete from a table with NOLOGGING and the instance crashes before the commit, will you get the data back after instance recovery?
    Only the following operations can make use of no-logging mode:

    direct load (SQL*Loader)
    direct-load INSERT
    CREATE TABLE ... AS SELECT
    CREATE INDEX
    ALTER TABLE ... MOVE PARTITION
    ALTER TABLE ... SPLIT PARTITION
    ALTER INDEX ... SPLIT PARTITION
    ALTER INDEX ... REBUILD
    ALTER INDEX ... REBUILD PARTITION
    INSERT, UPDATE, and DELETE on LOBs in NOCACHE NOLOGGING mode stored out of line.
    Oracle Certified Master
    Oracle Certified Professional 6i,8i,9i,10g,11g,12c
    email: ocp_9i@yahoo.com

  5. #25
    Join Date
    Mar 2001
    Location
    Ireland/Dublin
    Posts
    688
    Please check this link about
    PARAMETER : _corrupted_rollback_segments

    http://www.dbasupport.com/oracle/faq/Detailed/315.shtml
    Best wishes!
    Dmitri

  6. #26
    Join Date
    Apr 2001
    Location
    Louisville KY
    Posts
    295
    We may have terminology or other differences, but I believe the actual process for an operation such as a delete is:

    1- DELETE FROM table WHERE field = 'ok'
    2- table blocks are read from disk (if not already in memory) and
    scanned for the WHERE condition
    3- if a block is changed, it is copied to the ROLLBACK segment and entries are made in the REDO logs
    4- for a 10G table with 100M of buffers, you will most likely get 'snapshot too old' errors on the rollback segment
    5- if a db buffer is needed it is used
    6- if the transaction commits, the db block buffers are updated and written (may involve a re-fetch)
    or
    7- if the transaction is rolled back, the block buffers are ignored
    8- the rollback segment is wiped
    9- the redo logs are updated
    Joseph R.P. Maloney, CSP,CDP,CCP
    'The answer is 42'

  7. #27
    Join Date
    Dec 2000
    Location
    Ljubljana, Slovenia
    Posts
    4,439
    Originally posted by jrpm
    We may have terminology or other differences, but I believe the actual process for an operation such as a delete is:

    1- DELETE FROM table WHERE field = 'ok'
    2- table blocks are read from disk (if not already in memory) and
    scanned for the WHERE condition
    3- if a block is changed, it is copied to the ROLLBACK segment and entries are made in the REDO logs
    The whole thing is correct only up to this point. From here on it is more or less wrong....

    4- for a 10G table with 100M of buffers, you will most likely get 'snapshot too old' errors on the rollback segment

    This is total nonsense. What has the size of the buffer cache to do with ORA-1555? I can always demonstrate that with, say, 1M of buffers and a 10G table I can delete the whole table without getting a 'snapshot too old' error.

    5- if db buffer is needed it is used.

    I don't understand this point. Whenever a buffer is needed it is used; how could it be any different?

    6- transaction committed then db block buffers updated and written (may involve a re-fetch)

    Oracle will never do that (re-fetch because of a commit). Never! Check the term "deferred block cleanout" in the Oracle literature - you'll find that blocks (that had been written to disk as "dirty" before the commit) are only cleaned out when the next user process re-reads them into the cache - and that could be days or months or years after the transaction that changed them was committed!

    or
    7- if transaction rolled back, block buffers ignored

    Of course not! The rollback process will reconstruct all the changed blocks again in the buffer cache, and the changed blocks that have already been written to the datafiles are overwritten again with their original images from the buffer cache.

    8- rollback segment wiped.

    Wrong again. Neither commit nor rollback will cause rollback segments to be wiped. Their contents might still be needed by other long-running transactions that started before yours did. If they were wiped after every commit, other sessions would get enormously frequent 'snapshot too old' errors! Oracle will try to maintain the contents of a transaction's changes in the rollback segments as long as possible after commit, to be able to reconstruct a read-consistent view for other transactions.

    Bottom line again:
    Changed buffers can be, and very often are, written to database files long before the transaction is committed or rolled back. After all, simply read about checkpoints - whenever a checkpoint occurs, the changed buffers are unconditionally written to the database files, no matter whether the transactions that changed them are finished or not.

    If it were as you claim ("changed buffers are never written to database files before they are committed"), then Oracle would not be able to perform any transaction larger than the buffer cache. Which of course is very far from the truth. Transaction size is never limited by the size of your buffer cache.
    Jurij Modic
    ASCII a stupid question, get a stupid ANSI
    24 hours in a day .... 24 beer in a case .... coincidence?
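    jmodic's bottom line - that transaction size is not limited by the buffer cache, because dirty blocks can be flushed before commit and later undone from rollback data - can be sketched with a toy model. Everything below is invented for illustration (class and variable names are hypothetical, not Oracle internals):

    ```python
    # Toy model: a 4-buffer cache modifying 10 blocks in one transaction.
    # Dirty blocks are flushed to the "datafile" on eviction (before any
    # commit), yet rollback still restores every block from the undo records.

    class ToyCache:
        def __init__(self, datafile, capacity):
            self.datafile = datafile          # dict: block# -> value on "disk"
            self.capacity = capacity
            self.cache = {}                   # block# -> value in memory
            self.undo = []                    # (block#, before-image) records

        def change(self, blk, value):
            if blk not in self.cache:
                if len(self.cache) >= self.capacity:
                    self._evict()
                self.cache[blk] = self.datafile[blk]   # read block in
            self.undo.append((blk, self.cache[blk]))   # save before-image first
            self.cache[blk] = value

        def _evict(self):
            blk, val = next(iter(self.cache.items()))
            self.datafile[blk] = val          # dirty block written BEFORE commit
            del self.cache[blk]

        def rollback(self):
            for blk, before in reversed(self.undo):
                self.datafile[blk] = before   # restore, even if already on disk
            self.cache.clear()
            self.undo.clear()

    datafile = {n: "old" for n in range(10)}
    cache = ToyCache(datafile, capacity=4)
    for n in range(10):
        cache.change(n, "new")                # transaction touches 10 > 4 blocks
    flushed_early = any(v == "new" for v in datafile.values())
    cache.rollback()
    print(flushed_early, all(v == "old" for v in datafile.values()))
    ```

    Here a 10-block transaction runs through a 4-buffer cache: some changed blocks reach "disk" before any commit, yet the rollback still restores every before-image, which is the behaviour described above.
    
    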

  8. #28
    Join Date
    Sep 2001
    Location
    NJ, USA
    Posts
    1,287
    Originally posted by jmodic
    Originally posted by Shestakov
    I especially researched this situation, and my results:
    Oracle never wrote dirty blocks to disk before commit.
    That's baloney. I would be interested in how you came to that conclusion, I mean how did you perform your tests?

    I mean, every Oracle beginner can demonstrate the opposite....
    I was thinking... Should I respond to this post or not?
    Because Jurij, in my mind, thinks that he is the "Best Oracloid" and that nobody knows Oracle better than he does. A lot of sarcasm.
    Is this correct behaviour for moderators?
    ---------------------------------------------------------------------------------------
    About the testing methodology.
    NOTE! This is only the general approach (without details).

    We have two ways to do this:

    1) standard and not very good -->
    -- use "alter system dump datafile ... ;" to dump a block from disk
    -- use "oradebug dump buffers" to dump buffers from the db cache
    (we can also use "alter session ... trace ...")
    -- use the rowtonum function (from ixora.com) to convert a hex memory address to a decimal address
    -- use select rowtohex(addr) from x$bh where file# = .. and buf# = ..;
    to get the real addresses of the buffers in memory,
    and so on...

    2) The second way uses UNIX (not Oracle) functions and commands
    (and is better):
    -- step 1 -->
    expose memory as a character device file ( /dev/mem ):
    mknod -m 660 /dev/mem c 1 1
    chown root:mem /dev/mem
    NOTE! We must:
    -- have READ permission on /dev/mem
    or
    -- connect as root
    or
    -- have root privileges (in our environment)

    -- step 2:
    write a perl program that
    a) reads this file (memory) with offset = rowtohex(addr) from x$bh
    b) reads a block from the oracle datafile with offset = blocksize * block#
    c) converts both blocks to "hex",
    compares the two "string" variables and prints:
    -- both blocks
    -- the difference between the blocks
    -- the timestamps of the read operations

    ...
    open (MEM, "/dev/mem") or die "Error opening /dev/mem file\n\n";
    seek(MEM, $dec_address, 0);
    ...

    open (ORA, $ora_data) or die "Error opening oracle datafile\n\n";
    seek(ORA, $blk_num * $blk_size, 0);
    --
    read(MEM, $mem_block, $blk_size);
    read(ORA, $db_block, $blk_size);
    ...
    # convert and compare
    ...
    This is a way to examine the db cache and datafile blocks.
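    The comparison step of the method above can be sketched in Python instead of Perl. This is only an illustration with stand-in temp files: reading real SGA memory through /dev/mem needs root and the exact buffer addresses from x$bh, which are not modeled here.

    ```python
    # Sketch of the compare step only (file contents and sizes are made up):
    # read a block from a "datafile" at offset = blocksize * block# and
    # compare it byte-by-byte with a snapshot of the cached copy.
    import os
    import tempfile

    blk_size = 8192

    # Stand-ins: the "datafile" holds the on-disk image of blocks 0 and 1.
    disk = tempfile.NamedTemporaryFile(delete=False)
    disk.write(b"\x00" * blk_size + b"OLD".ljust(blk_size, b"\x00"))
    disk.close()

    mem_block = b"NEW".ljust(blk_size, b"\x00")   # what the buffer cache holds

    def read_block(path, blk_num, size=blk_size):
        with open(path, "rb") as f:
            f.seek(blk_num * size)                # offset = blocksize * block#
            return f.read(size)

    db_block = read_block(disk.name, 1)
    print("blocks differ:", db_block != mem_block)
    print("first diff at byte:",
          next(i for i, (a, b) in enumerate(zip(db_block, mem_block)) if a != b))
    os.unlink(disk.name)
    ```

    A dirty cached block that has not yet been flushed shows up exactly like this: the in-memory copy and the datafile copy of the same block number disagree.
    
    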


  9. #29
    Join Date
    Nov 2001
    Posts
    335
    Originally posted by Shestakov
    Originally posted by jmodic
    Originally posted by Shestakov
    I especially researched this situation, and my results:
    Oracle never wrote dirty blocks to disk before commit.
    That's baloney. I would be interested in how you came to that conclusion, I mean how did you perform your tests?

    I mean, every Oracle beginner can demonstrate the opposite....
    I was thinking... Should I respond to this post or not?
    Because Jurij, in my mind, thinks that he is the "Best Oracloid" and that nobody knows Oracle better than he does. A lot of sarcasm.
    Is this correct behaviour for moderators?
    ---------------------------------------------------------------------------------------
    I guess you did not like "baloney", huh? Well, how does the technical term "nonsense" sound? You can research differences between memory and disk as much as you want; that does not prove the point.
    Bottom line remains:

    If Oracle does not write dirty (changed) blocks to disk, how can it perform a transaction bigger than the buffer cache?
    One, who thinks that the other one who thinks that know and does not know, does not know either!

  10. #30
    Join Date
    Sep 2001
    Location
    NJ, USA
    Posts
    1,287
    Originally posted by BV1963

    Bottom line remains:

    If Oracle does not write dirty (changed) blocks to disk, how can it perform a transaction bigger than the buffer cache?
    See my post (on page 2)-->
    Oracle will write DATA BLOCKS (not RBS) only after those blocks' changes have been recorded in the log file.
    Oracle may write RBS BLOCKS during a transaction, but always AFTER the block's changes have been recorded in the log file.
    DBWR can (but need not) write dirty blocks to disk when a server process
    can't find clean reusable buffers. But in this case LGWR MUST write the corresponding redo to the log BEFORE.
    The current log is the basic information source for the recovery process.


    [Edited by Shestakov on 06-14-2002 at 03:57 PM]
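    The write-ahead ordering described above (LGWR must write the redo before DBWR writes the dirty block) can be sketched as a toy model. All names below are illustrative, not Oracle internals:

    ```python
    # Toy write-ahead logging: "dbwr_write" may flush a dirty buffer only
    # after "lgwr_flush" has made the redo record covering it durable.

    log = []            # the "redo log" records generated so far
    log_flushed = 0     # how many redo records are durable on "disk"

    def lgwr_flush():
        global log_flushed
        log_flushed = len(log)            # LGWR makes all pending redo durable

    def dbwr_write(buffer, datafile):
        # Enforce the rule: redo covering this change must be on disk first.
        if buffer["redo_seq"] > log_flushed:
            lgwr_flush()                  # LGWR MUST write before DBWR
        datafile[buffer["blk"]] = buffer["value"]

    def change(buffer, value):
        log.append(("redo", buffer["blk"], value))   # redo generated in memory
        buffer["redo_seq"] = len(log)
        buffer["value"] = value

    datafile = {7: "old"}
    buf = {"blk": 7, "value": "old", "redo_seq": 0}
    change(buf, "new")                    # redo in the log buffer, not durable yet
    dbwr_write(buf, datafile)             # forces the log flush first
    print(datafile[7], log_flushed >= buf["redo_seq"])
    ```

    The point of the ordering is recoverability: if the instance dies right after the data block hits disk, the durable redo is still enough to decide, during recovery, whether that change belongs to a committed transaction or must be rolled back.
    
    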
