DBAsupport.com Forums - Powered by vBulletin

Thread: index corruption

  1. #11
    Join Date
    Apr 2000
    Location
    Baltimore, MD
    Posts
    759


    Well, according to a doc I checked on Metalink, they need to be rebuilt.

  2. #12
    Join Date
    Nov 2000
    Location
    Baltimore, MD USA
    Posts
    1,339
    Alrighty then, guess I need to ease back on the Zyban.

  3. #13
    Join Date
    Apr 2000
    Location
    Baltimore, MD
    Posts
    759


    I need to start taking those. Are they any good?? Any side effects?? LOL

  4. #14
    Join Date
    Nov 2000
    Location
    Baltimore, MD USA
    Posts
    1,339
    I like 'em. Been almost 4 weeks and no cigarettes...and I haven't killed anybody, so I guess they're workin'

    - Chris

  5. #15
    Join Date
    Mar 2000
    Location
    Chennai.Tamilnadu.India.
    Posts
    658

    Block corruption: investigate

    Hi, 5th May 2001 14:15 hrs chennai

    I don't think corruption happens because of index problems.

    You can think of it as BLOCK (data, index, or cluster) corruption.

    Here is an article that explains very clearly how to investigate the problem and the possible reasons for your situation.

    When you hit a problem like this, I would advise you to investigate it properly, since as DBAs we rarely get the chance to gain experience with situations like this.

    Please give me your feedback after investigation.
    ==========
    Each Oracle data block is written in a proprietary binary format. Before an Oracle data block is used, Oracle checks it for possible block corruption. A block corruption is considered to exist if the contents of the data block don't conform to the expected format. This article discusses how an Oracle DBA can deal with data block corruptions.

    Checking for data block corruption is performed at the cache and other higher layers of the Oracle code. Information checked at the cache layer includes the following:

    Block type

    Block incarnation

    Block version

    Block sequence number

    Data block address

    Block checksum

    If an inconsistency in block format is identified at the cache layer, the block is considered to be media corrupt, whereas an inconsistency identified at a higher layer of Oracle code marks the block as being software corrupt.

    Information in the corrupt block is more or less lost; you will have to re-create it by using some data backup or export. Oracle has several tools—such as the Data Unloader (DUL) utility, which you can use to extract the data out of bad blocks—but typically, using these techniques is very expensive. You have to weigh the cost of using those tools (which aren't guaranteed to be completely successful) against the cost of re-creating the lost information.

    Usually you see an error message such as ORA-1578 or ORA-600 when Oracle encounters corrupt blocks. You can use several techniques to determine whether the database is corrupt and also to understand the nature and extent of the corruption.

    Analyze the Table
    By analyzing the table structure and its associated objects, you can perform a detailed check of data blocks to identify block corruptions:

    Analyze table table_name validate structure cascade;
    Data blocks are checked at the cache and higher levels. Index blocks are also checked, and the one-to-one association between the table data and its index rows is verified.

    Use DB_VERIFY
    Oracle provides the DB_VERIFY utility to validate datafiles even when a datafile is offline or the database is unavailable. DB_VERIFY is an external command-line utility that verifies the integrity of database files and the physical structure of an offline database.
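    For example, a run against a single datafile might look like this (the file path and block size here are assumptions; substitute your own):

    $ dbv file=/u01/oradata/PROD/users01.dbf blocksize=8192

    DB_VERIFY then reports the total pages examined and how many were marked corrupt.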


    Using Oracle Checksum Facilities
    Oracle provides a number of checksum facilities that you can use to identify corrupt blocks. Checksum facilities are enabled by setting parameters and events in the init.ora file:

    Setting db_block_checksum to TRUE causes checksums to be calculated for all data blocks on their next update. The database writer performs the task of calculating checksums. The checksum for a data block is stored in its cache header when writing it to disk.

    NOTE

    After a checksum is generated for a block, the block always uses checksums, even if the parameter is later removed.

    Setting log_block_checksum to TRUE causes checksums to be calculated for all redo log blocks.

    Setting events 10210, 10211, 10212, and 10225 can be done by adding the following line for each event in the init.ora file:

    Event = "event_number trace name context forever, level 10"

    When event 10210 is set, data blocks are checked for integrity; data blocks that don't match the expected format are marked as soft corrupt.

    When event 10211 is set, index blocks are checked the same way; index blocks that don't match the expected format are marked as soft corrupt.

    When event 10212 is set, cluster blocks are checked the same way; cluster blocks that don't match the expected format are marked as soft corrupt.

    When event 10225 is set, the fet$ and uset$ dictionary tables are checked; blocks that don't match the expected format are marked as soft corrupt.

    Setting _db_block_cache_protect to TRUE protects the cache layer from becoming corrupted. It might crash the database instance, but the corruption isn't written to the disk.
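    Taken together, a sketch of an init.ora fragment enabling these checks might look like this (enable only what you need; the numeric events and underscore parameters are generally used under the direction of Oracle Support):

    db_block_checksum = true
    log_block_checksum = true
    event = "10210 trace name context forever, level 10"
    event = "10211 trace name context forever, level 10"
    event = "10212 trace name context forever, level 10"

    Note that multiple event settings must appear on consecutive lines in init.ora.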

    Salvaging Data from a Corrupt Oracle Database
    An Oracle database can become corrupt for various reasons, such as these:

    Bad hardware

    Operating system bugs

    I/O or caching problems

    Unsupported disk repair utilities running

    Memory problems

    Oracle bugs

    A computer virus

    Database corruption frequently results from problems with the hardware, so you should first resolve any hardware problems or reported operating system errors. When all the non-Oracle problems are resolved, you can embark on the adventure of recovering the data from the corrupt database. Follow these steps to recover a corrupt database:

    Determine the extent of the damage. The information in the alert log, trace files, and the complete error message(s) reported by Oracle can provide enough information to determine the extent of database damage. Suppose that the error message indicates that the damage is done to file# (F) and block# (B).

    Connect to Server Manager as internal.

    Determine the file identified as corrupt:

    Svrmgr> SELECT name
    FROM v$datafile WHERE file# = F;

    Determine the damaged object:

    Svrmgr> SELECT owner, segment_name, segment_type
    FROM dba_extents WHERE file_id = F AND B BETWEEN block_id AND block_id + blocks - 1;
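    For example, if the error message reported file# 7 and block# 12345 (hypothetical values), the two queries would be:

    Svrmgr> SELECT name
    FROM v$datafile WHERE file# = 7;

    Svrmgr> SELECT owner, segment_name, segment_type
    FROM dba_extents WHERE file_id = 7
    AND 12345 BETWEEN block_id AND block_id + blocks - 1;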

    Perform the recovery. The recovery approach depends on the damaged segment, as determined by the previous query:

    For rollback segments or data dictionary corruption, consult Oracle support because this involves using several undocumented parameters that can potentially make the system unrecoverable.

    For index segments, determine the table that the index belongs to, as follows:

    Svrmgr> SELECT table_owner, table_name
    FROM dba_indexes WHERE index_name = 'segment_name';
    For cluster segments, determine the table associated with the cluster, as follows:

    Svrmgr> SELECT owner, table_name
    FROM dba_tables WHERE cluster_name = 'segment_name';
    For user tables, note the name of the table and its owner.

    Make sure that the problem isn't intermittent by running the ANALYZE command on the segment at least twice. At the Server Manager prompt, use the following for a table:

    Svrmgr> analyze table owner.tablename validate structure cascade;
    Use this command for an index:

    Svrmgr> analyze index owner.indexname validate structure;

    Use this command for a cluster:

    Svrmgr> analyze cluster owner.clustername validate structure cascade;

    If a particular hardware or controller is identified as bad, relocate the files to a good disk:

    For a database in archivelog mode, move the corrupt datafile offline, and then restore it from backup onto a good disk. Recover the datafile, and then put it back online. The file can now be used.

    For a database in noarchivelog mode, move the corrupt datafile offline, and then restore it from backup onto a good disk. Put the datafile back online; the file can now be used.
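    As a sketch, the archivelog-mode sequence might look like this (the file names are hypothetical, and the restore itself is done at the operating system level):

    Svrmgr> ALTER DATABASE DATAFILE '/u01/oradata/users01.dbf' OFFLINE;
    -- restore the file from backup onto a good disk, then:
    Svrmgr> ALTER DATABASE RENAME FILE '/u01/oradata/users01.dbf' TO '/u02/oradata/users01.dbf';
    Svrmgr> RECOVER DATAFILE '/u02/oradata/users01.dbf';
    Svrmgr> ALTER DATABASE DATAFILE '/u02/oradata/users01.dbf' ONLINE;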

    Perform the analysis on the object again to make sure that it's no longer corrupt.

    At this point, data from the damaged blocks can be salvaged by using several techniques:

    You can perform media recovery to recover the database to a state before the corruption.

    You can drop and re-create the object (table or index) by using a valid export.

    If you know the file number and the block number indicating the corruption, you can salvage the data in the corrupt table by selecting around the bad blocks.

    Set event 10231 in the init.ora file to cause Oracle to skip software- and media-corrupted blocks when performing full table scans:

    Event="10231 trace name context forever, level 10"
    Set event 10233 in the init.ora file to cause Oracle to skip software- and media-corrupted blocks when performing index range scans:

    Event="10233 trace name context forever, level 10"
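    With event 10231 set, a common salvage approach is to copy the readable rows into a new table with a full table scan (the table names here are hypothetical):

    Svrmgr> CREATE TABLE emp_salvage AS SELECT * FROM scott.emp;

    Rows in the corrupt blocks are skipped, so compare the row count against your expectations before dropping and re-creating the original table from the salvaged copy.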

    TIP

    Oracle Support Services has access to several tools, such as Data Unloader (DUL) and BBED (Block Editor), that you can use to extract data from bad blocks. These tools are expensive, though, and there's no guarantee that all the data can be salvaged.
    ===========
    Cheers

    Padmam


    Attitude: Attack every problem with enthusiasm ...as if your survival depends upon it

  6. #16
    Join Date
    Aug 2000
    Location
    Belgium
    Posts
    342
    In 8.1.5, indexes become 'UNUSABLE' after an 'alter table move ...' and have to be manually rebuilt.

    In 8.1.6b (and later), indexes are automatically rebuilt when using 'alter table move ...'
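    You can check the behaviour with something like this (the object names are hypothetical):

    SQL> alter table scott.emp move tablespace users;
    SQL> select index_name, status from user_indexes where table_name = 'EMP';
    SQL> alter index scott.emp_pk rebuild;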

    Regards
    Gert

  7. #17
    Join Date
    Jun 2000
    Location
    Madrid, Spain
    Posts
    7,447
    Denevge, the indexes are still unusable in 8.1.6 when you do alter table ... move tablespace ...

    Anyone tried 8.1.7?

  8. #18
    Join Date
    Aug 2000
    Location
    Belgium
    Posts
    342


    Sorry, I was wrong.

    I did the test on 8.1.7, and the index stays 'UNUSABLE'.
    After a rebuild, the index is valid.

    Why would Oracle do that?
    If you move the table, your records are given new ROWIDs (a new table is created).
    If you rebuild an index, the index is rebuilt using the info in the index.
    If we can rebuild the index, this means that the ROWIDs in the index get updated with the table's new ROWIDs during the move. Otherwise we would have to recreate the index, and rebuilding would not be possible.

    Anyone an idea ?

    Regards
    Gert

  9. #19
    Join Date
    Jan 2003
    Location
    india
    Posts
    175

    unusable index

    hi,

    "alter table .. move .."
    will make all the indexes on that table unusable.

    you cannot run queries that use the indexed columns
    in the where clause.

    you cannot analyze that table.

    you have to rebuild the indexes. only then can you do
    normal operations on that table.
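    for example, with an unusable index, a query that is forced through the index can fail like this (object names are hypothetical):

    SQL> select empno from emp where empno = 7369;
    ORA-01502: index 'SCOTT.EMP_PK' or partition of such index is in unusable state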

    bye now.

    -raja

  10. #20
    Join Date
    Nov 2000
    Location
    Baltimore, MD USA
    Posts
    1,339
    Uh, raja...

    Call me crazy, but I'm thinking the original poster hasn't been waiting around for an answer for the past 2 years.

    Did you have to go that far back in the archives to find a question you could answer or what?

    - Chris
    Christopher R. Long
    ChrisRLong@HotMail.Com
    But that's just my opinion. I could be wrong
