I don't think corruption happens only because of index problems. You can think of it as block corruption in general (data, index, or cluster blocks). The article below explains very clearly how to investigate the problem and the possible reasons for your situation.
When you get a problem like this, I would advise investigating it properly, since as DBAs we rarely get a good chance to gain experience with situations like this.
Please give me your feedback after you investigate.
Each Oracle data block is written in a proprietary binary format. Before an Oracle data block is used, Oracle checks it for possible block corruption. A block corruption is considered to exist if the data block doesn't conform to the expected format. This article discusses how an Oracle DBA can deal with data block corruptions.
Checking for data block corruption is performed at the cache and other higher layers of the Oracle code. Information checked at the cache layer includes the following:
Block sequence number
Data block address
If an inconsistency in block format is identified at the cache layer, the block is considered to be media corrupt, whereas an inconsistency identified at a higher layer of Oracle code marks the block as being software corrupt.
Information in a corrupt block is more or less lost; you will have to re-create it from a backup or export. Oracle has several tools, such as the Data Unloader (DUL) utility, that can extract the data out of bad blocks, but using these techniques is typically very expensive. You have to weigh the cost of using those tools (which aren't guaranteed to be completely successful) against the cost of re-creating the lost information.
Usually you see an error message such as ORA-1578 or ORA-600 when Oracle encounters corrupt blocks. You can use several techniques to determine whether the database is corrupt and also to understand the nature and extent of the corruption.
Analyze the Table
By analyzing the table structure and its associated objects, you can perform a detailed check of data blocks to identify block corruptions:
Analyze table table_name validate structure cascade;
Data blocks are checked at the cache and higher levels. Index blocks are also checked, and the one-to-one association between the table data and its index rows is verified.
Oracle provides the DB_VERIFY utility to validate datafiles even when a datafile is offline or the database is unavailable. DB_VERIFY is an external command-line utility that verifies the integrity of database files and the physical structure of an offline database.
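As a sketch, a DB_VERIFY run against a datafile from the command line might look like the following. The datafile path, block size, and log file name are examples only; use your own datafile location and the database's db_block_size:

```
dbv FILE=/u01/oradata/PROD/users01.dbf BLOCKSIZE=8192 LOGFILE=dbv_users01.log
```

The utility reports totals such as the number of pages examined and the number of pages marked corrupt, which you can compare across runs to see whether the corruption is stable or spreading.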
Using Oracle Checksum Facilities
Oracle provides a number of checksum facilities that you can use to identify corrupt blocks. Checksum facilities are enabled by setting parameters and events in the init.ora file:
Setting db_block_checksum to TRUE causes checksums to be calculated for all data blocks on their next update. The database writer performs the task of calculating checksums. The checksum for a data block is stored in its cache header when writing it to disk.
After a checksum is generated for a block, the block always uses checksums, even if the parameter is later removed.
Setting log_block_checksum to TRUE causes checksums to be calculated for all redo log blocks.
Setting events 10210, 10211, 10212, and 10225 can be done by adding the following line for each event in the init.ora file:
Event = "event_number trace name context forever, level 10"
When event 10210 is set, data blocks are integrity-checked; data blocks that don't match the expected format are marked as soft corrupt.
When event 10211 is set, index blocks are integrity-checked; index blocks that don't match the expected format are marked as soft corrupt.
When event 10212 is set, cluster blocks are integrity-checked; cluster blocks that don't match the expected format are marked as soft corrupt.
When event 10225 is set, the fet$ and uset$ dictionary tables are integrity-checked; blocks that don't match the expected format are marked as soft corrupt.
Setting _db_block_cache_protect to TRUE protects the cache layer from becoming corrupted. It might crash the database instance, but the corruption isn't written to the disk.
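Putting the pieces together, a hedged init.ora fragment enabling these checks might look like this (the events and the underscore parameter impose overhead and should normally be set only while investigating corruption, ideally under the guidance of Oracle Support):

```
# enable block checksums (DBWR stores them in the block cache header)
db_block_checksum = TRUE
log_block_checksum = TRUE
# integrity-check data, index, and cluster blocks
event = "10210 trace name context forever, level 10"
event = "10211 trace name context forever, level 10"
event = "10212 trace name context forever, level 10"
```

Remember to remove the events again once the investigation is finished, since they slow down every block access.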
Salvaging Data from a Corrupt Oracle Database
An Oracle database can become corrupt for various reasons, such as these:
Operating system bugs
I/O or caching problems
Unsupported disk repair utilities running
A computer virus
Database corruption frequently results from problems with the hardware, so you should first resolve any hardware problems or reported operating system errors. When all the non-Oracle problems are resolved, you can embark on the adventure of recovering the data from the corrupt database. Follow these steps to recover a corrupt database:
Determine the extent of the damage. The information in the alert log, trace files, and the complete error message(s) reported by Oracle can provide enough information to determine the extent of database damage. Suppose that the error message indicates that the damage is done to file# (F) and block# (B).
Connect to Server Manager as internal.
Determine the file identified as corrupt:
Svrmgr> SELECT name
FROM v$datafile WHERE file# = F;
Determine the damaged object:
Svrmgr> SELECT owner, segment_name, segment_type
FROM dba_extents WHERE file_id = F AND B BETWEEN block_id AND block_id + blocks - 1;
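As a hypothetical worked example, suppose the ORA-1578 message reported file# 7 and block# 1234 (these values are illustrative only). The two lookups would then be:

```
SELECT name
FROM   v$datafile
WHERE  file# = 7;

SELECT owner, segment_name, segment_type
FROM   dba_extents
WHERE  file_id = 7
AND    1234 BETWEEN block_id AND block_id + blocks - 1;
```

The first query tells you which physical file is damaged; the second tells you which segment (table, index, cluster, or rollback segment) owns the bad block, which determines the recovery approach.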
Perform the recovery. The recovery approach depends on the damaged segment, as determined by the query in step 4:
For rollback segments or data dictionary corruption, consult Oracle support because this involves using several undocumented parameters that can potentially make the system unrecoverable.
For index segments, determine the table that the index belongs to, as follows:
Svrgmgr> SELECT table_owner, table_name
FROM dba_indexes WHERE index_name = 'segment_name';
For cluster segments, determine the table associated with the cluster, as follows:
Svrmgr> SELECT owner, table_name
FROM dba_tables WHERE cluster_name = 'segment_name';
For user tables, note the name of the table and its owner.
Make sure that the problem isn't intermittent by running the ANALYZE command on the segment at least twice. At the Server Manager prompt, use the following for a table:
Svrmgr> analyze table owner.tablename validate structure cascade;
Use this command for an index:
Svrmgr> analyze index owner.indexname validate structure;
If a particular hardware or controller is identified as bad, relocate the files to a good disk:
For a database in archivelog mode, move the corrupt datafile offline, and then restore it from backup onto a good disk. Recover the datafile, and then put it back online. The file can now be used.
For a database in noarchivelog mode, move the corrupt datafile offline, and then restore it from backup onto a good disk. Put the datafile back online; the file can now be used.
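For the archivelog case, the offline/restore/recover/online sequence might look like this in Server Manager. The file numbers and paths are hypothetical; restore the backup copy of the file onto the good disk with your operating system tools before recovering:

```
ALTER DATABASE DATAFILE '/u02/oradata/PROD/users01.dbf' OFFLINE;
-- restore the backup copy onto the good disk here, then point the
-- control file at the new location:
ALTER DATABASE RENAME FILE '/u02/oradata/PROD/users01.dbf'
                        TO '/u03/oradata/PROD/users01.dbf';
RECOVER DATAFILE '/u03/oradata/PROD/users01.dbf';
ALTER DATABASE DATAFILE '/u03/oradata/PROD/users01.dbf' ONLINE;
```

For the noarchivelog case the RECOVER step is skipped, which is why the restored backup must be consistent with the rest of the database.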
Perform the analysis on the object again to make sure that it's no longer corrupt.
At this point, data from the damaged blocks can be salvaged by using several techniques:
You can perform media recovery to recover the database to a state before the corruption.
You can drop and re-create the object (table or index) by using a valid export.
If you know the file number and the block number indicating the corruption, you can salvage the data in the corrupt table by selecting around the bad blocks.
Set event 10231 in the init.ora file to cause Oracle to skip software- and media-corrupted blocks when performing full table scans:
Event="10231 trace name context forever, level 10"
Set event 10233 in the init.ora file to cause Oracle to skip software- and media-corrupted blocks when performing index range scans:
Event="10233 trace name context forever, level 10"
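Once event 10231 is set and the instance restarted, full table scans skip the corrupt blocks, so the surviving rows can be copied out to a new table. The table and schema names below are hypothetical:

```
-- copy whatever can still be read out of the damaged table
CREATE TABLE scott.emp_salvage AS SELECT * FROM scott.emp;
-- after verifying the salvaged copy, drop the damaged table and
-- rename the copy into place (or rebuild the table from an export)
```

Rows that lived in the skipped blocks are lost, so compare the row counts of the two tables to gauge how much data was in the corrupt blocks.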
Oracle Support Services has access to several tools, such as Data Unloader (DUL) and BBED (Block Editor), that you can use to extract data from bad blocks. These tools are expensive, though, and there's no guarantee that all the data can be salvaged.
[Edited by padmam on 05-05-2001 at 05:19 AM]
Attitude: Attack every problem with enthusiasm ... as if your survival depends upon it
I did the test on 8.1.7, and the index stays 'UNUSABLE'.
After a rebuild, the index is valid.
Why would Oracle do that?
If you move the table, your records are given new ROWIDs (a new table segment is created).
If you rebuild an index, the index is rebuilt using the information in the index.
If we can rebuild the index, that would mean the ROWIDs in the index get updated with the table's new ROWIDs during the move. Otherwise we would have to re-create the index, and rebuilding would not be possible.