Recovery of 0 KB Data Files or Lost Data Files

A friend asked for recovery help: frequent Rose HA switchovers had left some data files on the file system truncated to 0 KB and others missing entirely.
Symptoms
Some data files have become 0 KB and some data files are missing.
file_lost
file_size_0


The picture is fairly clear: the database's USERS03 data file has become 0 KB and USERS04 is missing. The database alert log reports the following errors:

Completed: alter database mount exclusive
alter database open
Errors in file E:\APP\ADMINISTRATOR\diag\rdbms\orcl\orcl\trace\orcl_dbw0_12008.trc:
ORA-01157: ????/?????? 7 - ??? DBWR ????
ORA-01110: ???? 7: 'E:\APP\ADMINISTRATOR\ORADATA\ORCL\USERS03.DBF'
ORA-27047: ??????????
OSD-04006: ReadFile() 失败, 无法读取文件
O/S-Error: (OS 38) 已到文件结尾。
Errors in file E:\APP\ADMINISTRATOR\diag\rdbms\orcl\orcl\trace\orcl_dbw0_12008.trc:
ORA-01157: ????/?????? 8 - ??? DBWR ????
ORA-01110: ???? 8: 'E:\APP\ADMINISTRATOR\ORADATA\ORCL\USERS04.DBF'
ORA-27041: ??????
OSD-04002: 无法打开文件
O/S-Error: (OS 2) 系统找不到指定的文件。
Errors in file E:\APP\ADMINISTRATOR\diag\rdbms\orcl\orcl\trace\orcl_ora_12040.trc:
ORA-01157: ????/?????? 7 - ??? DBWR ????
ORA-01110: ???? 7: 'E:\APP\ADMINISTRATOR\ORADATA\ORCL\USERS03.DBF'
ORA-1157 signalled during: alter database open...
Fri May 04 09:35:10 2018
Checker run found 2 new persistent data failures

The alert log errors are equally clear: for USERS03 the read runs past the end of the file (the file is now 0 KB, so any read exceeds its size), while USERS04 cannot be opened at all (the file no longer exists at the file-system level). The problem is evidently a file-system fault that truncated one data file to 0 bytes and lost the other.

Recovery by low-level fragment scanning
Conventional methods obviously cannot recover from this; the only realistic approach is a low-level fragment scan and reassembly. After combining several scanning tools, a tool written by a friend who specializes in low-level recovery turned out to work best. The scan result is shown below:
file_scan


Checking the recovered files for corrupt blocks with dbv

C:\Users\Administrator>dbv FiLe=D:\0504\ORCL_TS.4_FILE.7_10.ora

DBVERIFY: Release 11.2.0.4.0 - Production on 星期六 5月 5 08:52:53 2018

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

DBVERIFY - 开始验证: FILE = D:\0504\ORCL_TS.4_FILE.7_10.ora

………………

页 382565 标记为损坏
Corrupt block relative dba: 0x01c5d665 (file 7, block 382565)
Completely zero block found during dbv:

页 382566 标记为损坏
Corrupt block relative dba: 0x01c5d666 (file 7, block 382566)
Completely zero block found during dbv:

页 382567 标记为损坏
Corrupt block relative dba: 0x01c5d667 (file 7, block 382567)
Completely zero block found during dbv:



DBVERIFY - 验证完成

检查的页总数: 1374720
处理的页总数 (数据): 27582
失败的页总数 (数据): 0
处理的页总数 (索引): 20114
失败的页总数 (索引): 0
处理的页总数 (其他): 1319752
处理的总页数 (段)  : 0
失败的总页数 (段)  : 0
空的页总数: 1
标记为损坏的总页数: 7271
流入的页总数: 0
加密的总页数        : 0
最高块 SCN            : 228271996 (0.228271996)


C:\Users\Administrator>dbv FiLe=D:\0504\ORCL_TS.4_FILE.8_8.ora

DBVERIFY: Release 11.2.0.4.0 - Production on 星期六 5月 5 08:52:53 2018

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

DBVERIFY - 开始验证: FILE = D:\0504\ORCL_TS.4_FILE.8_8.ora


DBVERIFY - 验证完成

检查的页总数: 1136896
处理的页总数 (数据): 36639
失败的页总数 (数据): 0
处理的页总数 (索引): 57038
失败的页总数 (索引): 0
处理的页总数 (其他): 1043218
处理的总页数 (段)  : 0
失败的总页数 (段)  : 0
空的页总数: 1
标记为损坏的总页数: 0
流入的页总数: 0
加密的总页数        : 0
最高块 SCN            : 228271997 (0.228271997)

C:\Users\Administrator>

scan_resulte


The two recovered files contain 2,511,618 blocks in total, of which 7,271 consecutive blocks are corrupt. After the failure the database took the two files offline and kept running for several hours, so a small number of blocks were overwritten; the recovery tool simply zeroed those out. The rest of the recovery went smoothly: the database opened normally and the objects containing the corrupt blocks were then dealt with (luckily they were LOB columns of non-critical tables, so the partial loss had limited impact).
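Once the database is open, the corrupt ranges reported by dbv can be mapped back to their owning segments to judge the impact. A minimal sketch, using file 7 / block 382565 from the dbv output above purely as an example:

# map a corrupt block back to its owning segment (adjust file_id/block as needed)
sqlplus -s / as sysdba <<'EOF'
set lines 200 pages 100
select owner, segment_name, segment_type, partition_name
  from dba_extents
 where file_id = 7
   and 382565 between block_id and block_id + blocks - 1;
EOF

Repeating the query (or joining against the full list of corrupt blocks) shows whether the damage is confined to non-critical segments, as it was here.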

Friendly reminders:
1. Do not keep data files and their backups on the same storage array, let alone on the same partition (volume).
2. When this kind of problem occurs, stop all writes to the affected partition immediately, to prevent the lost or 0 KB files from being overwritten.

ORA-00704 / ORA-00702 Recovery

The database fails to start, reporting ORA-01092, ORA-00704 and ORA-00702:

SQL> startup
ORACLE instance started.

Total System Global Area 3056513024 bytes
Fixed Size                  2257152 bytes
Variable Size             704646912 bytes
Database Buffers         2332033024 bytes
Redo Buffers               17575936 bytes
Database mounted.
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-00704: bootstrap process failure
ORA-00702: bootstrap verison '' inconsistent with version '8.0.0.0.0'
Process ID: 27344
Session ID: 191 Serial number: 3

bootstrap-ORA-00702


Checking the alert log

Mon Apr 09 16:22:34 2018
ALTER DATABASE   MOUNT
Successful mount of redo thread 1, with mount id 1383493834
Database mounted in Exclusive Mode
Lost write protection disabled
Completed: ALTER DATABASE   MOUNT
Mon Apr 09 16:22:39 2018
ALTER DATABASE OPEN
Thread 1 opened at log sequence 3
  Current log# 3 seq# 3 mem# 0: /u01/app/oracle/oradata/orcl/redo03.log
Successful open of redo thread 1
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
SMON: enabling cache recovery
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_27344.trc:
ORA-00704: bootstrap process failure
ORA-00702: bootstrap verison '' inconsistent with version '8.0.0.0.0'
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_27344.trc:
ORA-00704: bootstrap process failure
ORA-00702: bootstrap verison '' inconsistent with version '8.0.0.0.0'
Error 704 happened during db open, shutting down database
USER (ospid: 27344): terminating the instance due to error 704
Instance terminated by USER, pid = 27344
ORA-1092 signalled during: ALTER DATABASE OPEN...
opiodr aborting process unknown ospid (27344) as a result of ORA-1092
Mon Apr 09 16:22:40 2018
ORA-1092 : opitsk aborting process

The error clearly comes from the bootstrap phase of database open. It is usually caused either by a mismatch between the Oracle software installation and the database, or by corruption of the blocks that hold the bootstrap tables.

Official description

Versions 9.2, 10.1, 10.2, 11.1, 11.2, 12.1

Error:  ORA-00702 bootstrap verison '%s' inconsistent with version '%s' 
---------------------------------------------------------------------------
Cause:  The reading version of the boostrap is incompatible with the current 
	bootstrap version. 
Action: Restore a version of the software that is compatible with this 
	bootstrap version
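Because the cause is either mismatched software or a damaged bootstrap block, two quick checks narrow it down. A minimal sketch, assuming ORACLE_HOME/ORACLE_SID are already set; the SYSTEM datafile path is illustrative, not taken from this case, and dbv does not need the database to be open:

# 1. what do the installed binaries report? (should match the database's version)
$ORACLE_HOME/bin/sqlplus -V

# 2. physically verify the SYSTEM datafile, which holds bootstrap$ and the
#    bootstrapped dictionary clusters
dbv file=/u01/app/oracle/oradata/orcl/system01.dbf blocksize=8192

# 3. on a healthy instance of the same version, this is what bootstrap$ should contain
sqlplus -s / as sysdba <<'EOF'
select line#, obj# from bootstrap$ order by line#;
EOF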

Because bootstrap$ and the objects it references need very special handling, if you hit this kind of bootstrap$-related failure, cannot resolve it and need recovery support, please contact us:
Phone: 13429648788    QQ: 107644445    E-Mail: dba@xifenfei.com

ORA-00600: internal error code, arguments: [16703], [1403], [20]

Continuing the case from the previous post in which tab$ was wiped out (ORA-600 16703 fault analysis — the tab$ table was emptied), leaving the database unable to start.
Alert log analysis
The database opened successfully but, at the same moment, reported ORA-7445 [kokeicbegin] and ORA-600 [kzrini:!uprofile] errors.
ora-600-kzrini-uprofile


Starting the database again fails directly with ORA-600 [16703]:
ora-600-16703

One further observation: the database starts throwing errors in the very same second that it opens successfully. After a restart it fails directly with:
ORA-00704: bootstrap process failure
ORA-00704: bootstrap process failure
ORA-00600: internal error code, arguments: [16703], [1403], [20], [], [], [], [], [], [], [], [], []
The 10046 trace shows the following:

=====================
select rowcnt,blkcnt,empcnt,avgspc,chncnt,avgrln,nvl(degree,1), nvl(instances,1) from tab$ where obj# = :1
END OF STMT
PARSE #140048443935120:c=0,e=390,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,plh=0,tim=1499185905161433
=====================
select blevel, leafcnt, distkey, lblkkey, dblkkey, clufac,        nvl(degree,1), nvl(instances,1) from ind$ where bo# = :1 and obj# = :2
END OF STMT
PARSE #140048443934176:c=1000,e=601,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,plh=0,tim=1499185905162088
=====================
PARSING IN CURSOR #140048443933232 len=70 dep=1 uid=0 oct=3 lid=0 tim=1499185905162444 hv=3377894161 ad='84f13d70' sqlid='32d4jrb4pd4sj'
select charsetid, charsetform from col$  where obj# = :1 and col# = :2
END OF STMT
PARSE #140048443933232:c=0,e=294,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,plh=0,tim=1499185905162443
=====================
PARSING IN CURSOR #140048443932288 len=52 dep=1 uid=0 oct=3 lid=0 tim=1499185905247020 hv=429618617 ad='84f0f2d0' sqlid='4krwuz0ctqxdt'
select ctime, mtime, stime from obj$ where obj# = :1
END OF STMT
PARSE #140048443932288:c=0,e=549,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,plh=0,tim=1499185905247019
BINDS #140048443932288:
 Bind#0
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=00 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=7f5f91b87bd0  bln=22  avl=02  flg=05
  value=20
EXEC #140048443932288:c=2000,e=2686,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,plh=1218588913,tim=1499185905249810
WAIT #140048443932288: nam='db file sequential read' ela= 6205 file#=1 block#=337 blocks=1 obj#=36 tim=1499185905256132
WAIT #140048443932288: nam='db file sequential read' ela= 3739 file#=1 block#=338 blocks=1 obj#=36 tim=1499185905266704
WAIT #140048443932288: nam='db file sequential read' ela= 4966 file#=1 block#=241 blocks=1 obj#=18 tim=1499185905271761
FETCH #140048443932288:c=0,e=21976,p=3,cr=3,cu=0,mis=0,r=1,dep=1,og=4,plh=1218588913,tim=1499185905271820
STAT #140048443932288 id=1 cnt=1 pid=0 pos=1 obj=18 op='TABLE ACCESS BY INDEX ROWID OBJ$ (cr=3 pr=3 pw=0 time=21993 us)'
STAT #140048443932288 id=2 cnt=1 pid=1 pos=1 obj=36 op='INDEX RANGE SCAN I_OBJ1 (cr=2 pr=2 pw=0 time=16923 us)'
CLOSE #140048443932288:c=0,e=63,dep=1,type=0,tim=1499185905271941
BINDS #140048443935120:
 Bind#0
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=08 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=7f5f91c07de8  bln=22  avl=02  flg=05
  value=20
EXEC #140048443935120:c=1000,e=795,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,plh=2970138452,tim=1499185905272802
WAIT #140048443935120: nam='db file sequential read' ela= 3197 file#=1 block#=169 blocks=1 obj#=3 tim=1499185905276069
WAIT #140048443935120: nam='db file sequential read' ela= 3459 file#=1 block#=170 blocks=1 obj#=3 tim=1499185905404084
WAIT #140048443935120: nam='db file sequential read' ela= 6358 file#=1 block#=145 blocks=1 obj#=4 tim=1499185905410548
FETCH #140048443935120:c=999,e=137805,p=3,cr=3,cu=0,mis=0,r=0,dep=1,og=4,plh=2970138452,tim=1499185905410635
STAT #140048443935120 id=1 cnt=0 pid=0 pos=1 obj=4 op='TABLE ACCESS CLUSTER TAB$ (cr=3 pr=3 pw=0 time=137810 us)'
STAT #140048443935120 id=2 cnt=1 pid=1 pos=1 obj=3 op='INDEX UNIQUE SCAN I_OBJ# (cr=2 pr=2 pw=0 time=131330 us)'
 
*** 2017-07-05 00:31:46.094
Incident 176395 created, dump file: /oracle/diag/rdbms/orcl/orcl2/incident/incdir_176395/orcl_ora_51261_i176395.trc
ORA-00600: internal error code, arguments: [16703], [1403], [20], [], [], [], [], [], [], [], [], []

Together with past recovery experience and MOS, we can be fairly confident the problem is a mismatch between tab$ and obj$: the row for obj#=20 exists in obj$ but has no matching row in tab$.
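On an instance that still opens (or against rows unloaded with a tool such as DUL), the same mismatch can be demonstrated with a simple dictionary cross-check. A minimal sketch; obj#=20 is the object flagged by the trace above:

sqlplus -s / as sysdba <<'EOF'
-- table-type rows in obj$ that have no matching row in tab$
-- (on a healthy database this should return no rows)
select o.obj#, o.owner#, o.name
  from sys.obj$ o
 where o.type# = 2          -- 2 = TABLE
   and not exists (select 1 from sys.tab$ t where t.obj# = o.obj#);

-- the specific object the 10046 trace was probing
select obj#, name, type# from sys.obj$ where obj# = 20;
EOF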

Examining the tab$ and obj$ rows

Data UnLoader: 11.2.0.1.5 - Internal Only - on Wed Jul 05 01:28:53 2017
with 64-bit io functions and the decompression option
 
Copyright (c) 1994 2017 Bernard van Duijnen All rights reserved.
 
 Strictly Oracle Internal Use Only
 
 
Found db_id = 1334610369
Found db_name = orcl
DUL> unload table TAB$( OBJ# number, DATAOBJ# number,
  2      TS# number, FILE# number, BLOCK# number,
  3      BOBJ# number, TAB# number, COLS number, CLUCOLS number,
  4      PCTFREE$ ignore, PCTUSED$ ignore, INITRANS ignore, MAXTRANS ignore,
  5      FLAGS ignore, AUDIT$ ignore, ROWCNT ignore, BLKCNT ignore,
  6      EMPCNT ignore, AVGSPC ignore, CHNCNT ignore, AVGRLN ignore,
  7      AVGSPC_FLB ignore, FLBCNT ignore,
  8      ANALYZETIME ignore, SAMPLESIZE ignore,
  9      DEGREE ignore, INSTANCES ignore,
 10      INTCOLS ignore, KERNELCOLS number, PROPERTY number)
 11      cluster  C_OBJ#(OBJ#)
 12      storage ( tablespace 0 segobjno 2 tabno 1 file 1 block 144);
. unloading table                      TAB$       0 rows unloaded
DUL> unload table OBJ$( OBJ# number, DATAOBJ# number, OWNER# number,
  2      NAME clean varchar2(30), NAMESPACE ignore, SUBNAME clean varchar2(30),
  3      TYPE# number, CTIME ignore, MTIME ignore, STIME ignore,
  4      STATUS ignore, REMOTEOWNER ignore, LINKNAME ignore,
  5      FLAGS ignore, OID$ hexraw)
  6      storage ( tablespace 0 segobjno 18 file 1 block 240);
. unloading table                      OBJ$   89804 rows unloaded
DUL>

This confirms that tab$ contains no rows while obj$ still does, which fully explains the ORA-600 [16703] raised during startup. Since the errors appear almost simultaneously with the successful open, and tab$ is now empty, the initial suspicion is that a startup trigger is deleting the rows from tab$.
Analyzing the trigger table trigger$ with the tool
startup-trigger
Sure enough, there is an AFTER STARTUP ON DATABASE trigger named DBMS_SUPPORT_DBMONITOR, and it calls a procedure named DBMS_SUPPORT_DBMONITORP.
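On any database you suspect of carrying a similar payload, startup-event system triggers can be listed directly while it is still open. A minimal sketch:

sqlplus -s / as sysdba <<'EOF'
set lines 200 pages 100
col owner for a15
col trigger_name for a30
col triggering_event for a30
select owner, trigger_name, triggering_event, status
  from dba_triggers
 where triggering_event like '%STARTUP%';
EOF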

Analyzing the PL/SQL source table source$ with the tool
DBMS_SUPPORT_DBMONITOR


Unwrapping the wrapped (obfuscated) PL/SQL code
DBMS_SUPPORT_DBMONITOR-unwraped

Here it is plain to see that the DBMS_SUPPORT_DBMONITORP procedure copies tab$ into a table named orachk and then deletes from tab$. So the mechanism is now clear: after every restart the DBMS_SUPPORT_DBMONITOR trigger fires and calls DBMS_SUPPORT_DBMONITORP, and if the database creation time is 300 or more days in the past it wipes tab$, effectively destroying the database.

Tracking down the origin of DBMS_SUPPORT_DBMONITOR and the related objects
Check when the objects were created, using the ctime and name columns of obj$.
DBMS_SUPPORT_DBMONITOR-ctime
bootstrap-ctime

The creation times of DBMS_SUPPORT_DBMONITOR and DBMS_SUPPORT_DBMONITORP are almost identical to those of the core dictionary objects, so we can largely rule out the scenario in which a client tool injected the code later (similar to: the PL/SQL Developer attack that held databases to ransom for bitcoin). It is far more likely that DBMS_SUPPORT_DBMONITOR and friends were created while dbca was building the database, which would mean the installation media itself had been tampered with.
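The same creation-time comparison can be made with one query on an open database. A minimal sketch that puts the suspicious objects next to a well-known dictionary object (DUAL, used here only as a baseline):

sqlplus -s / as sysdba <<'EOF'
alter session set nls_date_format='YYYY-MM-DD HH24:MI:SS';
select owner, object_name, object_type, created
  from dba_objects
 where object_name in ('DBMS_SUPPORT_DBMONITOR','DBMS_SUPPORT_DBMONITORP','DUAL')
 order by created;
EOF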

Analyzing where DBMS_SUPPORT_DBMONITOR came from
prvtsupp
20170711001626

The picture is now very clear: malicious code had been injected into prvtsupp.plb, so database creation also created the DBMS_SUPPORT_DBMONITOR trigger and the DBMS_SUPPORT_DBMONITORP procedure, giving the attacker an easy way to sabotage the database.
md5

Comparing MD5 checksums confirms that someone tampered with the installation media to inject the malicious code, so that once a database built from it is more than 300 days old, the next restart fails with errors like ORA-00600: internal error code, arguments: [16703], [1403], [20], [], [], [], [], [], [], [], [], []. Always download Oracle installation media through official channels; if you obtain it anywhere else on the internet, verify the MD5 checksums to make sure the files have not been maliciously modified, otherwise the damage can be irreversible. If you are unlucky enough to run into this kind of problem, preserve the scene and contact us:
Tel: 13429648788    QQ: 107644445    E-Mail: dba@xifenfei.com
We can usually achieve zero loss of business data and salvage as much of your data as possible.
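Verifying the media before use costs almost nothing. A minimal sketch, assuming the checksum published on the official download page is at hand (file names below are illustrative):

# Linux: compute and compare
md5sum p13390677_112040_Linux-x86-64_1of7.zip
echo "<official_md5_from_download_page>  p13390677_112040_Linux-x86-64_1of7.zip" | md5sum -c -

# Windows: certutil ships with the OS
certutil -hashfile p13390677_112040_MSWIN-x86-64_1of7.zip MD5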

ASM Disk Partition Table Corruption Recovery

A friend reported that after they set up active-active replication on their xx storage and rebooted the hosts, GI would no longer start. Analysis showed that the partition information of every disk on that storage had been lost, so ASMLib could not discover the disks (partitions were being used as ASM disks).
The errors looked like the following (disk partitions missing):

-- fdisk -l (partial output)
Disk /dev/mapper/datahds1: 1099.5 GB, 1099511627776 bytes
255 heads, 63 sectors/track, 133674 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

-- ls -l /dev/mapper/   output shows no partition information
lrwxrwxrwx 1 root root      7 May  6 03:44 datahds1 -> ../dm-1
lrwxrwxrwx 1 root root      7 May  6 03:26 datahds2 -> ../dm-3
lrwxrwxrwx 1 root root      7 May  6 03:26 datahds3 -> ../dm-8
lrwxrwxrwx 1 root root      7 May  6 03:26 ocrhds1 -> ../dm-0
lrwxrwxrwx 1 root root      7 May  6 03:26 ocrhds2 -> ../dm-2
lrwxrwxrwx 1 root root      7 May  6 03:26 ocrhds3 -> ../dm-4

The ASM alert log shows:

SUCCESS: diskgroup DATADG was mounted
NOTE: Instance updated compatible.asm to 11.2.0.0.0 for grp 3
SUCCESS: diskgroup OCRHDS was mounted
ORA-15032: not all alterations performed
ORA-15017: diskgroup "DATA" cannot be mounted
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "DATA"

System log analysis

May  6 02:23:27 db2 kernel: sdb: unknown partition table
May  6 02:23:27 db2 kernel: sde: unknown partition table
May  6 02:23:27 db2 kernel: sdc: unknown partition table
May  6 02:23:27 db2 kernel: sdf: unknown partition table
May  6 02:23:27 db2 kernel: sdd: unknown partition table
May  6 02:23:27 db2 kernel: sdj:Dev sdj: unable to read RDB block 0
May  6 02:23:27 db2 kernel: unable to read partition table
May  6 02:23:27 db2 kernel: sdi: sdi1
May  6 02:23:27 db2 kernel: sdk: sdk1
May  6 02:23:27 db2 kernel: sdg: unknown partition table
May  6 02:23:27 db2 kernel: sdl: sdl1
May  6 02:23:27 db2 kernel: sdm:Dev sdm: unable to read RDB block 0
May  6 02:23:27 db2 kernel: unable to read partition table
May  6 02:23:27 db2 kernel: sdo:Dev sdo: unable to read RDB block 0
May  6 02:23:27 db2 kernel: unable to read partition table
May  6 02:23:27 db2 kernel: sdn:Dev sdn: unable to read RDB block 0
May  6 02:23:27 db2 kernel: unable to read partition table
May  6 02:23:27 db2 kernel: sdp:Dev sdp: unable to read RDB block 0
May  6 02:23:27 db2 kernel: unable to read partition table
May  6 02:23:27 db2 kernel: sds:Dev sds: unable to read RDB block 0
May  6 02:23:27 db2 kernel: unable to read partition table
May  6 02:23:27 db2 kernel: sdh:
May  6 02:23:27 db2 kernel: sdt: sdt1
May  6 02:23:27 db2 kernel: sdv:Dev sdv: unable to read RDB block 0
May  6 02:23:27 db2 kernel: unable to read partition table
May  6 02:23:27 db2 kernel: sdq:Dev sdq: unable to read RDB block 0
May  6 02:23:27 db2 kernel: unable to read partition table
May  6 02:23:27 db2 kernel: sd 1:0:1:9: [sdr] Very big device. Trying to use READ CAPACITY(16).
May  6 02:23:27 db2 kernel: sdr:Dev sdr: unable to read RDB block 0
May  6 02:23:27 db2 kernel: unable to read partition table
May  6 02:23:27 db2 kernel: sd 2:0:0:9: [sdab] Very big device. Trying to use READ CAPACITY(16).
May  6 02:23:27 db2 kernel: sdab: unknown partition table
May  6 02:23:27 db2 kernel: sdac: unknown partition table
May  6 02:23:27 db2 kernel: sdw: sdw1
May  6 02:23:27 db2 kernel: sdu:Dev sdu: unable to read RDB block 0
May  6 02:23:27 db2 kernel: unable to read partition table
May  6 02:23:27 db2 kernel: sdx: sdx1
May  6 02:23:27 db2 kernel: sdy: sdy1
May  6 02:23:27 db2 kernel: sdaa: sdaa1
May  6 02:23:27 db2 kernel: sdz: sdz1
May  6 02:23:27 db2 kernel: sdae: unknown partition table
May  6 02:23:27 db2 kernel: sdaf: unknown partition table
May  6 02:23:27 db2 kernel: sdag: unknown partition table
May  6 02:23:27 db2 kernel: sdai:
May  6 02:23:27 db2 kernel: sdah: unknown partition table
May  6 02:23:27 db2 kernel: sdad: unknown partition table
May  6 02:23:28 db2 mcelog: failed to prefill DIMM database from DMI data

The errors are clear enough: "unknown partition table" means the partition tables on the disks are damaged, and fdisk can no longer see any partitions.

partprobe does not help either

[root@db2 oracle]# partprobe /dev/mapper/ocrhds3
[root@db2 oracle]# 
[root@db2 oracle]# ls -l /dev/mapper/ocrhds3*
lrwxrwxrwx 1 root root 7 May  6 07:30 /dev/mapper/ocrhds3 -> ../dm-4

From the information above, the partition tables are clearly damaged. All we can do now is hope for some luck: that the actual data inside the partitions has not been damaged.

Examining the data where the partitions used to start

[root@db2 ~]$ dd if=/dev/mapper/datahds1 of=/tmp/datahds1.dd bs=1024k count=50
[root@db2 ~]$ dd if=/tmp/datahds1.dd of=/tmp/xff01.dd  bs=3225 skip=1
[grid@db2 ~]$ kfed read /tmp/xff01.dd |more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                  3110278718 ; 0x00c: 0xb963163e
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr: ORCLDISKHDSDATA1 ; 0x000: length=16
kfdhdb.driver.reserved[0]:   1146307656 ; 0x008: 0x44534448
kfdhdb.driver.reserved[1]:    826364993 ; 0x00c: 0x31415441
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                186646528 ; 0x020: 0x0b200000
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        1 ; 0x026: KFDGTP_EXTERNAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:             DATADG_0000 ; 0x028: length=11
kfdhdb.grpname:                  DATADG ; 0x048: length=6
kfdhdb.fgname:              DATADG_0000 ; 0x068: length=11
kfdhdb.capname:                         ; 0x088: length=0
kfdhdb.crestmp.hi:             33050696 ; 0x0a8: HOUR=0x8 DAYS=0x2 MNTH=0x4 YEAR=0x7e1
kfdhdb.crestmp.lo:           3813740544 ; 0x0ac: USEC=0x0 MSEC=0x44 SECS=0x35 MINS=0x38
kfdhdb.mntstmp.hi:             33050701 ; 0x0b0: HOUR=0xd DAYS=0x2 MNTH=0x4 YEAR=0x7e1
kfdhdb.mntstmp.lo:            411385856 ; 0x0b4: USEC=0x0 MSEC=0x150 SECS=0x8 MINS=0x6

From this we can make a preliminary judgement that the data in the partitions is most likely intact: the ASM disk header is good, and since overwrites normally proceed from the start of a device onwards, the probability that later blocks were damaged while the header survived is very small.
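Before touching the real devices, a little more of the ASM metadata near the start of the disk can be checked from the same dd image. A minimal sketch; the block numbers are the usual locations of the first metadata blocks, not values taken from this system:

# block 0 (the disk header) was already verified above; check the next blocks
kfed read /tmp/xff01.dd blkn=1 | grep kfbh.type    # expect KFBTYP_FREESPC
kfed read /tmp/xff01.dd blkn=2 | grep kfbh.type    # expect KFBTYP_ALLOCTBL
# sensible kfbh.type values in these positions suggest that only the partition
# table, and not the ASM data itself, was lost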

Prepare new disks and dd the partition contents straight onto the new devices

dd if=/dev/mapper/ocrhds1 of=/dev/mapper/ocrhdsnew1 skip=1 bs=3225
dd if=/dev/mapper/ocrhds2 of=/dev/mapper/ocrhdsnew2 skip=1 bs=3225
dd if=/dev/mapper/ocrhds3 of=/dev/mapper/ocrhdsnew3 skip=1 bs=3225
dd if=/dev/mapper/datahds1 of=/dev/mapper/datahdsnew1 skip=1 bs=3225
dd if=/dev/mapper/datahds2 of=/dev/mapper/datahdsnew2 skip=1 bs=3225
dd if=/dev/mapper/datahds3 of=/dev/mapper/datahdsnew3 skip=1 bs=3225

Rescan the disks with ASMLib

[root@db1 disks]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "HDSOCR3"
Instantiating disk "HDSDATA2"
Instantiating disk "HDSDATA1"
Instantiating disk "HDSDATA3"
Instantiating disk "HDSOCR1"
Instantiating disk "HDSOCR2"
[root@db1 disks]# ls -ltr
total 0
brw-rw---- 1 grid asmadmin  8, 160 May  6 13:49 HDSOCR3
brw-rw---- 1 grid asmadmin  8, 192 May  6 13:49 HDSDATA2
brw-rw---- 1 grid asmadmin  8, 176 May  6 13:49 HDSDATA1
brw-rw---- 1 grid asmadmin  8, 208 May  6 13:49 HDSDATA3
brw-rw---- 1 grid asmadmin  8, 128 May  6 13:49 HDSOCR1
brw-rw---- 1 grid asmadmin  8, 144 May  6 13:49 HDSOCR2

Verify the copied disks with kfed

[root@db2 tmp]# /oracle/app/11.2.0/grid_1/bin/kfed read /dev/oracleasm/disks/HDSDATA1
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                  3110278718 ; 0x00c: 0xb963163e
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr: ORCLDISKHDSDATA1 ; 0x000: length=16
kfdhdb.driver.reserved[0]:   1146307656 ; 0x008: 0x44534448
kfdhdb.driver.reserved[1]:    826364993 ; 0x00c: 0x31415441
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                186646528 ; 0x020: 0x0b200000
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        1 ; 0x026: KFDGTP_EXTERNAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:             DATADG_0000 ; 0x028: length=11
kfdhdb.grpname:                  DATADG ; 0x048: length=6
kfdhdb.fgname:              DATADG_0000 ; 0x068: length=11
kfdhdb.capname:                         ; 0x088: length=0

ASM and the database start normally

[grid@db2 ~]$ asmcmd
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576   3145710  2378034                0         2378034              0             N  DATADG/
MOUNTED  NORMAL  N         512   4096  1048576     15342    14416             5114            4651              0             Y  OCRHDS/
ASMCMD> 

[oracle@db2 ~]$ sqlplus  / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sat May 6 13:54:21 2017

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 3.6077E+10 bytes
Fixed Size                  2260648 bytes
Variable Size            7247757656 bytes
Database Buffers         2.8723E+10 bytes
Redo Buffers              104382464 bytes
Database mounted.
Database opened.
SQL> 

asm-disk-partition-lost-recovery


With the recovery above, the lost ASM disk partitions were recovered with zero data loss.

Recovery from a Diskgroup Corrupted by Adding an ASM Disk

A customer asked us to step in and recover their data. Roughly what happened: in September 2016 a hardware problem caused one disk of a normal-redundancy diskgroup (which had only two disks) to be lost; on 6 March 2017 the operations team tried to add that disk back into the diskgroup, and after the add finally succeeded with the FORCE keyword, the diskgroup dismounted and could never be mounted again.
Diskgroup creation information

Fri Jun 24 19:31:38 2016
NOTE: Instance updated compatible.asm to 11.2.0.0.0 for grp 2
SUCCESS: diskgroup DATADG was mounted
SUCCESS: CREATE DISKGROUP DATADG NORMAL REDUNDANCY  DISK '/dev/asm-diskdata01' SIZE 1048576M ,
'/dev/asm-diskdata02' SIZE 1048576M  ATTRIBUTE 'compatible.asm'='11.2.0.0.0','au_size'='4M' /* ASMCA */

This shows that DATADG is a normal-redundancy diskgroup with a 4 MB AU size.
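The same attributes can be confirmed at any time from a running ASM instance. A minimal sketch:

sqlplus -s / as sysasm <<'EOF'
select name, type, allocation_unit_size from v$asm_diskgroup where name = 'DATADG';
select name, value
  from v$asm_attribute
 where group_number = (select group_number from v$asm_diskgroup where name = 'DATADG')
   and name in ('au_size','compatible.asm','compatible.rdbms');
EOF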

The failed ASM disk is dropped automatically

Mon Sep 12 11:41:54 2016
WARNING: Waited 15 secs for write IO to PST disk 1 in group 1.
WARNING: Waited 15 secs for write IO to PST disk 1 in group 1.
Mon Sep 12 11:41:55 2016
NOTE: process _b000_+asm1 (19491) initiating offline of disk 1.3915923833 (DATADG_0001) with mask 0x7e in group 1
NOTE: checking PST: grp = 1
GMON checking disk modes for group 1 at 9 for pid 29, osid 19491
NOTE: group DATADG: updated PST location: disk 0000 (PST copy 0)
NOTE: checking PST for grp 1 done.
NOTE: sending set offline flag message 2870990318 to 1 disk(s) in group 1
WARNING: Disk DATADG_0001 in mode 0x7f is now being offlined
NOTE: initiating PST update: grp = 1, dsk = 1/0xe9684179, mask = 0x6a, op = clear
GMON updating disk modes for group 1 at 10 for pid 29, osid 19491
NOTE: group DATADG: updated PST location: disk 0000 (PST copy 0)
NOTE: group DATADG: updated PST location: disk 0000 (PST copy 0)
NOTE: PST update grp = 1 completed successfully 
NOTE: initiating PST update: grp = 1, dsk = 1/0xe9684179, mask = 0x7e, op = clear
GMON updating disk modes for group 1 at 11 for pid 29, osid 19491
NOTE: group DATADG: updated PST location: disk 0000 (PST copy 0)
NOTE: group DATADG: updated PST location: disk 0000 (PST copy 0)
NOTE: cache closing disk 1 of grp 1: DATADG_0001
NOTE: PST update grp = 1 completed successfully 
Mon Sep 12 11:42:55 2016
WARNING: Waited 15 secs for write IO to PST disk 0 in group 1.
Mon Sep 12 11:44:58 2016
WARNING: PST-initiated drop of 1 disk(s) in group 1(.1137226115))
SQL> alter diskgroup DATADG drop disk DATADG_0001 force /* ASM SERVER */ 
NOTE: GroupBlock outside rolling migration privileged region
NOTE: requesting all-instance membership refresh for group=1
Mon Sep 12 11:44:59 2016
GMON updating for reconfiguration, group 1 at 12 for pid 29, osid 19491
NOTE: cache closing disk 1 of grp 1: (not open) DATADG_0001
NOTE: group DATADG: updated PST location: disk 0000 (PST copy 0)
NOTE: group 1 PST updated.
Mon Sep 12 11:44:59 2016
NOTE: membership refresh pending for group 1/0x43c8b183 (DATADG)
Mon Sep 12 11:45:02 2016
NOTE: successfully read ACD block gn=1 blk=0 via retry read
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_lgwr_3526.trc:
ORA-15062: ASM disk is globally closed
GMON querying group 1 at 13 for pid 18, osid 3532
NOTE: cache closing disk 1 of grp 1: (not open) _DROPPED_0001_DATADG
SUCCESS: refreshed membership for 1/0x43c8b183 (DATADG)
SUCCESS: alter diskgroup DATADG drop disk DATADG_0001 force /* ASM SERVER */
NOTE: starting rebalance of group 1/0x43c8b183 (DATADG) at power 1
SUCCESS: PST-initiated drop disk in group 1(1137226115))
Starting background process ARB0
Mon Sep 12 11:45:03 2016
ARB0 started with pid=35, OS id=19945 
NOTE: assigning ARB0 to group 1/0x43c8b183 (DATADG) with 1 parallel I/O
cellip.ora not found.
NOTE: Rebalance has restored redundancy for any existing control file or redo log in disk group DATADG
NOTE: Attempting voting file refresh on diskgroup DATADG
NOTE: Refresh completed on diskgroup DATADG. No voting file found.
Mon Sep 12 11:46:21 2016
NOTE: GroupBlock outside rolling migration privileged region
NOTE: requesting all-instance membership refresh for group=1
Mon Sep 12 11:46:24 2016
GMON updating for reconfiguration, group 1 at 14 for pid 36, osid 20110
NOTE: cache closing disk 1 of grp 1: (not open) _DROPPED_0001_DATADG
NOTE: group 1 PST updated.
WARNING: offline disk number 1 has references (54679 AUs)
Mon Sep 12 11:46:24 2016
NOTE: membership refresh pending for group 1/0x43c8b183 (DATADG)
GMON querying group 1 at 15 for pid 18, osid 3532
NOTE: cache closing disk 1 of grp 1: (not open) _DROPPED_0001_DATADG
SUCCESS: refreshed membership for 1/0x43c8b183 (DATADG)
NOTE: Attempting voting file refresh on diskgroup DATADG
NOTE: Refresh completed on diskgroup DATADG. No voting file found.
NOTE: stopping process ARB0
SUCCESS: rebalance completed for group 1/0x43c8b183 (DATADG)

Here we can see that on 12 September 2016 disk 1 stopped responding to I/O and ASM simply dropped it from the diskgroup.
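Monitoring v$asm_disk makes this kind of silent eviction much easier to catch before the redundancy is gone. A minimal sketch of a query that shows each disk's state and any remaining repair timer:

sqlplus -s / as sysasm <<'EOF'
set lines 200 pages 100
col path for a30
select group_number, disk_number, name, path,
       mount_status, header_status, mode_status, state, repair_timer
  from v$asm_disk
 order by group_number, disk_number;
EOF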

Adding the force-dropped disk back into the diskgroup

Mon Mar 06 15:36:54 2017
SQL> alter diskgroup DATADG add disk '/dev/asm-diskdata01' name DATADG_0000 
NOTE: GroupBlock outside rolling migration privileged region
ORA-15032: not all alterations performed
ORA-15029: disk '/dev/asm-diskdata01' is already mounted by this instance
ERROR: alter diskgroup DATADG add disk '/dev/asm-diskdata01' name DATADG_0000
Mon Mar 06 15:38:27 2017
SQL>  alter diskgroup DATADG add disk '/dev/asm-diskdata02' name DATADG_0001 
NOTE: GroupBlock outside rolling migration privileged region
NOTE: Assigning number (1,2) to disk (/dev/asm-diskdata02)
NOTE: requesting all-instance membership refresh for group=1
NOTE: Disk DATADG_0001 in mode 0x7f marked for de-assignment
ERROR: ORA-15033 signalled during reconfiguration of diskgroup DATADG
Mon Mar 06 15:38:28 2017
NOTE: membership refresh pending for group 1/0x31584f6b (DATADG)
Mon Mar 06 15:38:31 2017
GMON querying group 1 at 7 for pid 18, osid 3468
NOTE: cache closing disk 1 of grp 1: (not open) _DROPPED_0001_DATADG
GMON querying group 1 at 8 for pid 18, osid 3468
NOTE: cache closing disk 1 of grp 1: (not open) _DROPPED_0001_DATADG
SUCCESS: refreshed membership for 1/0x31584f6b (DATADG)
ORA-15032: not all alterations performed
ORA-15033: disk '/dev/asm-diskdata02' belongs to diskgroup "DATADG"
ERROR:  alter diskgroup DATADG add disk '/dev/asm-diskdata02' name DATADG_0001
NOTE: Attempting voting file refresh on diskgroup DATADG
NOTE: Refresh completed on diskgroup DATADG. No voting file found.
Mon Mar 06 16:04:14 2017
SQL> alter diskgroup DATADG add disk '/dev/asm-diskdata02' name DATADG_0001 
NOTE: GroupBlock outside rolling migration privileged region
NOTE: Assigning number (1,2) to disk (/dev/asm-diskdata02)
NOTE: requesting all-instance membership refresh for group=1
NOTE: Disk DATADG_0001 in mode 0x7f marked for de-assignment
ERROR: ORA-15033 signalled during reconfiguration of diskgroup DATADG
Mon Mar 06 16:04:15 2017
NOTE: membership refresh pending for group 1/0x31584f6b (DATADG)
Mon Mar 06 16:04:18 2017
GMON querying group 1 at 9 for pid 18, osid 3468
NOTE: cache closing disk 1 of grp 1: (not open) _DROPPED_0001_DATADG
GMON querying group 1 at 10 for pid 18, osid 3468
NOTE: cache closing disk 1 of grp 1: (not open) _DROPPED_0001_DATADG
SUCCESS: refreshed membership for 1/0x31584f6b (DATADG)
ORA-15032: not all alterations performed
ORA-15033: disk '/dev/asm-diskdata02' belongs to diskgroup "DATADG"
ERROR: alter diskgroup DATADG add disk '/dev/asm-diskdata02' name DATADG_0001
NOTE: Attempting voting file refresh on diskgroup DATADG
NOTE: Refresh completed on diskgroup DATADG. No voting file found.
Mon Mar 06 16:23:28 2017
SQL> alter diskgroup DATADG add FAILGROUP DATA_0001 disk '/dev/adm-diskdata02' name DATA_0001 
NOTE: GroupBlock outside rolling migration privileged region
ORA-15032: not all alterations performed
ORA-15031: disk specification '/dev/adm-diskdata02' matches no disks
ERROR: alter diskgroup DATADG add FAILGROUP DATA_0001 disk '/dev/adm-diskdata02' name DATA_0001
Mon Mar 06 16:24:48 2017
SQL> alter diskgroup DATADG add FAILGROUP DATA_0001 disk '/dev/asm-diskdata02' name DATA_0001 
NOTE: GroupBlock outside rolling migration privileged region
NOTE: Assigning number (1,2) to disk (/dev/asm-diskdata02)
NOTE: requesting all-instance membership refresh for group=1
NOTE: Disk DATA_0001 in mode 0x7f marked for de-assignment
ERROR: ORA-15033 signalled during reconfiguration of diskgroup DATADG
Mon Mar 06 16:24:49 2017
NOTE: membership refresh pending for group 1/0x31584f6b (DATADG)
Mon Mar 06 16:24:52 2017
GMON querying group 1 at 11 for pid 18, osid 3468
NOTE: cache closing disk 1 of grp 1: (not open) _DROPPED_0001_DATADG
GMON querying group 1 at 12 for pid 18, osid 3468
NOTE: cache closing disk 1 of grp 1: (not open) _DROPPED_0001_DATADG
SUCCESS: refreshed membership for 1/0x31584f6b (DATADG)
ORA-15032: not all alterations performed
ORA-15033: disk '/dev/asm-diskdata02' belongs to diskgroup "DATADG"
ERROR: alter diskgroup DATADG add FAILGROUP DATA_0001 disk '/dev/asm-diskdata02' name DATA_0001
NOTE: Attempting voting file refresh on diskgroup DATADG
NOTE: Refresh completed on diskgroup DATADG. No voting file found.
Mon Mar 06 16:26:07 2017
SQL> alter diskgroup DATADG add FAILGROUP DATA_0001 disk '/dev/asm-diskdata02' name DATA_0001 force  
NOTE: GroupBlock outside rolling migration privileged region
NOTE: Assigning number (1,2) to disk (/dev/asm-diskdata02)
NOTE: requesting all-instance membership refresh for group=1
NOTE: initializing header on grp 1 disk DATA_0001
NOTE: requesting all-instance disk validation for group=1
Mon Mar 06 16:26:10 2017
NOTE: skipping rediscovery for group 1/0x31584f6b (DATADG) on local instance.
NOTE: requesting all-instance disk validation for group=1
NOTE: skipping rediscovery for group 1/0x31584f6b (DATADG) on local instance.
Mon Mar 06 16:26:15 2017
GMON updating for reconfiguration, group 1 at 13 for pid 28, osid 12861
NOTE: group 1 PST updated.
NOTE: initiating PST update: grp = 1
GMON updating group 1 at 14 for pid 28, osid 12861
NOTE: cache closing disk 1 of grp 1: (not open) _DROPPED_0001_DATADG
NOTE: group DATADG: updated PST location: disk 0000 (PST copy 0)
NOTE: group DATADG: updated PST location: disk 0002 (PST copy 1)
NOTE: PST update grp = 1 completed successfully 
NOTE: membership refresh pending for group 1/0x31584f6b (DATADG)
GMON querying group 1 at 15 for pid 18, osid 3468
NOTE: cache closing disk 1 of grp 1: (not open) _DROPPED_0001_DATADG
NOTE: cache opening disk 2 of grp 1: DATA_0001 path:/dev/asm-diskdata02
NOTE: Attempting voting file refresh on diskgroup DATADG
NOTE: Refresh completed on diskgroup DATADG. No voting file found.
GMON querying group 1 at 16 for pid 18, osid 3468
NOTE: cache closing disk 1 of grp 1: (not open) _DROPPED_0001_DATADG
SUCCESS: refreshed membership for 1/0x31584f6b (DATADG)
Mon Mar 06 16:26:19 2017
SUCCESS: alter diskgroup DATADG add FAILGROUP DATA_0001 disk '/dev/asm-diskdata02' name DATA_0001 force 
NOTE: starting rebalance of group 1/0x31584f6b (DATADG) at power 1
Mon Mar 06 16:26:20 2017
Starting background process ARB0
Mon Mar 06 16:26:20 2017
ARB0 started with pid=32, OS id=25833 
NOTE: assigning ARB0 to group 1/0x31584f6b (DATADG) with 1 parallel I/O
cellip.ora not found.
WARNING:cache read  a corrupt block: group=1(DATADG) dsk=0 blk=0 disk=0 
        (DATADG_0000)incarn=3915956130 au=0 blk=0 count=1
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_arb0_25833.trc:
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [2147483648] [0] [0 != 1]
NOTE:a corrupted block from group DATADG was dumped to /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_arb0_25833.trc
WARNING:cache read (retry)a corrupt block:group=1(DATADG) 
         dsk=0 blk=0 disk=0(DATADG_0000)incarn=3915956130 au=0 blk=0 count=1
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_arb0_25833.trc:
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [2147483648] [0] [0 != 1]
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [2147483648] [0] [0 != 1]
ERROR: cache failed to read group=1(DATADG) dsk=0 blk=0 from disk(s): 0(DATADG_0000)
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [2147483648] [0] [0 != 1]
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [2147483648] [0] [0 != 1]
NOTE: cache initiating offline of disk 0 group DATADG
NOTE:process _arb0_+asm1 (25833) initiating offline of disk 0.3915956130(DATADG_0000)with mask 0x7e in group 1
NOTE: checking PST: grp = 1
GMON checking disk modes for group 1 at 17 for pid 32, osid 25833
NOTE: cache closing disk 1 of grp 1: (not open) _DROPPED_0001_DATADG
ERROR: too many offline disks in PST (grp 1)
NOTE: checking PST for grp 1 done.
NOTE: initiating PST update: grp = 1, dsk = 0/0xe968bfa2, mask = 0x6a, op = clear
GMON updating disk modes for group 1 at 18 for pid 32, osid 25833
NOTE: cache closing disk 1 of grp 1: (not open) _DROPPED_0001_DATADG
ERROR: Disk 0 cannot be offlined, since all the disks [0, 1] with mirrored data would be offline.
ERROR: too many offline disks in PST (grp 1)
Mon Mar 06 16:26:23 2017
NOTE: cache dismounting (not clean) group 1/0x31584F6B (DATADG) 
NOTE: messaging CKPT to quiesce pins Unix process pid: 25889, image: oracle@DBN01 (B000)
Mon Mar 06 16:26:23 2017
NOTE: halting all I/Os to diskgroup 1 (DATADG)
Mon Mar 06 16:26:23 2017
NOTE: LGWR doing non-clean dismount of group 1 (DATADG)
NOTE: LGWR sync ABA=19.2851 last written ABA 19.2851
WARNING: Offline for disk DATADG_0000 in mode 0x7f failed.
Mon Mar 06 16:26:23 2017
kjbdomdet send to inst 2
detach from dom 1, sending detach message to inst 2
Mon Mar 06 16:26:23 2017
List of instances:
 1 2
Dirty detach reconfiguration started (new ddet inc 1, cluster inc 8)
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_arb0_25833.trc  (incident=65537):
ORA-15335: ASM metadata corruption detected in disk group 'DATADG'
ORA-15130: diskgroup "DATADG" is being dismounted
ORA-15066: offlining disk "DATADG_0000" in group "DATADG" may result in a data loss
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [2147483648] [0] [0 != 1]
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [2147483648] [0] [0 != 1]
Incident details in:/u01/app/grid/diag/asm/+asm/+ASM1/incident/incdir_65537/+ASM1_arb0_25833_i65537.trc
 Global Resource Directory partially frozen for dirty detach
* dirty detach - domain 1 invalid = TRUE 
 3189 GCS resources traversed, 0 cancelled
Dirty Detach Reconfiguration complete
ERROR: ORA-15130 in COD recovery for diskgroup 1/0x31584f6b (DATADG)
ERROR: ORA-15130 thrown in RBAL for group number 1
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_rbal_3468.trc:
ORA-15130: diskgroup "DATADG" is being dismounted
Mon Mar 06 16:26:23 2017
WARNING: dirty detached from domain 1
NOTE: cache dismounted group 1/0x31584F6B (DATADG) 

--- subsequent mount attempts fail
SQL> ALTER DISKGROUP DATADG MOUNT  /* asm agent *//* {1:18003:2} */ 
NOTE: cache registered group DATADG number=1 incarn=0xb368408f
NOTE: cache began mount (first) of group DATADG number=1 incarn=0xb368408f
NOTE: Assigning number (1,2) to disk (/dev/asm-diskdata02)
WARNING:GMON has insufficient disks to maintain consensus.
Minimum required is 2:updating 1 PST copies from a total of 2.
ERROR: GMON failed to obtain a quorum ofsupporting disks in group 1
NOTE: cache dismounting (clean) group 1/0xB368408F (DATADG) 
NOTE: messaging CKPT to quiesce pins Unix process pid: 27651, image: oracle@DBN01 (TNS V1-V3)
NOTE: dbwr not being msg'd to dismount
NOTE: lgwr not being msg'd to dismount
NOTE: cache dismounted group 1/0xB368408F (DATADG) 
NOTE: cache ending mount (fail) of group DATADG number=1 incarn=0xb368408f
NOTE: cache deleting context for group DATADG 1/0xb368408f
GMON dismounting group 1 at 12 for pid 30, osid 27651
NOTE: Disk DATA_0001 in mode 0x9 marked for de-assignment
ERROR: diskgroup DATADG was not mounted
ORA-15032: not all alterations performed
ORA-15017: diskgroup "DATADG" cannot be mounted
ORA-15315: Write errors in disk group DATADG could lead to inconsistent ASM metadata.
ERROR: ALTER DISKGROUP DATADG MOUNT  /* asm agent *//* {1:18003:2} */

From this we can see that the first few attempts to add the ASM disk back failed for various reasons, and the last attempt used the FORCE keyword to force the previously dropped disk back into DATADG. Unfortunately, as soon as the add succeeded and the rebalance started, ASM hit corrupt blocks on disk 0, raising ORA-15196; the rebalance could not continue and the whole DATADG diskgroup dismounted itself. Later attempts to mount DATADG failed immediately with metadata inconsistency errors, because the disk header of disk 0 was now damaged.

Analyzing disk 0 with kfed
The disk header was copied out with dd and examined on a Windows machine: the header of disk 0, which used to be healthy, is now damaged (all zeros).
asm-kfed


The nature of the fault is now fairly clear, and so is the recovery plan, escalating step by step:
Option 1: repair the disk header with kfed and then try to mount the diskgroup (see: manually repairing an abnormal ASM DISK HEADER)
Option 2: extract the data files straight out of ASM with tools such as amdu or DUL (see: retrieving data files from ASM)
Option 3: reassemble the data files from the underlying AUs (see: recovery when the ASM disk header is completely destroyed)
In the actual recovery we were lucky and Option 1 was enough; a hedged sketch of the kind of kfed commands involved follows below. After the disk header was repaired with kfed, the mount attempt produced the errors shown after the sketch.
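This is a sketch under assumptions, not the exact commands used in this case: the device names follow the ones above, and any write should first be rehearsed on a dd copy of the disk.

# 1. confirm the header really is damaged
kfed read /dev/asm-diskdata01 aun=0 blkn=0 | grep -E 'kfbh.type|kfdhdb.dskname'
# 2. 11.1+ ASM keeps a backup copy of the disk header near the end of AU 1;
#    kfed repair rewrites block 0 from that copy. The AU size must be supplied
#    when it is not the 1MB default (this diskgroup used au_size=4M); check
#    kfed's usage output for the exact parameter spelling (ausz= on the builds
#    I have seen):
kfed repair /dev/asm-diskdata01 ausz=4194304
# 3. re-read the repaired header and compare it with the surviving disk
kfed read /dev/asm-diskdata01 | head -30
kfed read /dev/asm-diskdata02 | head -30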

SQL> alter diskgroup DATADG mount 
NOTE: cache registered group DATADG number=1 incarn=0x5134d0d4
NOTE: cache began mount (first) of group DATADG number=1 incarn=0x5134d0d4
NOTE: Assigning number (1,2) to disk (/dev/asm-diskdata02)
NOTE: Assigning number (1,0) to disk (/dev/asm-diskdata01)
Tue Mar 07 19:03:40 2017
NOTE: GMON heartbeating for grp 1
GMON querying group 1 at 27 for pid 28, osid 13837
NOTE: Assigning number (1,1) to disk ()
GMON querying group 1 at 28 for pid 28, osid 13837
NOTE: cache closing disk 1 of grp 1: (not open) 
NOTE: cache opening disk 0 of grp 1: DATADG_0000 path:/dev/asm-diskdata01
NOTE: F1X0 found on disk 0 au 2 fcn 0.178802
NOTE: cache opening disk 2 of grp 1: DATA_0001 path:/dev/asm-diskdata02
NOTE: cache mounting (first) normal redundancy group 1/0x5134D0D4 (DATADG)
Tue Mar 07 19:03:40 2017
* allocate domain 1, invalid = TRUE 
kjbdomatt send to inst 2
Tue Mar 07 19:03:40 2017
NOTE: attached to recovery domain 1
NOTE: starting recovery of thread=1 ckpt=19.2851 group=1 (DATADG)
NOTE: starting recovery of thread=2 ckpt=13.5327 group=1 (DATADG)
NOTE: advancing ckpt for group 1 (DATADG) thread=2 ckpt=13.5327
NOTE: advancing ckpt for group 1 (DATADG) thread=1 ckpt=19.2852
NOTE: cache recovered group 1 to fcn 0.365868
NOTE: redo buffer size is 512 blocks (2101760 bytes)
Tue Mar 07 19:03:40 2017
NOTE: LGWR attempting to mount thread 1 for diskgroup 1 (DATADG)
NOTE: LGWR found thread 1 closed at ABA 19.2851
NOTE: LGWR mounted thread 1 for diskgroup 1 (DATADG)
NOTE: LGWR opening thread 1 at fcn 0.365868 ABA 20.2852
NOTE: cache mounting group 1/0x5134D0D4 (DATADG) succeeded
NOTE: cache ending mount (success) of group DATADG number=1 incarn=0x5134d0d4
Tue Mar 07 19:03:40 2017
NOTE: Instance updated compatible.asm to 11.2.0.0.0 for grp 1
SUCCESS: diskgroup DATADG was mounted
SUCCESS: alter diskgroup DATADG mount
Tue Mar 07 19:03:40 2017
NOTE: diskgroup resource ora.DATADG.dg is online
Tue Mar 07 19:03:41 2017
ASM Health Checker found 1 new failures
NOTE: ASM did background COD recovery for group 1/0x5134d0d4 (DATADG)
NOTE: starting rebalance of group 1/0x5134d0d4 (DATADG) at power 1
Starting background process ARB0
Tue Mar 07 19:03:42 2017
ARB0 started with pid=30, OS id=13905 
NOTE: assigning ARB0 to group 1/0x5134d0d4 (DATADG) with 1 parallel I/O
cellip.ora not found.
WARNING: cache read  a corrupt block: group=1(DATADG) dsk=0 blk=0 disk=0 
         (DATADG_0000) incarn=2202280062 au=0 blk=0 count=1
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_arb0_13905.trc:
ORA-15196: invalid ASM block header [kfc.c:26368] [blk_kfbl] [2147483648] [0] [1022 != 0]
NOTE: a corrupted block from group DATADG was dumped to /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_arb0_13905.trc
WARNING:cache read (retry)a corrupt block:group=1(DATADG) dsk=0 blk=0 disk=0
        (DATADG_0000)incarn=2202280062 au=0 blk=0 count=1
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_arb0_13905.trc:
ORA-15196: invalid ASM block header [kfc.c:26368] [blk_kfbl] [2147483648] [0] [1022 != 0]
ORA-15196: invalid ASM block header [kfc.c:26368] [blk_kfbl] [2147483648] [0] [1022 != 0]
ERROR: cache failed to read group=1(DATADG) dsk=0 blk=0 from disk(s): 0(DATADG_0000)
ORA-15196: invalid ASM block header [kfc.c:26368] [blk_kfbl] [2147483648] [0] [1022 != 0]
ORA-15196: invalid ASM block header [kfc.c:26368] [blk_kfbl] [2147483648] [0] [1022 != 0]
Tue Mar 07 19:03:52 2017
NOTE: client oradb1:oradb registered, osid 13989, mbr 0x1
NOTE: cache initiating offline of disk 0 group DATADG
NOTE:process _arb0_+asm1 (13905) initiating offline of disk 0.2202280062(DATADG_0000)with mask 0x7e in group 1
NOTE: checking PST: grp = 1
Tue Mar 07 19:03:52 2017
GMON checking disk modes for group 1 at 30 for pid 30, osid 13905
NOTE: cache closing disk 1 of grp 1: (not open) _DROPPED_0001_DATADG
ERROR: too many offline disks in PST (grp 1)
NOTE: checking PST for grp 1 done.
NOTE: initiating PST update: grp = 1, dsk = 0/0x8344207e, mask = 0x6a, op = clear
GMON updating disk modes for group 1 at 31 for pid 30, osid 13905
NOTE: cache closing disk 1 of grp 1: (not open) _DROPPED_0001_DATADG
ERROR: Disk 0 cannot be offlined, since all the disks [0, 1] with mirrored data would be offline.
ERROR: too many offline disks in PST (grp 1)
Tue Mar 07 19:03:52 2017
NOTE: cache dismounting (not clean) group 1/0x5134D0D4 (DATADG) 
WARNING: Offline for disk DATADG_0000 in mode 0x7f failed.
Tue Mar 07 19:03:52 2017
NOTE: halting all I/Os to diskgroup 1 (DATADG)
NOTE: messaging CKPT to quiesce pins Unix process pid: 14002, image: oracle@DBN01 (B000)
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_arb0_13905.trc  (incident=76402):
ORA-15335: ASM metadata corruption detected in disk group 'DATADG'
ORA-15130: diskgroup "DATADG" is being dismounted
ORA-15066: offlining disk "DATADG_0000" in group "DATADG" may result in a data loss
ORA-15196: invalid ASM block header [kfc.c:26368] [blk_kfbl] [2147483648] [0] [1022 != 0]
ORA-15196: invalid ASM block header [kfc.c:26368] [blk_kfbl] [2147483648] [0] [1022 != 0]
Incident details in: /u01/app/grid/diag/asm/+asm/+ASM1/incident/incdir_76402/+ASM1_arb0_13905_i76402.trc
Tue Mar 07 19:03:52 2017
NOTE: LGWR doing non-clean dismount of group 1 (DATADG)
NOTE: LGWR sync ABA=20.2857 last written ABA 20.2857

We were fairly lucky here: DATADG did mount, but during rebalance ARB0 still read bad metadata from disk 0 (the repair was not complete, and the log showed other damaged blocks besides this one, so no further repair was attempted). We therefore suppressed ASM's ACD/COD recovery so that the diskgroup could be mounted and would stay mounted instead of dismounting again.

SQL> alter diskgroup DATADG mount 
NOTE: cache registered group DATADG number=1 incarn=0x9c94d0eb
NOTE: cache began mount (first) of group DATADG number=1 incarn=0x9c94d0eb
NOTE: Assigning number (1,2) to disk (/dev/asm-diskdata02)
NOTE: Assigning number (1,0) to disk (/dev/asm-diskdata01)
NOTE: skip COD recovery as part of test at kfrc.c:1639 
NOTE: skip COD recovery as part of test at kfrc.c:1639 
Tue Mar 07 19:12:45 2017
NOTE: GMON heartbeating for grp 1
GMON querying group 1 at 75 for pid 28, osid 15615
NOTE: Assigning number (1,1) to disk ()
GMON querying group 1 at 76 for pid 28, osid 15615
NOTE: cache closing disk 1 of grp 1: (not open) 
NOTE: cache opening disk 0 of grp 1: DATADG_0000 path:/dev/asm-diskdata01
NOTE: F1X0 found on disk 0 au 2 fcn 0.178802
NOTE: cache opening disk 2 of grp 1: DATA_0001 path:/dev/asm-diskdata02
NOTE: cache mounting (first) normal redundancy group 1/0x9C94D0EB (DATADG)
Tue Mar 07 19:12:45 2017
* allocate domain 1, invalid = TRUE 
kjbdomatt send to inst 2
Tue Mar 07 19:12:45 2017
NOTE: attached to recovery domain 1
NOTE: starting recovery of thread=1 ckpt=25.2870 group=1 (DATADG)
NOTE: advancing ckpt for group 1 (DATADG) thread=1 ckpt=25.2873
NOTE: cache recovered group 1 to fcn 0.365897
NOTE: redo buffer size is 512 blocks (2101760 bytes)
Tue Mar 07 19:12:45 2017
NOTE: LGWR attempting to mount thread 1 for diskgroup 1 (DATADG)
NOTE: LGWR found thread 1 closed at ABA 25.2872
NOTE: LGWR mounted thread 1 for diskgroup 1 (DATADG)
NOTE: LGWR opening thread 1 at fcn 0.365897 ABA 26.2873
NOTE: cache mounting group 1/0x9C94D0EB (DATADG) succeeded
NOTE: cache ending mount (success) of group DATADG number=1 incarn=0x9c94d0eb
Tue Mar 07 19:12:45 2017
NOTE: Instance updated compatible.asm to 11.2.0.0.0 for grp 1
SUCCESS: diskgroup DATADG was mounted
SUCCESS: alter diskgroup DATADG mount
Tue Mar 07 19:12:45 2017
NOTE: diskgroup resource ora.DATADG.dg is online
NOTE: skip COD recovery as part of test at kfrc.c:1639 
NOTE: skip COD recovery as part of test at kfrc.c:1639 
NOTE: skip COD recovery as part of test at kfrc.c:1639 
NOTE: skip COD recovery as part of test at kfrc.c:1639 

Once the ASM problem was worked around, we logged into the databases and, luckily, both of them opened normally with no errors at all in their alert logs. We then backed the data out with RMAN, recreated the ASM diskgroup, restored the data, and completed the recovery with zero data loss.
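The "back the data out" step can be done with image-copy backups to a filesystem location before the fragile diskgroup is dropped. A minimal sketch, not the exact commands used in this case (the destination path is illustrative):

rman target / <<'EOF'
# copy every datafile out of the fragile diskgroup onto a filesystem
backup as copy database format '/backup/orcl/%U';
# keep the controlfile and spfile as well, for the rebuild
backup current controlfile format '/backup/orcl/ctl_%U';
backup spfile format '/backup/orcl/spf_%U';
EOF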