ORA-600 kfdpMetaBlk_pickle troubleshooting

Contact: phone/WeChat (+86 17813235971), QQ (107644445), QQ consultation with 惜分飞

Title: ORA-600 kfdpMetaBlk_pickle troubleshooting

Author: 惜分飞 © All rights reserved [No reproduction in any form without the author's consent; the author reserves the right to pursue legal liability.]

A customer reported that the cluster's CRS could not start normally. Investigation showed that the GMON process was crashing the ASM instance, and testing confirmed that the problem is triggered when mounting the DATA disk group:

SQL> alter diskgroup data mount;
alter diskgroup data mount
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Process ID: 7517
Session ID: 918 Serial number: 5

The corresponding alert log reports the error ORA-600 [kfdpMetaBlk_pickle:01], [4294967295]:

SQL> alter diskgroup data mount
NOTE: cache registered group DATA number=2 incarn=0x3078f05f
NOTE: cache began mount (first) of group DATA number=2 incarn=0x3078f05f
NOTE: Assigning number (2,1) to disk (/dev/rdisk/disk93)
NOTE: Assigning number (2,3) to disk (/dev/rdisk/disk96)
NOTE: Assigning number (2,2) to disk (/dev/rdisk/disk94)
NOTE: Assigning number (2,0) to disk (/dev/rdisk/disk92)
Sat Jul 17 05:21:01 2021
Errors in file /u01/app/crs_base/diag/asm/+asm/+ASM2/trace/+ASM2_gmon_7457.trc  (incident=255833):
ORA-00600: internal error code, arguments: [kfdpMetaBlk_pickle:01], [4294967295], [0], [], [], [], [], [], [], [], [], []
Incident details in: /u01/app/crs_base/diag/asm/+asm/+ASM2/incident/incdir_255833/+ASM2_gmon_7457_i255833.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
Errors in file /u01/app/crs_base/diag/asm/+asm/+ASM2/trace/+ASM2_gmon_7457.trc:
ORA-00600: internal error code, arguments: [kfdpMetaBlk_pickle:01], [4294967295], [0], [], [], [], [], [], [], [], [], []
GMON (ospid: 7457): terminating the instance due to error 493
Sat Jul 17 05:21:03 2021
System state dump requested by (instance=2, osid=7457 (GMON)), summary=[abnormal instance termination].
System State dumped to trace file /u01/app/crs_base/diag/asm/+asm/+ASM2/trace/+ASM2_diag_7429.trc
Instance terminated by GMON, pid = 7457

A search on MOS for the ORA-600 [kfdpMetaBlk_pickle:01], [4294967295] error turned up no useful information.


The corresponding trace file contains the following information:

2021-07-17 03:51:16.277603*:800002A2:KGF:kgfdputl.c@1411:kgfdpMetaSet_getMaxClique():   inc=2 ver=4294967295
2021-07-17 03:51:16.277619 :800002A3:KFDP:kfdp.c@9314:kfdpMetaSet_filterOld(): filtered old meta on disk 2
2021-07-17 03:51:16.277620 :800002A4:KFDP:kfdp.c@9314:kfdpMetaSet_filterOld(): filtered old meta on disk 2
2021-07-17 03:51:16.277992 :800002A5:KFDP:kfdp.c@9417:kfdpMetaSet_readDta():kfdpMetaSet_readDta unpickle upto 6 metablks
2021-07-17 03:51:16.277993 :800002A6:KFDP:kfdp.c@9425:kfdpMetaSet_readDta():kfdpMetaSet_readDta unpickle metablk for disk 3
2021-07-17 03:51:16.278154 :800002A7:KFDP:kfdp.c@9425:kfdpMetaSet_readDta():kfdpMetaSet_readDta unpickle metablk for disk 1
2021-07-17 03:51:16.278268 :800002A8:KFDP:kfdp.c@5851:kfdp_read(): kfdp_read end ok=1
2021-07-17 03:51:16.278277 :800002A9:KFDP:kfdp.c@7073:kfdp_doQuery(): kfdp_doQuery   rewrite_kfdp=1
2021-07-17 03:51:16.278282 :800002AA:KFDP:kfdp.c@12511:kfdpLckValue_pickle(): kfdpLckValue_pickle size=0 
                            endian=0xff ndisks=0 lckvalid=0
2021-07-17 03:51:16.278293 :800002AB:db_trace:kfdp.c@12803:kfdpLck_convPriv(): [10499:19:396] 
                            kfdpLck_conv: grp=1, type=0, mode=5, line=7155
2021-07-17 03:51:16.278294 :800002AC:KFDP:kfdp.c@12663:kfdpLckValue_unpickle(): kfdpLckValue_unpickle
                            size=28 res=0 ok=0 ver=-1 dcnt=0 lckvalid=0 flags=0x2 inst=0 (I am 2) version=0
2021-07-17 03:51:16.278499*:800002AD:KGF:kgfdputl.c@485:kgfdpDta_getAllDsks(): kgfdpDta_getAllDsks using 
                            saved iterator 0x9ffffffffd571220 with 4 disks
2021-07-17 03:51:16.278688 :800002AE:KFDP:kfdp.c@5566:kfdp_write(): kfdp_write: pstDskCnt=3 grow=0 degenerate=0
2021-07-17 03:51:16.278688*:800002AF:KGF:kgfdputl.c@2619:kgfdpTraceSet(): writing pst to disks (n=3): 0 1 3

From the trace information above it can basically be confirmed that the PST (Partnership and Status Table) metadata is abnormal: the PST records only three disks (0, 1 and 3) and filters disk 2 out as old metadata, while the disk group actually contains four disks, which causes the GMON process to fail. The problem was resolved by repairing the metadata at the low level; the PST contents can also be cross-checked with kfed, as sketched below.
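A minimal sketch of such a cross-check, assuming the usual ASM on-disk layout in which the PST is stored in allocation unit 1 of the disks that carry a PST copy. The device paths are the four DATA disks from the alert log above; this is only an inspection aid, not the repair procedure itself.

#!/bin/bash
# Sketch: dump the block types found in allocation unit 1 of each DATA disk.
# Assumption (not from this post): the ASM Partnership and Status Table (PST)
# lives in AU 1, so disks holding a PST copy should show PST-related block
# types here.  If the disk group uses a non-default AU size, pass ausz=
# explicitly, because kfed otherwise assumes its default.
for d in /dev/rdisk/disk92 /dev/rdisk/disk93 /dev/rdisk/disk94 /dev/rdisk/disk96
do
  echo "==== $d ===="
  for b in 0 1 2
  do
    kfed read $d aun=1 blkn=$b | grep -E "kfbh.type|kfbh.block"
  done
done

Once the disk group mounted after the repair, the database was recovered successfully: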

SQL> recover database using backup controlfile;
ORA-00279: change 30075814973 generated at 07/17/2021 01:12:08 needed for
thread 2
ORA-00289: suggestion : +FRA
ORA-00280: change 30075814973 for thread 2 is in sequence #120561


Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/tmp/asm/group_16
ORA-00279: change 30075814973 generated at 07/17/2021 01:11:54 needed for
thread 1
ORA-00289: suggestion :
+FRA/xff/archivelog/2021_07_17/thread_1_seq_79949.1543.1078103529
ORA-00280: change 30075814973 for thread 1 is in sequence #79949


Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/tmp/asm/group_13
ORA-00279: change 30075815013 generated at 07/17/2021 01:12:09 needed for
thread 1
ORA-00289: suggestion : +FRA
ORA-00280: change 30075815013 for thread 1 is in sequence #79950
ORA-00278: log file '/tmp/asm/group_13' no longer needed for this recovery


Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/tmp/asm/group_11
Log applied.
Media recovery complete.

SQL> alter database open resetlogs;

Database altered.

Fortunately, this fault was recovered with zero data loss.

Recovering Oracle ASM disks after their partitions were deleted

Contact: phone/WeChat (+86 17813235971), QQ (107644445), QQ consultation with 惜分飞

Title: Recovering Oracle ASM disks after their partitions were deleted

Author: 惜分飞 © All rights reserved [No reproduction in any form without the author's consent; the author reserves the right to pursue legal liability.]

A friend asked for help with a database failure. The sequence of events was roughly this: a 39 TB LUN had been split into 15 partitions with parted and handed to Oracle ASM to create disk group data4. Four more partitions were then created and made into data5, but because the ausize was set incorrectly, that disk group and those four partitions were deleted. Six partitions were then re-allocated, and the last five of them were used to create the data5 disk group. After some time in use, Oracle ran short of space; during a check someone mistakenly assumed that only the first 15 partitions of this LUN were in use, manually deleted the later six partitions, and created four new ones. The database then crashed, and it turned out that in-use partitions had been deleted by mistake; the four newly created partitions were then deleted again. When I took over the case, the partition table of this 39 TB LUN looked like this:

[root@node1 linux64]# parted /dev/mapper/36000d31003d39e000000000000000004
GNU Parted 2.1
Using /dev/mapper/36000d31003d39e000000000000000004
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print                                                            
Model: Linux device-mapper (multipath) (dm)
Disk /dev/mapper/36000d31003d39e000000000000000004: 39.6TB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start   End     Size    File system  Name         Flags
 1      2097kB  2000GB  2000GB               asmdata4-1
 2      2000GB  4000GB  2000GB               asmdata4-2
 3      4000GB  6000GB  2000GB               asmdata4-3
 4      6000GB  8000GB  2000GB               asmdata4-4
 5      8000GB  10.0TB  2000GB               asmdata4-5
 6      10.0TB  12.0TB  2000GB               asmdata4-6
 7      12.0TB  14.0TB  2000GB               asmdata4-7
 8      14.0TB  16.0TB  2000GB               asmdata4-8
 9      16.0TB  18.0TB  2000GB               asmdata4-9
10      18.0TB  20.0TB  2000GB               asmdata4-10
11      20.0TB  22.0TB  2000GB               asmdata4-11
12      22.0TB  24.0TB  2000GB               asmdata4-12
13      24.0TB  26.0TB  2000GB               asmdata4-13
14      26.0TB  28.0TB  2000GB               asmdata4-14
15      28.0TB  30.0TB  2000GB               asmdata4-15

Under normal use, the ASM disks created on this LUN's partitions were as follows:

 SQL> CREATE DISKGROUP DATA4 EXTERNAL REDUNDANCY  DISK 
'/dev/mapper/36000d31003d39e000000000000000004p1' SIZE 1907346M ,
 '/dev/mapper/36000d31003d39e000000000000000004p2' SIZE 1907350M ,
 '/dev/mapper/36000d31003d39e000000000000000004p3' SIZE 1907348M ,
 '/dev/mapper/36000d31003d39e000000000000000004p4' SIZE 1907348M ,
 '/dev/mapper/36000d31003d39e000000000000000004p5' SIZE 1907350M  
ATTRIBUTE 'compatible.asm'='11.2.0.0.0','au_size'='8M' /* ASMCA */ 

SQL> ALTER DISKGROUP DATA4 ADD  DISK 
'/dev/mapper/36000d31003d39e000000000000000004p10' SIZE 1907348M ,
'/dev/mapper/36000d31003d39e000000000000000004p6' SIZE 1907348M ,
'/dev/mapper/36000d31003d39e000000000000000004p7' SIZE 1907348M ,
'/dev/mapper/36000d31003d39e000000000000000004p8' SIZE 1907350M ,
'/dev/mapper/36000d31003d39e000000000000000004p9' SIZE 1907348M /* ASMCA */ 

SQL> ALTER DISKGROUP DATA4 ADD  DISK 
'/dev/mapper/36000d31003d39e000000000000000004p11' SIZE 1907348M ,
'/dev/mapper/36000d31003d39e000000000000000004p12' SIZE 1907350M ,
'/dev/mapper/36000d31003d39e000000000000000004p13' SIZE 1907348M ,
'/dev/mapper/36000d31003d39e000000000000000004p14' SIZE 1907348M ,
'/dev/mapper/36000d31003d39e000000000000000004p15' SIZE 1907350M /* ASMCA */ 

SQL> CREATE DISKGROUP DATA5 EXTERNAL REDUNDANCY  DISK 
'/dev/mapper/36000d31003d39e000000000000000004p17' SIZE 1716614M ,
'/dev/mapper/36000d31003d39e000000000000000004p18' SIZE 1716614M ,
'/dev/mapper/36000d31003d39e000000000000000004p19' SIZE 1716614M ,
'/dev/mapper/36000d31003d39e000000000000000004p20' SIZE 1716614M ,
'/dev/mapper/36000d31003d39e000000000000000004p21' SIZE 1621246M  
ATTRIBUTE 'compatible.asm'='11.2.0.0.0','au_size'='4M' /* ASMCA */ 

Given the current state, all of the DATA4 partitions are intact; the task is to locate the data of the five DATA5 partitions. Because the customer was not sure how large partition p16 had been, the starting offsets of the following five partitions (p17-p21) were hard to pin down, so the recovery could not proceed directly. A shell script combined with kfed was used to try to locate the ASM disk headers:

#!/bin/bash
# Scan a range of allocation-unit numbers on the LUN and record any AU whose
# contents look like an ASM disk header of a DATA* disk group.
j=xxxxxxxxxxx
for ((i=xxxxxx; i<=j; i++))
do
 echo "-----$i--------" >> /home/get_au1.txt
 kfed read /dev/mapper/36000d31003d39e000000000000000004 aun=$i |
   grep "kfdhdb.dskname:              DATA" >> /home/get_au.txt
done

This produced no results. Analysis showed that because the LUN is so large, the aun values become too large, kfed overflows and cannot read valid data. Based on how parted lays out partitions, selected blocks were copied out manually with dd for analysis.
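As an illustration of that step (the actual offsets were not published here), a window around a candidate partition start can be carved into a local file so that kfed only has to handle small aun values; skip=xxxxx below is a placeholder in the same style as the rest of this post, and the count is arbitrary.

# Hypothetical example: copy a window starting at a candidate 4MB-aligned
# offset of the LUN into a local file, which kfed can then address with
# small aun values.  skip=xxxxx and count=1024 are placeholders.
dd if=/dev/mapper/36000d31003d39e000000000000000004 bs=4M skip=xxxxx count=1024 of=xifenfei.dd

Reading the carved-out file with kfed then turned up an ASM disk header: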

[root@node1 bak]# kfed read xifenfei.dd aun=134|more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                  3357988283 ; 0x00c: 0xc826d5bb
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr:         ORCLDISK ; 0x000: length=8
kfdhdb.driver.reserved[0]:            0 ; 0x008: 0x00000000
kfdhdb.driver.reserved[1]:            0 ; 0x00c: 0x00000000
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                186646528 ; 0x020: 0x0b200000
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        1 ; 0x026: KFDGTP_EXTERNAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:              DATA5_0000 ; 0x028: length=10
kfdhdb.grpname:                   DATA5 ; 0x048: length=5
kfdhdb.fgname:               DATA5_0000 ; 0x068: length=10
kfdhdb.capname:                         ; 0x088: length=0
kfdhdb.crestmp.hi:             33116450 ; 0x0a8: HOUR=0x2 DAYS=0x9 MNTH=0x4 YEAR=0x7e5
kfdhdb.crestmp.lo:            477378560 ; 0x0ac: USEC=0x0 MSEC=0x10e SECS=0x7 MINS=0x7
kfdhdb.mntstmp.hi:             33116450 ; 0x0b0: HOUR=0x2 DAYS=0x9 MNTH=0x4 YEAR=0x7e5
kfdhdb.mntstmp.lo:            486256640 ; 0x0b4: USEC=0x0 MSEC=0x2ec SECS=0xf MINS=0x7
kfdhdb.secsize:                     512 ; 0x0b8: 0x0200
kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000
kfdhdb.ausize:                  4194304 ; 0x0bc: 0x00400000
kfdhdb.mfact:                    454272 ; 0x0c0: 0x0006ee80
kfdhdb.dsksize:                  429153 ; 0x0c4: 0x00068c61
kfdhdb.pmcnt:                         2 ; 0x0c8: 0x00000002

This located the first disk of DATA5 and pinned down its starting offset. A dd command was then constructed to copy the partition's data to a new disk (count=429153 allocation units of 4 MB, matching kfdhdb.dsksize above):

dd if=/dev/mapper/36000d31003d39e000000000000000004 bs=4M skip=xxxxx count=429153 of=/dev/sdu

The copied disk was then checked again with kfed.
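A minimal check along those lines (a sketch, not necessarily the exact command used; /dev/sdu is the target disk from the dd above):

# The disk header of the copied disk should now be readable at the start of
# /dev/sdu and identify the DATA5 disk group.
kfed read /dev/sdu | grep -E "kfbh.type|kfdhdb.dskname|kfdhdb.grpname"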


The remaining disks were handled in the same way (see the sketch below) until all five ASM disks of DATA5 had been found and dd'd to new disks. CRS was then started and DATA5 mounted.
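A sketch of how such a scan could be scripted, under the same assumptions as above; START and END are hypothetical placeholders for the search window (in 4 MB units), and this is an illustration rather than the author's exact script.

#!/bin/bash
# Probe candidate 4MB-aligned offsets on the LUN for further DATA5 disk
# headers.  Each candidate AU is carved into a small file first, because kfed
# cannot address such large aun values on the LUN directly.
LUN=/dev/mapper/36000d31003d39e000000000000000004
START=0    # placeholder
END=100    # placeholder
for ((i=START; i<=END; i++))
do
  dd if=$LUN bs=4M skip=$i count=1 of=/tmp/au_probe.dd 2>/dev/null
  # an ASM disk header of DATA5 identifies itself via kfdhdb.grpname
  if kfed read /tmp/au_probe.dd 2>/dev/null | grep -q "kfdhdb.grpname.*DATA5"
  then
    echo "possible DATA5 disk header at 4MB offset $i"
  fi
done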


After the DATA5 disk group mounted successfully, the database started normally: the ASM disk group's data was fully recovered even though its partitions had been deleted from the LUN.

We were lucky this time: the LUN's partitions had only been deleted and re-created with parted, without formatting a file system on them or turning them into new ASM disks; otherwise part of the data would have been lost. For partially damaged partitions, data has to be salvaged as far as possible using low-level fragment (carving) techniques. See similar articles:
Recovery after an ASM disk was added to a VG
Another case of recovering an ASM disk formatted with a file system
Recovering data files corrupted by file system damage
A perfect recovery of an ASM disk formatted as NTFS
Recovering Oracle data files that are 0 KB or missing
Another recovery of an ASM disk formatted as an ext3 file system
Oracle ASM disk format recovery: formatted as an ext4 file system
Oracle ASM disk format recovery: formatted as an NTFS file system
A zero-data-loss recovery after oracleasm createdisk re-created an ASM disk

drop tablespace data recovery

Contact: phone/WeChat (+86 17813235971), QQ (107644445), QQ consultation with 惜分飞

Title: drop tablespace data recovery

Author: 惜分飞 © All rights reserved [No reproduction in any form without the author's consent; the author reserves the right to pursue legal liability.]

A customer executed drop tablespace xxx including contents and datafiles. Although the drop command returned an error, a large number of objects in that tablespace were dropped; the data files were not deleted at the operating system level.

Tue Jun 29 14:38:26 2021
DROP TABLESPACE "XFF" INCLUDING CONTENTS AND DATAFILES CASCADE CONSTRAINTS
Tue Jun 29 14:38:32 2021
Thread 1 advanced to log sequence 975 (LGWR switch)
  Current log# 3 seq# 975 mem# 0: D:\APP\ADMINISTRATOR\ORADATA\XFF\REDO03.LOG
ORA-604 signalled during: DROP TABLESPACE "XFF" INCLUDING CONTENTS AND DATAFILES CASCADE CONSTRAINTS...
Tue Jun 29 14:40:44 2021
ALTER DATABASE DATAFILE 'D:\APP\ADMINISTRATOR\ORADATA\XFF\XFF'  AUTOEXTEND ON NEXT 50M
Completed: ALTER DATABASE DATAFILE 'D:\APP\ADMINISTRATOR\ORADATA\XFF\XFF'  AUTOEXTEND ON NEXT 50M
ALTER TABLESPACE "XFF" DROP DATAFILE  'D:\APP\ADMINISTRATOR\ORADATA\XFF\XFF02.DBF'
WARNING: Cannot delete file D:\APP\ADMINISTRATOR\ORADATA\XFF\XFF02.DBF
Errors in file d:\app\administrator\diag\rdbms\XFF\XFF\trace\XFF_ora_2528.trc:
ORA-01265: Unable to delete DATA D:\APP\ADMINISTRATOR\ORADATA\XFF\XFF02.DBF
ORA-27056: could not delete file
OSD-04024: Unable to delete file.
O/S-Error: (OS 32) The process cannot access the file because it is being used by another process.
Completed: ALTER TABLESPACE "XFF" DROP DATAFILE  'D:\APP\ADMINISTRATOR\ORADATA\XFF\XFF02.DBF'
Tue Jun 29 14:58:51 2021
ALTER DATABASE DATAFILE 'D:\APP\ADMINISTRATOR\ORADATA\XFF\XFF'  AUTOEXTEND ON NEXT 50M MAXSIZE UNLIMITED
Completed: ALTER DATABASE DATAFILE 'D:\APP\ADMINISTRATOR\ORADATA\XFF\XFF'  AUTOEXTEND ON NEXT 50M MAXSIZE UNLIMITED

Using a tool to extract the obj$ dictionary information and combining it with user$, it was determined that essentially all of the business users' table data had been dropped.
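For illustration, the obj$/user$ relationship referred to above can be queried as sketched below against an open instance (or against a dictionary extracted elsewhere); XFF is used here as a hypothetical business schema name, and this is not the tool the author used.

#!/bin/bash
# Sketch: list the surviving objects of a schema by joining obj$ with user$,
# the same dictionary relationship used to judge what was dropped.
sqlplus -S / as sysdba <<'EOF'
set pagesize 200 linesize 200
select u.name owner, o.obj#, o.dataobj#, o.name object_name, o.type#
  from sys.obj$ o join sys.user$ u on u.user# = o.owner#
 where u.name = 'XFF'   -- hypothetical schema name
 order by o.obj#;
EOF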


In this situation conventional methods cannot recover the data; it has to be treated as dropped-table recovery (see the earlier article: a DUL test of recovering dropped tables). By combining the table definitions provided by the customer with some special techniques, the obj# and dataobj# of all the objects were worked out, the customer's data was recovered quickly, and the customer was satisfied.