ORA-04020 causes ADG standby failure

Contact: Mobile/WeChat (+86 17813235971), QQ (107644445) for consultation with xifenfei


Author: xifenfei © All rights reserved [No reproduction in any form without the author's consent; the right to pursue legal liability is reserved.]

This morning a customer reported that their ADG standby had stopped applying redo. Checking the alert log showed:

Tue Dec 24 18:17:41 2019
Media Recovery Waiting for thread 1 sequence 56655 (in transit)
Recovery of Online Redo Log: Thread 1 Group 11 Seq 56655 Reading mem 0
  Mem# 0: Y:\ORACLE\ORADATA\ORACLE11\STD_REDO11.LOG
Archived Log entry 56248 added for thread 1 sequence 56654 ID 0x5b6bcf9b dest 1:
Tue Dec 24 18:18:11 2019
Errors in file C:\APP\ADMINISTRATOR\diag\rdbms\oracle11dg\oracle11\trace\oracle11_lgwr_3252.trc:
ORA-04020: deadlock detected while trying to lock object SYS.orcl
LGWR (ospid: 3252): terminating the instance due to error 4020
Tue Dec 24 18:18:11 2019
System state dump requested by (instance=1, osid=3252 (LGWR)), summary=[abnormal instance termination].
System State dumped to trace file C:\APP\ADMINISTRATOR\diag\rdbms\oracle11dg\oracle11\trace\oracle11_diag_3236_20191224181811.trc
Dumping diagnostic data in directory=[cdmp_20191224181811], requested by (instance=1, osid=3252 (LGWR)), summary=[abnormal instance termination].
Instance terminated by LGWR, pid = 3252

The LGWR process hit ORA-04020, LGWR terminated abnormally, and the whole instance crashed.

Analyzing the trace file:

Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Windows NT Version V6.1 Service Pack 1 
CPU                 : 8 - type 8664, 8 Physical Cores
Process Affinity    : 0x0x0000000000000000
Memory (Avail/Total): Ph:8395M/32733M, Ph+PgF:41002M/65464M 
Instance name: oracle11
Redo thread mounted by this instance: 1
Oracle process number: 10
Windows thread id: 3252, image: ORACLE.EXE (LGWR)


*** 2019-12-24 18:18:11.072
*** SESSION ID:(384.1) 2019-12-24 18:18:11.072
*** CLIENT ID:() 2019-12-24 18:18:11.072
*** SERVICE NAME:(SYS$BACKGROUND) 2019-12-24 18:18:11.072
*** MODULE NAME:() 2019-12-24 18:18:11.072
*** ACTION NAME:() 2019-12-24 18:18:11.072
 
A deadlock among DDL and parse locks is detected.
This deadlock is usually due to user errors in
the design of an application or from issuing a set
of concurrent statements which can cause a deadlock.
This should not be reported to Oracle Support.
The following information may aid in finding
the errors which cause the deadlock:
ORA-04020: deadlock detected while trying to lock object SYS.orcl
--------------------------------------------------------
  object   waiting  waiting       blocking blocking
  handle   session     lock mode   session     lock mode
--------  -------- -------- ----  -------- -------- ----
0000000676C20F08  000000066D22BE10 00000006738AB970    X  000000066D22BE10 00000006738A04B0    S
0000000677DF2E80  00000006792E2880 0000000673B13AE8    X  000000066D22BE10 00000006738A19B8    S
--------------------------------------------------------
---------- DUMP OF WAITING AND BLOCKING LOCKS ----------
--------------------------------------------------------
------------- WAITING LOCK -------------
----------------------------------------
SO: 0x00000006738AB970, type: 78, owner: 0x000000065D440498, flag: INIT/-/-/0x00 if: 0x3 c: 0x3
 proc=0x000000066CDE7AD0, name=LIBRARY OBJECT LOCK, file=kgl.h LINE:8751, pg=0

LibraryObjectLock:  Address=00000006738AB970 Handle=0000000676C20F08 RequestMode=X CanBeBrokenCount=2 Incarnation=3 ExecutionCount=0  
………………
SO: 0x00000006738A19B8, type: 78, owner: 0x000000065A38D6C0, flag: INIT/-/-/0x00 if: 0x3 c: 0x3
 proc=0x000000066CDE7AD0, name=LIBRARY OBJECT LOCK, file=kgl.h LINE:8751, pg=0

LibraryObjectLock:  Address=00000006738A19B8 Handle=0000000677DF2E80 Mode=S CanBeBrokenCount=1 Incarnation=1 ExecutionCount=0   
  
  User=000000066D22BE10 Session=000000066D22BE10 ReferenceCount=1 Flags=CNB/[0001] SavepointNum=1b749 
  LibraryHandle:  Address=0000000677DF2E80 Hash=675351da LockMode=S PinMode=0 LoadLockMode=0 Status=0 
  ObjectName:  Name=SYS.orcl   
    FullHashValue=285b654fe3f440652c403c98675351da Namespace=DBINSTANCE(74) Type=CURSOR(00) Identifier=1 OwnerIdn=0 
  Statistics:  InvalidationCount=0 ExecutionCount=0 LoadCount=0 ActiveLocks=1 TotalLockCount=74719 TotalPinCount=0 
  Counters:  BrokenCount=1 RevocablePointer=1 KeepDependency=0 Version=0 BucketInUse=2 HandleInUse=2 HandleReferenceCount=0 
  Concurrency:  DependencyMutex=0000000677DF2F30(0, 0, 0, 0) Mutex=0000000677DF2FC0(0, 149471, 1, 0) 
  Flags=RON/PIN/KEP/BSO/[00810003] 
  WaitersLists:  
    Lock=0000000677DF2F10[0000000673B13B58,000000067382E2F0] 
    Pin=0000000677DF2EF0[0000000677DF2EF0,0000000677DF2EF0] 
    LoadLock=0000000677DF2F68[0000000677DF2F68,0000000677DF2F68] 
  Timestamp:  
  HandleReference:  Address=0000000677DF3030 Handle=0000000000000000 Flags=[00] ---------------------------------
This lock request was aborted.
error 4020 detected in background process
ORA-04020: deadlock detected while trying to lock object SYS.orcl
kjzduptcctx: Notifying DIAG for crash event
----- Abridged Call Stack Trace -----
ksedsts()+585<-kjzdssdmp()+329<-kjzduptcctx()+288<-kjzdicrshnfy()+99<-ksuitm()+1525<-ksbrdp()+4578<-opirip()
+853<-opidrv()+906<-sou2o()+98<-opimai_real()+280<-opimai()+191<-BackgroundThreadStart()+646<-0000000076CF59CD<-0000000076E2A561 
----- End of Abridged Call Stack Trace -----

*** 2019-12-24 18:18:11.165
LGWR (ospid: 3252): terminating the instance due to error 4020

*** 2019-12-24 18:18:17.483
ksuitm: waiting up to [5] seconds before killing DIAG(3236)

 

The trace shows that LGWR hit a deadlock on a LIBRARY OBJECT LOCK, which brought down the instance. In our experience this class of problem is usually caused by a bug, and a MOS search turns up a match: Bug 18515268 ORA-4020 in ADG Standby Database causing instance crash by LGWR.


If needed, apply Patch 18515268: ACTIVE DATAGUARD STANDBY CRASHES DUE TO AN ORA-4020 ENCOUNTERED BY LGWR.
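Before applying the patch, it helps to confirm that a crash actually matches this signature. The following is a minimal sketch (the helper name and example path are hypothetical, not from this post) that greps an alert log for the two lines seen above:

```shell
#!/bin/sh
# Hypothetical triage helper: report whether an alert log shows the
# LGWR ORA-04020 instance-termination signature from this case.
check_ora4020_crash() {
  log="$1"
  if grep -q "ORA-04020: deadlock detected" "$log" &&
     grep -q "terminating the instance due to error 4020" "$log"; then
    echo "signature found: instance terminated by LGWR after ORA-04020"
  else
    echo "signature not found"
  fi
}

# Example (path is illustrative):
# check_ora4020_crash /u01/app/oracle/diag/rdbms/orcl/orcl/trace/alert_orcl.log
```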

Recovering data files deleted on xfs


On a Linux 7 system with an xfs filesystem, an accidental rm -rf / deleted most files on the box, including the Oracle database files. On-site analysis confirmed that the data still existed at the disk level.


Luckily, low-level recovery worked and the database opened cleanly.

Missing fuser command causes 19c patching to fail


Applying the latest patch to an Oracle 19.3 database failed with an error:

[oracle@www.xifenfei.com 30125133]$ opatch apply ./
Oracle Interim Patch Installer version 12.2.0.1.18
Copyright (c) 2019, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/oracle/product/19.0/db_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/19.0/db_1/oraInst.loc
OPatch version    : 12.2.0.1.18
OUI version       : 12.2.0.7.0
Log file location : /u01/app/oracle/product/19.0/db_1/cfgtoollogs/opatch/opatch2019-12-16_08-00-53AM_1.log

Analyzing the log:

Verifying environment and performing prerequisite checks...
Prerequisite check "CheckSystemCommandAvailable" failed.
The details are:
Missing command :fuser
UtilSession failed: 
Prerequisite check "CheckSystemCommandAvailable" failed.
Log file location: /u01/app/oracle/product/19.0/db_1/cfgtoollogs/opatch/opatch2019-12-16_08-00-53AM_1.log

OPatch failed with error code 73


[Dec 16, 2019 8:00:57 AM] [INFO]    Following patches can be applied:  30125133
[Dec 16, 2019 8:00:57 AM] [INFO]    Following patches are not required:
[Dec 16, 2019 8:00:57 AM] [INFO]    Following patches are auto rollbackable:
[Dec 16, 2019 8:00:57 AM] [INFO]    Finished checking prereq checkConflictAgainstOHWithDetail
[Dec 16, 2019 8:00:58 AM] [INFO]    Running prerequisite checks...
[Dec 16, 2019 8:00:58 AM] [INFO]    Space Needed : 3052.647MB
[Dec 16, 2019 8:00:58 AM] [INFO]    Missing command :fuser
[Dec 16, 2019 8:00:58 AM] [INFO]    Prerequisite check "CheckSystemCommandAvailable" failed.
                                    The details are:
                                    Missing command :fuser
[Dec 16, 2019 8:00:58 AM] [SEVERE]  OUI-67073:UtilSession failed:
                                    Prerequisite check "CheckSystemCommandAvailable" failed.

It is clear that the missing fuser command caused the CheckSystemCommandAvailable prerequisite check to fail. The fix is to install the package that provides it:

[root@www.xifenfei.com tmp]# yum search fuser
Loaded plugins: product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
==================================== Matched: fuser ===========================
psmisc.x86_64 : Utilities for managing processes on your system
[root@www.xifenfei.com tmp]# 


[root@www.xifenfei.com tmp]# mount /dev/cdrom /media/ -o loop
[root@www.xifenfei.com tmp]# yum install -y psmisc
Loaded plugins: product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
rhel-yum                       | 4.3 kB  00:00:00     
Resolving Dependencies
--> Running transaction check
---> Package psmisc.x86_64 0:22.20-15.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================================
 Package                    Arch              Version                    Repository     Size
=============================================================================================
Installing:
 psmisc                     x86_64            22.20-15.el7               rhel-yum      141 k

Transaction Summary
=============================================================================================
Install  1 Package

Total download size: 141 k
Installed size: 475 k
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : psmisc-22.20-15.el7.x86_64          1/1 
  Verifying  : psmisc-22.20-15.el7.x86_64          1/1 

Installed:
  psmisc.x86_64 0:22.20-15.el7     

Complete!

Re-apply the patch:

[oracle@www.xifenfei.com 30125133]$ opatch apply ./
Oracle Interim Patch Installer version 12.2.0.1.18
Copyright (c) 2019, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/oracle/product/19.0/db_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/19.0/db_1/oraInst.loc
OPatch version    : 12.2.0.1.18
OUI version       : 12.2.0.7.0
Log file location : /u01/app/oracle/product/19.0/db_1/cfgtoollogs/opatch/opatch2019-12-16_08-08-44AM_1.log

Verifying environment and performing prerequisite checks...
OPatch continues with these patches:   30125133  

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/oracle/product/19.0/db_1')


Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch '30125133' to OH '/u01/app/oracle/product/19.0/db_1'
ApplySession: Optional component(s) [ oracle.network.gsm, 19.0.0.0.0 ],[ oracle.rdbms.ic, 19.0.0.0.0 ],
[ oracle.network.cman, 19.0.0.0.0 ],[ oracle.net.cman, 19.0.0.0.0 ],[ oracle.options.olap.awm, 19.0.0.0.0],
[ oracle.oraolap.mgmt, 19.0.0.0.0 ],[ oracle.assistants.usm, 19.0.0.0.0 ],[ oracle.assistants.asm, 19.0.0.0.0],
[ oracle.tfa, 19.0.0.0.0 ]  not present in the Oracle Home or a higher version is found.

Patching component oracle.rdbms, 19.0.0.0.0...

Patching component oracle.rdbms.rsf, 19.0.0.0.0...

Patching component oracle.assistants.acf, 19.0.0.0.0...

Patching component oracle.assistants.deconfig, 19.0.0.0.0...

Patching component oracle.assistants.server, 19.0.0.0.0...

Patching component oracle.buildtools.rsf, 19.0.0.0.0...

Patching component oracle.ctx, 19.0.0.0.0...

Patching component oracle.ldap.rsf, 19.0.0.0.0...

Patching component oracle.network.rsf, 19.0.0.0.0...

Patching component oracle.rdbms.dbscripts, 19.0.0.0.0...

Patching component oracle.sdo, 19.0.0.0.0...

Patching component oracle.sqlplus, 19.0.0.0.0...

Patching component oracle.ldap.rsf.ic, 19.0.0.0.0...

Patching component oracle.rdbms.rman, 19.0.0.0.0...

Patching component oracle.ctx.atg, 19.0.0.0.0...

Patching component oracle.rdbms.oci, 19.0.0.0.0...

Patching component oracle.rdbms.util, 19.0.0.0.0...

Patching component oracle.xdk, 19.0.0.0.0...

Patching component oracle.ovm, 19.0.0.0.0...

Patching component oracle.network.listener, 19.0.0.0.0...

Patching component oracle.rdbms.install.plugins, 19.0.0.0.0...

Patching component oracle.dbjava.jdbc, 19.0.0.0.0...

Patching component oracle.dbdev, 19.0.0.0.0...

Patching component oracle.rdbms.deconfig, 19.0.0.0.0...

Patching component oracle.nlsrtl.rsf, 19.0.0.0.0...

Patching component oracle.oraolap.dbscripts, 19.0.0.0.0...

Patching component oracle.install.deinstalltool, 19.0.0.0.0...

Patching component oracle.dbjava.ic, 19.0.0.0.0...

Patching component oracle.sdo.locator, 19.0.0.0.0...

Patching component oracle.rdbms.scheduler, 19.0.0.0.0...

Patching component oracle.rdbms.dv, 19.0.0.0.0...

Patching component oracle.ons, 19.0.0.0.0...

Patching component oracle.ldap.security.osdt, 19.0.0.0.0...

Patching component oracle.ctx.rsf, 19.0.0.0.0...

Patching component oracle.duma, 19.0.0.0.0...

Patching component oracle.ldap.owm, 19.0.0.0.0...

Patching component oracle.oracore.rsf, 19.0.0.0.0...

Patching component oracle.rdbms.install.seeddb, 19.0.0.0.0...

Patching component oracle.odbc, 19.0.0.0.0...

Patching component oracle.sdo.locator.jrf, 19.0.0.0.0...

Patching component oracle.network.client, 19.0.0.0.0...

Patching component oracle.sqlplus.ic, 19.0.0.0.0...

Patching component oracle.dbjava.ucp, 19.0.0.0.0...

Patching component oracle.xdk.rsf, 19.0.0.0.0...

Patching component oracle.marvel, 19.0.0.0.0...

Patching component oracle.xdk.parser.java, 19.0.0.0.0...

Patching component oracle.rdbms.rsf.ic, 19.0.0.0.0...

Patching component oracle.nlsrtl.rsf.core, 19.0.0.0.0...

Patching component oracle.precomp.common, 19.0.0.0.0...

Patching component oracle.precomp.lang, 19.0.0.0.0...
Patch 30125133 successfully applied.
Sub-set patch [29517242] has become inactive due to the application of a super-set patch [30125133].
Please refer to Doc ID 2161861.1 for any possible further required actions.
Log file location: /u01/app/oracle/product/19.0/db_1/cfgtoollogs/opatch/opatch2019-12-16_08-08-44AM_1.log

OPatch succeeded.


[oracle@www.xifenfei.com 30128191]$ opatch apply ./
Oracle Interim Patch Installer version 12.2.0.1.18
Copyright (c) 2019, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/oracle/product/19.0/db_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/19.0/db_1/oraInst.loc
OPatch version    : 12.2.0.1.18
OUI version       : 12.2.0.7.0
Log file location : /u01/app/oracle/product/19.0/db_1/cfgtoollogs/opatch/opatch2019-12-16_08-12-44AM_1.log

Verifying environment and performing prerequisite checks...
OPatch continues with these patches:   30128191  

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/oracle/product/19.0/db_1')


Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch '30128191' to OH '/u01/app/oracle/product/19.0/db_1'

Patching component oracle.javavm.server, 19.0.0.0.0...

Patching component oracle.javavm.server.core, 19.0.0.0.0...

Patching component oracle.rdbms.dbscripts, 19.0.0.0.0...

Patching component oracle.rdbms, 19.0.0.0.0...
Patch 30128191 successfully applied.
Log file location: /u01/app/oracle/product/19.0/db_1/cfgtoollogs/opatch/opatch2019-12-16_08-12-44AM_1.log

OPatch succeeded.

Confirm the patches were installed successfully:

[oracle@www.xifenfei.com 30128191]$ opatch lspatches
30128191;OJVM RELEASE UPDATE: 19.5.0.0.191015 (30128191)
30125133;Database Release Update : 19.5.0.0.191015 (30125133)
29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)

OPatch succeeded.

MOS has a similar note for reference: Prerequisite check "CheckSystemCommandAvailable" failed (Doc ID 1581604.1).
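To catch this before OPatch fails mid-run, the required OS commands can be checked up front. A small sketch, assuming a POSIX shell; only fuser is confirmed by this case, and the other command names in the usage line are purely illustrative:

```shell
#!/bin/sh
# Check that a list of commands is on PATH before running "opatch apply".
check_cmds() {
  missing=""
  for cmd in "$@"; do
    command -v "$cmd" >/dev/null 2>&1 || missing="$missing $cmd"
  done
  if [ -n "$missing" ]; then
    echo "Missing command(s):$missing"
    return 1
  fi
  echo "All required commands present"
}

# Usage before patching (only fuser is confirmed required by this case;
# the others are common build tools, listed as an illustration):
# check_cmds fuser make ar ld nm
```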

Recovering from dbca and rm database deletions

Databases are occasionally destroyed by mistake: we have seen dbca delete the wrong database, and rm delete data files. We recently handled two such recovery requests.

In the first case, on Windows, dbca was supposed to drop a test database, but the wrong database was selected and the production database was deleted. After the mistake the disk was not taken offline; further writes happened while the customer tried various undelete tools, which recovered only some files and lost all of the core ones. In the second case, on Linux, rm -rf deleted the entire oracle directory, data files included; fortunately nothing was written afterwards.

For the dbca customer, since the undelete tools had produced poor results, we scanned the disk at a low level and recovered the needed data directly. A few files had a small number of blocks overwritten, and the customer only needed some core tables, so we extracted the required data with dul and completed the recovery.

For the customer who had rm'd all the data files, we first ran extundelete (see: using extundelete to recover deleted files on Linux). Because the file handles had been lost, a few data files failed to recover that way; low-level recovery on Linux then restored all the required data files, and the database opened normally.

For this kind of accidental deletion, preserve the scene: do not write again to the partitions holding the data files, so as to minimize overwriting. In principle, as long as the data is still on disk it can be recovered. If you run into an accidentally deleted database file or a corrupted filesystem and cannot recover it yourself, contact us for low-level recovery: Phone: 17813235971, QQ: 107644445, E-Mail: dba@xifenfei.com

Similar recovery cases:
Another case of recovering ASM formatted as a filesystem
Recovering data files damaged by filesystem corruption
Recovering Oracle data files that are 0 KB or missing
Recovering an ASM disk formatted as an ntfs filesystem
Recovering an ASM disk formatted as an ext4 filesystem
ORA-15042: ASM disk "N" is missing from group number "M" recovery

ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] troubleshooting


While expanding ASM storage, a misconfiguration meant that when adding ASM disks with asmca the customer selected disks that already belonged to a volume group in use as a filesystem:

Tue Nov 19 09:48:48 2019
Non critical error ORA-48180 cFri Nov 22 12:47:48 2019
SQL> ALTER DISKGROUP XIFENFEI ADD  DISK '/dev/rhdisk29' SIZE 491520M ,
'/dev/rhdisk30' SIZE 491520M ,
'/dev/rhdisk31' SIZE 491520M /* ASMCA */ 
NOTE: GroupBlock outside rolling migration privileged region
NOTE: Assigning number (4,15) to disk (/dev/rhdisk29)
NOTE: Assigning number (4,16) to disk (/dev/rhdisk30)
NOTE: Assigning number (4,17) to disk (/dev/rhdisk31)
NOTE: requesting all-instance membership refresh for group=4
NOTE: initializing header on grp 4 disk XIFENFEI_0015
NOTE: initializing header on grp 4 disk XIFENFEI_0016
NOTE: initializing header on grp 4 disk XIFENFEI_0017
NOTE: requesting all-instance disk validation for group=4
Fri Nov 22 12:47:51 2019
NOTE: skipping rediscovery for group 4/0xb08c40b (XIFENFEI) on local instance.
NOTE: requesting all-instance disk validation for group=4
NOTE: skipping rediscovery for group 4/0xb08c40b (XIFENFEI) on local instance.
Fri Nov 22 12:47:59 2019
NOTE: initiating PST update: grp = 4
Fri Nov 22 12:47:59 2019
GMON updating group 4 at 12 for pid 27, osid 12649908
NOTE: PST update grp = 4 completed successfully 
NOTE: membership refresh pending for group 4/0xb08c40b (XIFENFEI)
GMON querying group 4 at 13 for pid 18, osid 39912680
Fri Nov 22 12:48:01 2019
NOTE: cache opening disk 15 of grp 4: XIFENFEI_0015 path:/dev/rhdisk29
NOTE: cache opening disk 16 of grp 4: XIFENFEI_0016 path:/dev/rhdisk30
NOTE: cache opening disk 17 of grp 4: XIFENFEI_0017 path:/dev/rhdisk31
NOTE: Attempting voting file refresh on diskgroup XIFENFEI
NOTE: Refresh completed on diskgroup XIFENFEI. No voting file found.
GMON querying group 4 at 14 for pid 18, osid 39912680
SUCCESS: refreshed membership for 4/0xb08c40b (XIFENFEI)
SUCCESS: ALTER DISKGROUP XIFENFEI ADD  DISK '/dev/rhdisk29' SIZE 491520M ,
'/dev/rhdisk30' SIZE 491520M ,
'/dev/rhdisk31' SIZE 491520M /* ASMCA */

After the wrong disks were discovered, the disks now in use by ASM were forcibly removed from the vg, then dropped from ASM, and new disks were added in their place:

Fri Nov 22 12:52:03 2019
SQL> ALTER DISKGROUP XIFENFEI DROP  DISK 'XIFENFEI_0015','XIFENFEI_0016','XIFENFEI_0017' /* ASMCA */ 
NOTE: GroupBlock outside rolling migration privileged region
Fri Nov 22 12:52:03 2019
NOTE: stopping process ARB0
NOTE: rebalance interrupted for group 4/0xb08c40b (XIFENFEI)
NOTE: requesting all-instance membership refresh for group=4
NOTE: membership refresh pending for group 4/0xb08c40b (XIFENFEI)
Fri Nov 22 12:52:12 2019
GMON querying group 4 at 15 for pid 18, osid 39912680
SUCCESS: refreshed membership for 4/0xb08c40b (XIFENFEI)
SUCCESS: ALTER DISKGROUP XIFENFEI DROP  DISK 'XIFENFEI_0015','XIFENFEI_0016','XIFENFEI_0017' /* ASMCA */
NOTE: starting rebalance of group 4/0xb08c40b (XIFENFEI) at power 1
Starting background process ARB0
…………
Fri Nov 22 12:58:26 2019
SQL> ALTER DISKGROUP XIFENFEI ADD  DISK '/dev/rhdisk7' SIZE 491520M /* ASMCA */ 
NOTE: GroupBlock outside rolling migration privileged region
Fri Nov 22 12:58:26 2019
NOTE: stopping process ARB0
NOTE: rebalance interrupted for group 4/0xb08c40b (XIFENFEI)
NOTE: ASM did background COD recovery for group 4/0xb08c40b (XIFENFEI)
NOTE: Assigning number (4,18) to disk (/dev/rhdisk7)
NOTE: requesting all-instance membership refresh for group=4
NOTE: initializing header on grp 4 disk XIFENFEI_0018
NOTE: requesting all-instance disk validation for group=4
NOTE: skipping rediscovery for group 4/0xb08c40b (XIFENFEI) on local instance.
NOTE: requesting all-instance disk validation for group=4
NOTE: skipping rediscovery for group 4/0xb08c40b (XIFENFEI) on local instance.
Fri Nov 22 12:58:41 2019
NOTE: initiating PST update: grp = 4
Fri Nov 22 12:58:41 2019
GMON updating group 4 at 16 for pid 27, osid 12649908
NOTE: PST update grp = 4 completed successfully 
Fri Nov 22 12:58:41 2019
NOTE: membership refresh pending for group 4/0xb08c40b (XIFENFEI)
GMON querying group 4 at 17 for pid 18, osid 39912680
NOTE: cache opening disk 18 of grp 4: XIFENFEI_0018 path:/dev/rhdisk7
NOTE: Attempting voting file refresh on diskgroup XIFENFEI
NOTE: Refresh completed on diskgroup XIFENFEI. No voting file found.
GMON querying group 4 at 18 for pid 18, osid 39912680
SUCCESS: refreshed membership for 4/0xb08c40b (XIFENFEI)
NOTE: starting rebalance of group 4/0xb08c40b (XIFENFEI) at power 1
SUCCESS: ALTER DISKGROUP XIFENFEI ADD  DISK '/dev/rhdisk7' SIZE 491520M /* ASMCA */
Starting background process ARB0
Fri Nov 22 12:58:46 2019
ARB0 started with pid=44, OS id=54460432 
…………
Fri Nov 22 12:59:57 2019
SQL> ALTER DISKGROUP XIFENFEI ADD  DISK '/dev/rhdisk10' SIZE 491520M ,
'/dev/rhdisk11' SIZE 491520M ,
'/dev/rhdisk8' SIZE 491520M ,
'/dev/rhdisk9' SIZE 491520M /* ASMCA */ 
NOTE: GroupBlock outside rolling migration privileged region
Fri Nov 22 12:59:57 2019
NOTE: stopping process ARB0
NOTE: rebalance interrupted for group 4/0xb08c40b (XIFENFEI)
NOTE: ASM did background COD recovery for group 4/0xb08c40b (XIFENFEI)
NOTE: Assigning number (4,19) to disk (/dev/rhdisk10)
NOTE: Assigning number (4,20) to disk (/dev/rhdisk11)
NOTE: Assigning number (4,21) to disk (/dev/rhdisk8)
NOTE: Assigning number (4,22) to disk (/dev/rhdisk9)
NOTE: requesting all-instance membership refresh for group=4
NOTE: initializing header on grp 4 disk XIFENFEI_0019
NOTE: initializing header on grp 4 disk XIFENFEI_0020
NOTE: initializing header on grp 4 disk XIFENFEI_0021
NOTE: initializing header on grp 4 disk XIFENFEI_0022
NOTE: requesting all-instance disk validation for group=4
NOTE: skipping rediscovery for group 4/0xb08c40b (XIFENFEI) on local instance.
Fri Nov 22 13:00:08 2019
NOTE: requesting all-instance disk validation for group=4
Fri Nov 22 13:00:08 2019
NOTE: skipping rediscovery for group 4/0xb08c40b (XIFENFEI) on local instance.
NOTE: initiating PST update: grp = 4
Fri Nov 22 13:00:13 2019
GMON updating group 4 at 19 for pid 27, osid 12649908
NOTE: PST update grp = 4 completed successfully 
NOTE: membership refresh pending for group 4/0xb08c40b (XIFENFEI)
GMON querying group 4 at 20 for pid 18, osid 39912680
NOTE: cache opening disk 19 of grp 4: XIFENFEI_0019 path:/dev/rhdisk10
NOTE: cache opening disk 20 of grp 4: XIFENFEI_0020 path:/dev/rhdisk11
NOTE: cache opening disk 21 of grp 4: XIFENFEI_0021 path:/dev/rhdisk8
NOTE: cache opening disk 22 of grp 4: XIFENFEI_0022 path:/dev/rhdisk9
NOTE: Attempting voting file refresh on diskgroup XIFENFEI
NOTE: Refresh completed on diskgroup XIFENFEI. No voting file found.
GMON querying group 4 at 21 for pid 18, osid 39912680
SUCCESS: refreshed membership for 4/0xb08c40b (XIFENFEI)
SUCCESS: ALTER DISKGROUP XIFENFEI ADD  DISK '/dev/rhdisk10' SIZE 491520M ,
'/dev/rhdisk11' SIZE 491520M ,
'/dev/rhdisk8' SIZE 491520M ,
'/dev/rhdisk9' SIZE 491520M /* ASMCA */
NOTE: starting rebalance of group 4/0xb08c40b (XIFENFEI) at power 1
Starting background process ARB0

While ASM was rebalancing it hit corrupt blocks, which immediately dismounted the disk group:

Sun Nov 24 04:42:27 2019
NOTE: group 4 PST updated.
WARNING: cache read  a corrupt block: group=4(XIFENFEI) dsk=15 blk=258 disk=15
 (XIFENFEI_0015) incarn=1717056824 au=113792 blk=2 count=254
Errors in file /u01/app/oracle/diag/asm/+asm/+ASM2/trace/+ASM2_x000_28639240.trc:
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [2147483663] [258] [56 != 0]
NOTE: a corrupted block from group XIFENFEI was dumped to
   /u01/app/oracle/diag/asm/+asm/+ASM2/trace/+ASM2_x000_28639240.trc
WARNING: cache read (retry) a corrupt block: group=4(XIFENFEI) 
 dsk=15 blk=258 disk=15 (XIFENFEI_0015) incarn=1717056824 au=113792 blk=2 count=1
Errors in file /u01/app/oracle/diag/asm/+asm/+ASM2/trace/+ASM2_x000_28639240.trc:
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [2147483663] [258] [56 != 0]
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [2147483663] [258] [56 != 0]
ERROR: cache failed to read group=4(XIFENFEI) dsk=15 blk=258 from disk(s): 15(XIFENFEI_0015)
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [2147483663] [258] [56 != 0]
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [2147483663] [258] [56 != 0]
NOTE: cache initiating offline of disk 15 group XIFENFEI
NOTE: process _x000_+asm2 (28639240) initiating offline of disk 
    15.1717056824 (XIFENFEI_0015) with mask 0x7e in group 4
NOTE: initiating PST update: grp = 4, dsk = 15/0x66583538, mask = 0x6a, op = clear
GMON updating disk modes for group 4 at 23 for pid 28, osid 28639240
ERROR: Disk 15 cannot be offlined, since diskgroup has external redundancy.
ERROR: too many offline disks in PST (grp 4)
Sun Nov 24 04:42:27 2019
NOTE: cache dismounting (not clean) group 4/0x0B08C40B (XIFENFEI) 
WARNING: Offline for disk XIFENFEI_0015 in mode 0x7f failed.
Sun Nov 24 04:42:27 2019
NOTE: halting all I/Os to diskgroup 4 (XIFENFEI)
NOTE: messaging CKPT to quiesce pins Unix process pid: 59441780, image: oracle@xifenfei2 (B000)
Sun Nov 24 04:42:27 2019
ERROR: ORA-15130 thrown in ARB0 for group number 4
Errors in file /u01/app/oracle/diag/asm/+asm/+ASM2/trace/+ASM2_arb0_50856926.trc:
ORA-15130: diskgroup "XIFENFEI" is being dismounted

At this point the disk group on both nodes fell into an endless cycle of mounting and then dismounting. The rough cause can be inferred: while the disks still belonged to the vg, filesystem writes (or the forced removal from the vg) corrupted data that ASM had written to them, so the subsequent ASM rebalance hit corrupt blocks and the group dismounted. The fix was to patch the disk group's ACD and COD so that no rebalance would run, keeping the group in a stable mounted state, and then back up its data and rebuild the disk group. This customer was lucky: the ASM disks that had been in the vg saw few writes, and the database ran normally afterwards.
In a more extreme case, for example when the ASM disk group cannot be mounted at all, see: retrieving data files from within ASM.
If the ASM metadata is extensively damaged and dictionary-level recovery is impossible, see: recovering from a completely corrupted asm disk header.
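Incidents like this start with one device being claimed twice. As a rough pre-add sanity check, the first few KB of a candidate device can be inspected before handing it to ALTER DISKGROUP ... ADD DISK. This is a sketch under two assumptions: that ASM stamps the literal tag "ORCLDISK" in a disk's header block, and that Linux LVM2 writes a "LABELONE" label near the start of a PV. (On AIX, as in this case, lspv would be the native ownership check.)

```shell
#!/bin/sh
# Sketch: look for known on-disk signatures in the first 8K of a device
# before adding it to an ASM disk group. Assumptions: ASM stamps
# "ORCLDISK" in the disk header; Linux LVM2 writes "LABELONE" in the
# PV label sector.
disk_in_use() {
  sigs=$(dd if="$1" bs=4096 count=2 2>/dev/null | strings)
  case "$sigs" in
    *ORCLDISK*) echo "$1: ASM header tag found - already an ASM disk" ;;
    *LABELONE*) echo "$1: LVM2 PV label found - disk belongs to a volume group" ;;
    *)          echo "$1: no ASM/LVM2 signature in first 8K" ;;
  esac
}

# Example (device name is illustrative):
# disk_in_use /dev/rhdisk7
```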