Error in invoking target 'libasmclntsh19.ohso libasmperl19.ohso client_sharedlib' Troubleshooting

Contact: Phone/WeChat (+86 17813235971), QQ (107644445) - QQ consultation with 惜分飞 (xifenfei)

Title: Error in invoking target 'libasmclntsh19.ohso libasmperl19.ohso client_sharedlib' Troubleshooting

Author: 惜分飞 (xifenfei) © All rights reserved [Do not reproduce in any form without my consent; otherwise I reserve the right to pursue legal liability.]

While installing Oracle 19c on a Red Hat 8.x system recently, the relink phase failed with an error along the lines of: [FATAL] Error in invoking target 'libasmclntsh19.ohso libasmperl19.ohso client_sharedlib' of makefile '/u01/app/oracle/product/19c/db_1/rdbms/lib/ins_rdbms.mk'.

[oracle@xifenfei db_1]$ ./runInstaller -ignorePrereq -waitforcompletion -silent \
>   oracle.install.option=INSTALL_DB_SWONLY \
>   UNIX_GROUP_NAME=oinstall \
>   INVENTORY_LOCATION=${ORACLE_BASE}/oraInventory \
>   ORACLE_HOME=${ORACLE_HOME} \
>   ORACLE_BASE=${ORACLE_BASE} \
>   oracle.install.db.InstallEdition=EE \
>   oracle.install.db.OSDBA_GROUP=dba \
>   oracle.install.db.OSBACKUPDBA_GROUP=backupdba \
>   oracle.install.db.OSDGDBA_GROUP=dgdba \
>   oracle.install.db.OSKMDBA_GROUP=kmdba \
>   oracle.install.db.OSRACDBA_GROUP=dba \
>   SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
>   DECLINE_SECURITY_UPDATES=true
Launching Oracle Database Setup Wizard...

[WARNING] [INS-13014] Target environment does not meet some optional requirements.
   CAUSE: Some of the optional prerequisites are not met. See logs for details. 
/u01/app/oraInventory/logs/InstallActions2025-06-02_05-13-46PM/installActions2025-06-02_05-13-46PM.log
   ACTION: Identify the list of failed prerequisite checks from the log: 
/u01/app/oraInventory/logs/InstallActions2025-06-02_05-13-46PM/installActions2025-06-02_05-13-46PM.log. 
Then either from the log file or from installation manual 
find the appropriate configuration to meet the prerequisites and fix it manually.
The response file for this session can be found at:
 /u01/app/oracle/product/19c/db_1/install/response/db_2025-06-02_05-13-46PM.rsp

You can find the log of this install session at:
 /u01/app/oraInventory/logs/InstallActions2025-06-02_05-13-46PM/installActions2025-06-02_05-13-46PM.log
[FATAL] Error in invoking target 'libasmclntsh19.ohso libasmperl19.ohso client_sharedlib' of 
makefile '/u01/app/oracle/product/19c/db_1/rdbms/lib/ins_rdbms.mk'. See 
'/u01/app/oraInventory/logs/InstallActions2025-06-02_05-13-46PM/installActions2025-06-02_05-13-46PM.log' for details.

The log shows the specific details:

INFO:
/usr/bin/ld
INFO:
: cannot find -lclntsh

INFO:
make[2]: *** [/u01/app/oracle/product/19c/db_1/rdbms/lib/env_rdbms.mk:5232: dlopenlib] Error 1

INFO:
make[2]: Leaving directory '/u01/app/oracle/product/19c/db_1/rdbms/lib'

INFO:
make[1]: *** [/u01/app/oracle/product/19c/db_1/rdbms/lib/env_rdbms.mk:5210: 
 /u01/app/oracle/product/19c/db_1/lib/libasmperl19.so] Error 2

INFO:
make[1]: Leaving directory '/u01/app/oracle/product/19c/db_1/rdbms/lib'

INFO:
make: *** [/u01/app/oracle/product/19c/db_1/rdbms/lib/env_rdbms.mk:5247: libasmperl19.ohso] Error 2

INFO: End output from spawned process.
INFO: ----------------------------------
INFO: Exception thrown from action: make
Exception Name: MakefileException
Exception String: Error in invoking target 'libasmclntsh19.ohso libasmperl19.ohso client_sharedlib' of makefile 
'/u01/app/oracle/product/19c/db_1/rdbms/lib/ins_rdbms.mk'. See 
'/u01/app/oraInventory/logs/InstallActions2025-06-02_05-13-46PM/installActions2025-06-02_05-13-46PM.log' for details.
Exception Severity: 1
INFO:  [Jun 2, 2025 5:16:23 PM] Adding ExitStatus STOP_INSTALL to the exit status set
INFO:  [Jun 2, 2025 5:16:23 PM] Finding the most appropriate exit status for the current application
INFO:  [Jun 2, 2025 5:16:23 PM] inventory location is/u01/app/oraInventory
INFO:  [Jun 2, 2025 5:16:23 PM] Adding ExitStatus SUCCESS_WITH_WARNINGS to the exit status set
INFO:  [Jun 2, 2025 5:16:23 PM] Finding the most appropriate exit status for the current application
INFO:  [Jun 2, 2025 5:16:23 PM] Exit Status is -4
INFO:  [Jun 2, 2025 5:16:23 PM] Shutdown Oracle Database 19c Installer
INFO:  [Jun 2, 2025 5:16:23 PM] Unloading Setup Driver

The linker cannot find -lclntsh, which corresponds to the libclntsh shared library in the database home. Check the related files under $ORACLE_HOME/lib:

[oracle@xifenfei lib]$ ls -ltr libclntsh*
lrwxrwxrwx 1 oracle oinstall      12 Jun  2 16:06 libclntsh.so.11.1 -> libclntsh.so
lrwxrwxrwx 1 oracle oinstall      12 Jun  2 16:06 libclntsh.so.10.1 -> libclntsh.so
lrwxrwxrwx 1 oracle oinstall      12 Jun  2 16:06 libclntsh.so.12.1 -> libclntsh.so
lrwxrwxrwx 1 oracle oinstall      12 Jun  2 16:06 libclntsh.so.18.1 -> libclntsh.so
lrwxrwxrwx 1 oracle oinstall      17 Jun  2 16:06 libclntsh.so -> libclntsh.so.19.1
-rwxr-xr-x 1 oracle oinstall 8057080 Jun  2 16:11 libclntshcore.so.19.1
lrwxrwxrwx 1 oracle oinstall      21 Jun  2 16:11 libclntshcore.so -> libclntshcore.so.19.1

All of the libclntsh.so.*.1 symlinks ultimately resolve, via libclntsh.so, to libclntsh.so.19.1, but libclntsh.so.19.1 itself is missing. Locate the file in the installation media, copy it into the lib directory, and fix its permissions:

[oracle@xifenfei ~]$ cd $ORACLE_HOME/lib
[oracle@xifenfei lib]$ cp /tmp/libclntsh.so.19.1 ./
[oracle@xifenfei lib]$ chmod 777 libclntsh.so.19.1
[oracle@xifenfei lib]$ ls -ltr libclntsh*
lrwxrwxrwx 1 oracle oinstall       12 Jun  2 17:31 libclntsh.so.10.1 -> libclntsh.so
lrwxrwxrwx 1 oracle oinstall       12 Jun  2 17:31 libclntsh.so.11.1 -> libclntsh.so
lrwxrwxrwx 1 oracle oinstall       12 Jun  2 17:31 libclntsh.so.12.1 -> libclntsh.so
lrwxrwxrwx 1 oracle oinstall       12 Jun  2 17:31 libclntsh.so.18.1 -> libclntsh.so
lrwxrwxrwx 1 oracle oinstall       17 Jun  2 17:34 libclntsh.so -> libclntsh.so.19.1
lrwxrwxrwx 1 oracle oinstall       21 Jun  2 17:34 libclntshcore.so -> libclntshcore.so.19.1
-rwxr-xr-x 1 oracle oinstall 82573024 Jun  2 17:34 libclntsh.so.19.1
-rwxr-xr-x 1 oracle oinstall  8057080 Jun  2 17:34 libclntshcore.so.19.1

After that, rerunning the runInstaller command completed the installation normally. The root cause of this problem was simply the missing libclntsh.so.19.1 file. I checked the unzip extraction log and the file had in fact been extracted, so why it went missing is unknown.
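If you only want to validate the fix rather than rerun the whole installation, the failed make targets can also be invoked by hand. A minimal sketch, assuming ORACLE_HOME is already exported for the oracle user (this is not what I actually did here; I simply reran runInstaller):

[oracle@xifenfei ~]$ cd $ORACLE_HOME/rdbms/lib
[oracle@xifenfei lib]$ make -f ins_rdbms.mk client_sharedlib libasmclntsh19.ohso libasmperl19.ohso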


ORA-01171: datafile N going offline due to error advancing checkpoint

Contact: Phone/WeChat (+86 17813235971), QQ (107644445) - QQ consultation with 惜分飞 (xifenfei)

Title: ORA-01171: datafile N going offline due to error advancing checkpoint

Author: 惜分飞 (xifenfei) © All rights reserved [Do not reproduce in any form without my consent; otherwise I reserve the right to pursue legal liability.]

I recently took a recovery consultation from a customer about a datafile that had gone offline. Analysis of the logs showed that the datafile was being held by another process when the instance started, so after the database opened, the file was forcibly taken offline.

Fri May 16 20:01:05 2025
Database mounted in Exclusive Mode
Completed: ALTER DATABASE   MOUNT
Fri May 16 20:01:05 2025
ALTER DATABASE OPEN
Fri May 16 20:01:06 2025
LGWR: STARTING ARCH PROCESSES
ARC0 started with pid=70, OS id=4628
Fri May 16 20:01:06 2025
ARC0: Archival started
ARC1 started with pid=74, OS id=4840
Fri May 16 20:01:06 2025
ARC1: Archival started
LGWR: STARTING ARCH PROCESSES COMPLETE
Fri May 16 20:01:06 2025
Errors in file d:\oracle\product\10.2.0\admin\orcl\bdump\orcl_lgwr_4080.trc:
ORA-01110: data file 14: 'D:\ORADATA\XIFENFEI105_DAT_1.DBF'
ORA-01114: IO error writing block to file 14 (block # 1)
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 32) The process cannot access the file because it is being used by another process.

Thread 1 opened at log sequence 172421
  Current log# 1 seq# 172421 mem# 0: D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO01.LOG
Fri May 16 20:01:06 2025
ARC1: STARTING ARCH PROCESSES
Fri May 16 20:01:06 2025
Successful open of redo thread 1
Fri May 16 20:01:06 2025
ARC0: Becoming the 'no FAL' ARCH
ARC0: Becoming the 'no SRL' ARCH
Fri May 16 20:01:06 2025
ARC2: Archival started
ARC1: STARTING ARCH PROCESSES COMPLETE
ARC2 started with pid=78, OS id=4056
Fri May 16 20:01:06 2025
ARC1: Becoming the heartbeat ARCH
Fri May 16 20:01:06 2025
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Fri May 16 20:01:06 2025
SMON: enabling cache recovery
Fri May 16 20:01:07 2025
Successfully onlined Undo Tablespace 1.
Fri May 16 20:01:07 2025
SMON: enabling tx recovery
Fri May 16 20:01:08 2025
Database Characterset is ZHS16GBK
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
QMNC started with pid=86, OS id=4492
Fri May 16 20:01:12 2025
db_recovery_file_dest_size of 51200 MB is 1.97% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Fri May 16 20:01:13 2025
Completed: ALTER DATABASE OPEN
Fri May 16 20:06:44 2025
Restarting dead background process MMON
MMON started with pid=98, OS id=4232
Fri May 16 20:07:06 2025
Shutting down archive processes
Fri May 16 20:07:11 2025
ARCH shutting down
ARC2: Archival stopped
Fri May 16 20:10:32 2025
Thread 1 advanced to log sequence 172422
  Current log# 2 seq# 172422 mem# 0: D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO02.LOG
Fri May 16 20:15:33 2025
Errors in file d:\oracle\product\10.2.0\admin\orcl\bdump\orcl_ckpt_2496.trc:
ORA-01171: datafile 14 going offline due to error advancing checkpoint
ORA-01122: database file 14 failed verification check
ORA-01110: data file 14: 'D:\ORADATA\XIFENFEI105_DAT_1.DBF'
ORA-01208: data file is an old version - not accessing current version

Fri May 16 20:23:09 2025
Starting background process EMN0
EMN0 started with pid=82, OS id=2660

Checking the reported file with dbv confirmed that the offlined datafile itself is intact.
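For reference, a dbv invocation along these lines can be used for that check (a sketch; 8192 is an assumed block size, use the tablespace's actual block size):

C:\> dbv file=D:\ORADATA\XIFENFEI105_DAT_1.DBF blocksize=8192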


In itself this failure is fairly simple to deal with: as long as the required archived logs exist, you just recover the datafile and bring it back online. In this case, however, the backup software's scheduled jobs had already backed up the corresponding archived logs and removed them from disk (the alert log excerpt below shows the backup activity).
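With the archived logs still available, the whole fix would have amounted to something like the following sketch (file 14 is the datafile reported in the alert log):

SQL> recover datafile 14;
SQL> alter database datafile 14 online;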

Fri May 16 21:55:10 2025
Control autobackup written to SBT_TAPE device
	comment 'API Version 2.0,MMS Version 10.0.0.116',
	media 'V_6746190_6959024'
	handle 'c-1300253653-20250516-00'
Fri May 16 21:56:03 2025
Thread 1 cannot allocate new log, sequence 172423
Private strand flush not complete
  Current log# 2 seq# 172422 mem# 0: D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO02.LOG

To make matters worse, the affected datafile did not belong to the core business, so the customer did not notice the problem in time. By the time it was discovered and a recover datafile was attempted, the required archived logs were reported as missing:

Wed May 28 17:26:01 2025
alter database recover datafile list clear
Wed May 28 17:26:01 2025
Completed: alter database recover datafile list clear
Wed May 28 17:26:01 2025
alter database recover if needed
 datafile 14

Media Recovery Start
 parallel recovery started with 16 processes
ORA-279 signalled during: alter database recover if needed
 datafile 14
...
Wed May 28 17:26:11 2025
alter database recover cancel
Wed May 28 17:26:13 2025
Media Recovery Canceled
Completed: alter database recover cancel
Wed May 28 17:38:58 2025
ALTER DATABASE RECOVER  datafile 'D:\ORADATA\XIFENFEI105_DAT_1.DBF'  
Wed May 28 17:38:58 2025
Media Recovery Start
 parallel recovery started with 16 processes
ORA-279 signalled during: ALTER DATABASE RECOVER  datafile 'D:\ORADATA\XIFENFEI105_DAT_1.DBF'  ...
Wed May 28 18:26:37 2025
ALTER DATABASE RECOVER    CONTINUE DEFAULT  
Wed May 28 18:26:38 2025
Media Recovery Log D:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2025_05_28\O1_MF_1_172421_%U_.ARC
Errors with log D:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2025_05_28\O1_MF_1_172421_%U_.ARC
ORA-308 signalled during: ALTER DATABASE RECOVER    CONTINUE DEFAULT  ...
Wed May 28 18:26:38 2025
ALTER DATABASE RECOVER    CONTINUE DEFAULT  
Wed May 28 18:26:38 2025
Media Recovery Log D:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2025_05_28\O1_MF_1_172421_%U_.ARC
Errors with log D:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2025_05_28\O1_MF_1_172421_%U_.ARC
ORA-308 signalled during: ALTER DATABASE RECOVER    CONTINUE DEFAULT  ...
Wed May 28 18:26:38 2025
ALTER DATABASE RECOVER CANCEL 
Wed May 28 18:26:40 2025
Media Recovery Canceled
Completed: ALTER DATABASE RECOVER CANCEL 

This customer was quite lucky: the archived logs needed for recovery were all still in the tape library. By allocating channels to the tape library and running recover datafile directly, the recovery succeeded:

RUN {
  ALLOCATE CHANNEL ch1 DEVICE TYPE 'sbt_tape' 
  PARMS="BLKSIZE=262144,ENV=(CV_mmsApiVsn=2,CV_channelPar=ch1)";
  ALLOCATE CHANNEL ch2 DEVICE TYPE 'sbt_tape' 
  PARMS="BLKSIZE=262144,ENV=(CV_mmsApiVsn=2,CV_channelPar=ch2)";
 recover datafile 14;
}
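Once the media recovery completed, the datafile only needed to be brought back online and verified, along these lines:

SQL> alter database datafile 14 online;
SQL> select file#, status from v$datafile where file# = 14;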



With that, the problem was completely resolved. The lessons from this case are:
1. After a database restart, check the alert log and query the datafile status (mainly to catch problems with rarely used files that would otherwise go unnoticed); see the query sketch below.
2. Know the basics of the database environment, such as backups, disaster recovery, ASM disk group redundancy, storage redundancy, and network redundancy, so that problems are easier to troubleshoot and resolve when they occur.
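For point 1, a quick check along the following lines (a sketch; exact columns vary slightly between versions) will surface any datafile that is offline or still needs recovery:

SQL> select file#, status, error from v$datafile_header where status <> 'ONLINE' or error is not null;
SQL> select file#, online_status, error, change#, time from v$recover_file;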

ORA-600 ksvworkmsgalloc: bad reaper

Contact: Phone/WeChat (+86 17813235971), QQ (107644445) - QQ consultation with 惜分飞 (xifenfei)

Title: ORA-600 ksvworkmsgalloc: bad reaper

Author: 惜分飞 (xifenfei) © All rights reserved [Do not reproduce in any form without my consent; otherwise I reserve the right to pursue legal liability.]

A friend wanted to restore a 12c database into a 19c home and then run an upgrade test. While opening the database they ran into several errors and asked me to help analyze them.
The resetlogs open reported ORA-00392 and ORA-00312:

SQL> alter database open resetlogs upgrade;
alter database open resetlogs upgrade
*
ERROR at line 1:
ORA-00392: log 7 of thread 1 is being cleared, operation not allowed
ORA-00312: online log 7 thread 1: '/DBS1/data/NDBS/onlinelog/redo07_m1.log '
ORA-00312: online log 7 thread 1: '/DBS1/arch/NDBS/onlinelog/redo07_m2.log '

This error is generally caused by an incorrect redo log group status, for example a group marked CLEARING_CURRENT. Handling:

SQL> select group#,status from v$log;

          GROUP# STATUS
---------------- ----------------
               1 CLEARING
               2 CLEARING
               3 CLEARING
               4 CLEARING
              10 CLEARING
               6 CLEARING
               7 CLEARING_CURRENT
               8 CLEARING
               9 CLEARING
               5 CLEARING

10 rows selected.


SQL> alter database clear logfile group 7;

Database altered.

SQL> select group#,status from v$log;

          GROUP# STATUS
---------------- ----------------
               1 CLEARING
               2 CLEARING
               3 CLEARING
               4 CLEARING
              10 CLEARING
               6 CLEARING
               7 CURRENT
               8 CLEARING
               9 CLEARING
               5 CLEARING

10 rows selected.

Retrying the resetlogs open then raised an ORA-600 [ksvworkmsgalloc: bad reaper] error:

SQL> alter database open resetlogs upgrade;
alter database open resetlogs upgrade
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [ksvworkmsgalloc: bad reaper], [0x080010003], [], [], []

A MOS search for this error turned up Open Resetlogs Fail with ORA-00600[ksvworkmsgalloc: bad reaper] (Doc ID 2728106.1), which describes the problem being triggered when redo files are cleared in a non-ASM to ASM migration.


The cause is that the db_create_online_log_dest_1 parameter is not set. This particular database was going from ASM to a file system, and the error was presumably raised in the same way while clearing redo during the resetlogs. The fix was to set db_create_online_log_dest_1=/DBS1/data and db_create_online_log_dest_2=/DBS1/arch for this database, after which the open succeeded.
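A minimal sketch of that fix (the instance is in mount state at this point; scope=both assumes an spfile is in use):

SQL> alter system set db_create_online_log_dest_1='/DBS1/data' scope=both;
SQL> alter system set db_create_online_log_dest_2='/DBS1/arch' scope=both;
SQL> alter database open resetlogs upgrade;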

ORA-600 krccfl_chunk Troubleshooting

Contact: Phone/WeChat (+86 17813235971), QQ (107644445) - QQ consultation with 惜分飞 (xifenfei)

Title: ORA-600 krccfl_chunk Troubleshooting

Author: 惜分飞 (xifenfei) © All rights reserved [Do not reproduce in any form without my consent; otherwise I reserve the right to pursue legal liability.]

A database reported an ORA-600 [krccfl_chunk] error during startup:

2025-05-06T10:37:47.428203+08:00
Completed: ALTER DATABASE MOUNT /* db agent *//* {2:50212:2} */
ALTER DATABASE OPEN /* db agent *//* {2:50212:2} */
2025-05-06T10:37:47.433709+08:00
This instance was first to open
Block change tracking file is current.
Ping without log force is disabled:
  not an Exadata system.
start recovery: pdb 0, passed in flags x4 (domain enable 5) 
2025-05-06T10:37:48.203383+08:00
Beginning crash recovery of 2 threads
2025-05-06T10:37:48.568120+08:00
 parallel recovery started with 32 processes
2025-05-06T10:37:48.610951+08:00
start recovery: pdb 0, passed in flags x5 (domain enable 5) 
2025-05-06T10:37:48.611037+08:00
start recovery: pdb 0, passed in flags x5 (domain enable 5) 
2025-05-06T10:37:48.611243+08:00
start recovery: pdb 0, passed in flags x5 (domain enable 5) 
2025-05-06T10:37:48.611438+08:00
start recovery: pdb 0, passed in flags x5 (domain enable 5) 
2025-05-06T10:37:48.614947+08:00
start recovery: pdb 0, passed in flags x5 (domain enable 5) 
2025-05-06T10:37:48.616591+08:00
start recovery: pdb 0, passed in flags x5 (domain enable 5) 
2025-05-06T10:37:48.617188+08:00
start recovery: pdb 0, passed in flags x5 (domain enable 5) 
2025-05-06T10:37:48.617253+08:00
start recovery: pdb 0, passed in flags x5 (domain enable 5) 
2025-05-06T10:37:48.617428+08:00
start recovery: pdb 0, passed in flags x5 (domain enable 5) 
2025-05-06T10:37:48.617606+08:00
start recovery: pdb 0, passed in flags x5 (domain enable 5) 
2025-05-06T10:37:48.617676+08:00
start recovery: pdb 0, passed in flags x5 (domain enable 5) 
2025-05-06T10:37:48.617809+08:00
start recovery: pdb 0, passed in flags x5 (domain enable 5) 
2025-05-06T10:37:48.636568+08:00
start recovery: pdb 0, passed in flags x5 (domain enable 5) 
2025-05-06T10:37:48.636568+08:00
start recovery: pdb 0, passed in flags x5 (domain enable 5) 
2025-05-06T10:37:48.636620+08:00
start recovery: pdb 0, passed in flags x5 (domain enable 5) 
2025-05-06T10:37:48.637156+08:00
start recovery: pdb 0, passed in flags x5 (domain enable 5) 
2025-05-06T10:37:48.637300+08:00
start recovery: pdb 0, passed in flags x5 (domain enable 5) 
2025-05-06T10:37:48.637881+08:00
start recovery: pdb 0, passed in flags x5 (domain enable 5) 
2025-05-06T10:37:48.637999+08:00
start recovery: pdb 0, passed in flags x5 (domain enable 5) 
2025-05-06T10:37:48.638112+08:00
start recovery: pdb 0, passed in flags x5 (domain enable 5) 
2025-05-06T10:37:48.638241+08:00
start recovery: pdb 0, passed in flags x5 (domain enable 5) 
2025-05-06T10:37:48.638304+08:00
start recovery: pdb 0, passed in flags x5 (domain enable 5) 
2025-05-06T10:37:48.638338+08:00
start recovery: pdb 0, passed in flags x5 (domain enable 5) 
2025-05-06T10:37:48.638347+08:00
start recovery: pdb 0, passed in flags x5 (domain enable 5) 
2025-05-06T10:37:48.641621+08:00
start recovery: pdb 0, passed in flags x5 (domain enable 5) 
2025-05-06T10:37:48.642926+08:00
start recovery: pdb 0, passed in flags x5 (domain enable 5) 
2025-05-06T10:37:48.643092+08:00
start recovery: pdb 0, passed in flags x5 (domain enable 5) 
2025-05-06T10:37:48.643192+08:00
start recovery: pdb 0, passed in flags x5 (domain enable 5) 
2025-05-06T10:37:48.643204+08:00
start recovery: pdb 0, passed in flags x5 (domain enable 5) 
2025-05-06T10:37:48.643372+08:00
start recovery: pdb 0, passed in flags x5 (domain enable 5) 
2025-05-06T10:37:48.643516+08:00
start recovery: pdb 0, passed in flags x5 (domain enable 5) 
2025-05-06T10:37:48.643573+08:00
start recovery: pdb 0, passed in flags x5 (domain enable 5) 
2025-05-06T10:37:48.748956+08:00
Started redo scan
2025-05-06T10:37:49.849382+08:00
Completed redo scan
 read 469347 KB redo, 1213 data blocks need recovery
2025-05-06T10:37:50.007840+08:00
Started redo application at
 Thread 1: logseq 369323, block 651514, offset 0
 Thread 2: logseq 132962, block 1319944, offset 0
2025-05-06T10:37:50.016910+08:00
Recovery of Online Redo Log: Thread 1 Group 13 Seq 369323 Reading mem 0
  Mem# 0: +DATA/orcl/ONLINELOG/group_13.349.978709791
  Mem# 1: +FRA/orcl/ONLINELOG/group_13.12992.978709793
2025-05-06T10:37:50.025725+08:00
Recovery of Online Redo Log: Thread 2 Group 18 Seq 132962 Reading mem 0
  Mem# 0: +DATA/orcl/ONLINELOG/group_18.354.978710003
  Mem# 1: +FRA/orcl/ONLINELOG/group_18.12997.978710005
2025-05-06T10:37:51.063556+08:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl2/trace/orcl2_ora_68031.trc(incident=868005)(PDBNAME=CDB$ROOT):
ORA-00600: internal error code, arguments: [krccfl_chunk], [0x7F9BBB30BE58], [166528],[],[],[],[],[],[],[],[],[]
Incident details in: /u01/app/oracle/diag/rdbms/orcl/orcl2/incident/incdir_868005/orcl2_ora_68031_i868005.trc
2025-05-06T10:37:52.269823+08:00
Dumping diagnostic data in directory=[cdmp_20250506103752],requested by(instance=2,osid=68031),summary=[incident=868005].
2025-05-06T10:37:52.306517+08:00
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
2025-05-06T10:37:52.310723+08:00
Slave encountered ORA-10388 exception during crash recovery
2025-05-06T10:37:52.310813+08:00
Slave encountered ORA-10388 exception during crash recovery
2025-05-06T10:37:52.310820+08:00
Slave encountered ORA-10388 exception during crash recovery
2025-05-06T10:37:52.310853+08:00
Slave encountered ORA-10388 exception during crash recovery
2025-05-06T10:37:52.310902+08:00
Slave encountered ORA-10388 exception during crash recovery
2025-05-06T10:37:52.310907+08:00
Slave encountered ORA-10388 exception during crash recovery
2025-05-06T10:37:52.310945+08:00
Recovery slave process is holding some recovery locks. Killing the instance now.
2025-05-06T10:37:52.310950+08:00
Slave encountered ORA-10388 exception during crash recovery
2025-05-06T10:37:52.310987+08:00
Slave encountered ORA-10388 exception during crash recovery
2025-05-06T10:37:52.311002+08:00
Slave encountered ORA-10388 exception during crash recovery
2025-05-06T10:37:52.311009+08:00
Slave encountered ORA-10388 exception during crash recovery
2025-05-06T10:37:52.311017+08:00
Slave encountered ORA-10388 exception during crash recovery
2025-05-06T10:37:52.311055+08:00
Recovery slave process is holding some recovery locks. Killing the instance now.
2025-05-06T10:37:52.311055+08:00
Slave encountered ORA-10388 exception during crash recovery
2025-05-06T10:37:52.311064+08:00
Slave encountered ORA-10388 exception during crash recovery
2025-05-06T10:37:52.311071+08:00
Recovery slave process is holding some recovery locks. Killing the instance now.
2025-05-06T10:37:52.311080+08:00
Recovery slave process is holding some recovery locks. Killing the instance now.
2025-05-06T10:37:52.311107+08:00
Recovery slave process is holding some recovery locks. Killing the instance now.
2025-05-06T10:37:52.311119+08:00
Recovery slave process is holding some recovery locks. Killing the instance now.
2025-05-06T10:37:52.311126+08:00
Recovery slave process is holding some recovery locks. Killing the instance now.
2025-05-06T10:37:52.311135+08:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl2/trace/orcl2_p000_69617.trc:
ORA-10388: parallel query server interrupt (failure)
2025-05-06T10:37:52.311156+08:00
Recovery slave process is holding some recovery locks. Killing the instance now.
2025-05-06T10:37:52.311184+08:00
Recovery slave process is holding some recovery locks. Killing the instance now.
2025-05-06T10:37:52.311203+08:00
Recovery slave process is holding some recovery locks. Killing the instance now.
2025-05-06T10:37:52.311205+08:00
Recovery slave process is holding some recovery locks. Killing the instance now.
2025-05-06T10:37:52.311211+08:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl2/trace/orcl2_p001_69619.trc:
ORA-10388: parallel query server interrupt (failure)
2025-05-06T10:37:52.311276+08:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl2/trace/orcl2_p002_69621.trc:
ORA-10388: parallel query server interrupt (failure)
2025-05-06T10:37:52.311276+08:00
Recovery slave process is holding some recovery locks. Killing the instance now.
2025-05-06T10:37:52.311280+08:00
Recovery slave process is holding some recovery locks. Killing the instance now.
2025-05-06T10:37:52.311308+08:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl2/trace/orcl2_p003_69623.trc:
ORA-10388: parallel query server interrupt (failure)
2025-05-06T10:37:52.311329+08:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl2/trace/orcl2_p004_69625.trc:
ORA-10388: parallel query server interrupt (failure)
2025-05-06T10:37:52.311341+08:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl2/trace/orcl2_p005_69627.trc:
ORA-10388: parallel query server interrupt (failure)
2025-05-06T10:37:52.311345+08:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl2/trace/orcl2_p007_69631.trc:
ORA-10388: parallel query server interrupt (failure)
2025-05-06T10:37:52.311353+08:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl2/trace/orcl2_p008_69633.trc:
ORA-10388: parallel query server interrupt (failure)
2025-05-06T10:37:52.311374+08:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl2/trace/orcl2_p006_69629.trc:
ORA-10388: parallel query server interrupt (failure)
2025-05-06T10:37:52.311386+08:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl2/trace/orcl2_p009_69635.trc:
ORA-10388: parallel query server interrupt (failure)
2025-05-06T10:37:52.311402+08:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl2/trace/orcl2_p00a_69637.trc:
ORA-10388: parallel query server interrupt (failure)
2025-05-06T10:37:52.311513+08:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl2/trace/orcl2_p00c_69641.trc:
ORA-10388: parallel query server interrupt (failure)
2025-05-06T10:37:52.311515+08:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl2/trace/orcl2_p00b_69639.trc:
ORA-10388: parallel query server interrupt (failure)
2025-05-06T10:37:52.348331+08:00
USER (ospid: 69617): terminating the instance due to error 10388
2025-05-06T10:37:52.585589+08:00
System state dump requested by (instance=2, osid=69617 (P000)), summary=[abnormal instance termination].
System State dumped to trace file /u01/app/oracle/diag/rdbms/orcl/orcl2/trace/orcl2_diag_67490_20250506103752.trc
2025-05-06T10:37:54.016704+08:00
License high water mark = 34
2025-05-06T10:37:55.387072+08:00
Instance terminated by USER, pid = 69617
2025-05-06T10:37:55.388683+08:00
Warning: 2 processes are still attach to shmid 2850830:
 (size: 45056 bytes, creator pid: 65902, last attach/detach pid: 67492)
2025-05-06T10:37:56.018027+08:00
USER (ospid: 69907): terminating the instance
2025-05-06T10:37:56.021711+08:00
Instance terminated by USER, pid = 69907

A MOS search turned up similar articles:
Database doesn't open after crash ORA-00600 [krccfl_chunk] (Doc ID 2967548.1)
Bug 33251482 - ORA-487 / ORA-600 [krccfl_chunk] : CTWR process terminated during PDB creation (Doc ID 33251482.8)


Analyzing the customer's environment: the alert log line "Block change tracking file is current." confirms that block change tracking (BCT) is enabled, and the log also shows this is a multitenant (PDB) environment. Further investigation showed that, some time ago, a datafile had been created on local storage (this is actually a RAC environment):

2024-12-23T11:07:09.168322+08:00
PDBODS(5):Completed: alter tablespace PDBODS_DATA add datafile 'D:\APP\ADMINISTRATOR\ORADATA\ORCL\USERS02.DBF'
 size 5000M autoextend on next 1000M maxsize 32000M

The actual path of that file as stored in the database is now /u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/D:APPADMINISTRATORORADATAORCLUSERS02.DBF.
Given this, the fix is fairly simple: on the node that holds the locally created datafile, disable BCT, open the database, and then copy the datafile into ASM.
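A minimal sketch of that workaround, run on the node holding the locally created file (the query simply lists datafiles that are not inside ASM; the actual move into ASM can then be done with, for example, RMAN backup as copy plus switch datafile, or ALTER DATABASE MOVE DATAFILE on 12c):

SQL> alter database disable block change tracking;
SQL> alter database open;
SQL> select file#, name from v$datafile where name not like '+%';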

ORA-600 kddummy_blkchk: Database Restarting in a Loop

Contact: Phone/WeChat (+86 17813235971), QQ (107644445) - QQ consultation with 惜分飞 (xifenfei)

Title: ORA-600 kddummy_blkchk: Database Restarting in a Loop

Author: 惜分飞 (xifenfei) © All rights reserved [Do not reproduce in any form without my consent; otherwise I reserve the right to pursue legal liability.]

A 10g RAC database running on an HP platform suddenly went abnormal, and from then on it would run for a while and then restart by itself. The customer asked me to analyze and fix it.

Thu May  8 06:23:21 2025
ALTER DATABASE OPEN
Picked broadcast on commit scheme to generate SCNs
Thu May  8 06:23:21 2025
Thread 1 opened at log sequence 74302
  Current log# 1 seq# 74302 mem# 0: /dev/vgdata/rrac_redo01
Successful open of redo thread 1
Thu May  8 06:23:21 2025
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Thu May  8 06:23:21 2025
SMON: enabling cache recovery
Thu May  8 06:23:22 2025
Successfully onlined Undo Tablespace 1.
Thu May  8 06:23:22 2025
SMON: enabling tx recovery
Thu May  8 06:23:22 2025
Database Characterset is ZHS16CGB231280
Opening with internal Resource Manager plan
where NUMA PG = 1, CPUs = 4
replication_dependency_tracking turned off (no async multimaster replication found)
Thu May  8 06:23:23 2025
Errors in file /users/oracle/admin/orcl/bdump/orcl1_smon_15721.trc:
ORA-00600: internal error code, arguments: [kddummy_blkchk], [118], [333578], [18019], [], [], [], []
Starting background process QMNC
QMNC started with pid=22, OS id=15792
Thu May  8 06:23:25 2025
ORACLE Instance orcl1 (pid = 13) - Error 607 encountered while recovering transaction (9, 33) on object 775794.
Thu May  8 06:23:25 2025
Errors in file /users/oracle/admin/orcl/bdump/orcl1_smon_15721.trc:
ORA-00607: Internal error occurred while making a change to a data block
ORA-00600: internal error code, arguments: [kddummy_blkchk], [118], [333578], [18019], [], [], [], []
Thu May  8 06:23:26 2025
Completed: ALTER DATABASE OPEN
Thu May  8 06:23:26 2025
Doing block recovery for file 118 block 333578
Block recovery from logseq 74302, block 22 to scn 46740761996
Thu May  8 06:23:26 2025
Recovery of Online Redo Log: Thread 1 Group 1 Seq 74302 Reading mem 0
  Mem# 0: /dev/vgdata/rrac_redo01
Block recovery stopped at EOT rba 74302.33.16
Block recovery completed at rba 74302.33.16, scn 10.3791089036
Thu May  8 06:23:33 2025
Trace dumping is performing id=[cdmp_20250508062324]
Thu May  8 06:25:55 2025
Errors in file /users/oracle/admin/orcl/bdump/orcl1_smon_15721.trc:
ORA-00600: internal error code, arguments: [kddummy_blkchk], [118], [333578], [18019], [], [], [], []
Thu May  8 06:25:58 2025
ORACLE Instance orcl1 (pid = 13) - Error 607 encountered while recovering transaction (9, 33) on object 775794.
Thu May  8 06:27:32 2025
Errors in file /users/oracle/admin/orcl/bdump/orcl1_smon_15721.trc:
ORA-00600: internal error code, arguments: [kddummy_blkchk], [118], [333578], [18019], [], [], [], []
Errors in file /users/oracle/admin/orcl/bdump/orcl1_smon_15721.trc:
Doing block recovery for file 118 block 333578
Block recovery from logseq 74302, block 372 to scn 46740952565
Thu May  8 06:27:41 2025
Errors in file /users/oracle/admin/orcl/bdump/orcl1_smon_15721.trc:
ORA-00600: internal error code, arguments: [kddummy_blkchk], [118], [333578], [18019], [], [], [], []
ORACLE Instance orcl1 (pid = 13) - Error 607 encountered while recovering transaction (9, 33) on object 775794.
Thu May  8 06:27:43 2025
Errors in file /users/oracle/admin/orcl/bdump/orcl1_smon_15721.trc:
ORA-00607: Internal error occurred while making a change to a data block
ORA-00600: internal error code, arguments: [kddummy_blkchk], [118], [333578], [18019], [], [], [], []
Doing block recovery for file 118 block 333578
Block recovery from logseq 74302, block 372 to scn 46740952565
Thu May  8 06:27:45 2025
Recovery of Online Redo Log: Thread 1 Group 1 Seq 74302 Reading mem 0
  Mem# 0: /dev/vgdata/rrac_redo01
Block recovery completed at rba 74302.394.16, scn 10.3791279606
Thu May  8 06:27:47 2025
Errors in file /users/oracle/admin/orcl/bdump/orcl1_smon_15721.trc:
ORA-00600: internal error code, arguments: [kddummy_blkchk], [118], [333578], [18019], [], [], [], []
Thu May  8 06:28:07 2025
Errors in file /users/oracle/admin/orcl/bdump/orcl1_pmon_15690.trc:
ORA-00474: SMON process terminated with error
Thu May  8 06:28:07 2025
PMON: terminating instance due to error 474

The database restarts happen because the SMON process fails, which brings the database down; the RAC clusterware then automatically restarts it, producing the restart loop. The root cause of the SMON failure is the message "ORACLE Instance orcl1 (pid = 13) - Error 607 encountered while recovering transaction (9, 33) on object 775794.": there is a dead transaction to roll back, but the rollback runs into the ORA-600 [kddummy_blkchk] error and cannot complete, so SMON fails and the instance crashes. Handling this problem is relatively straightforward:
1. Use the ORA-600 [kddummy_blkchk] arguments to identify the object being reported (see the query in the sketch after this list):

ORA-00600: internal error code, arguments: [kddummy_blkchk], [a], [b], [c]

ARGUMENTS:
Arg [a] Absolute file number
Arg [b] Block number
Arg [c] Internal error code returned from kcbchk() which indicates the problem encountered.

2. Disable the transaction rollback, open the database, and then rebuild the object identified via dba_extents (or via the object id reported in the transaction error message) to complete the recovery; one way of disabling the rollback is sketched below.
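A minimal sketch of both steps. The 10513 event is one commonly used way of stopping SMON from attempting the dead transaction rollback (verify the event and level against MOS for your exact version, and remove it once the object has been rebuilt); the dba_extents query then maps the ORA-600 arguments above (file 118, block 333578) to the owning segment:

SQL> startup mount
SQL> alter system set events '10513 trace name context forever, level 2';
SQL> alter database open;
SQL> select owner, segment_name, segment_type from dba_extents
     where file_id = 118 and 333578 between block_id and block_id + blocks - 1;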

Known bugs currently associated with ORA-00600: internal error code, arguments: [kddummy_blkchk], [a], [b], [c]:

Bug Fixed Description
12349316 11.2.0.4, 12.1.0.2, 12.2.0.1 DBMS_SPACE_ADMIN.TABLESPACE_FIX_BITMAPS fails with ORA-600 [kddummy_blkchk] / ORA-600 [kdBlkCheckError] / ORA-607
17325413 11.2.0.3.BP23, 11.2.0.4.2, 11.2.0.4.BP04, 12.1.0.1.3, 12.1.0.2, 12.2.0.1 Drop column with DEFAULT value and NOT NULL definition ends up with Dropped Column Data still on Disk leading to Corruption
13715932 11.2.0.4, 12.1.0.1 ORA-600 [kddummy_blkchk] [18038] while adding extents to a large datafile
12417369 11.2.0.2.5, 11.2.0.2.BP13, 11.2.0.2.GIPSU05, 11.2.0.3, 12.1.0.1 Block corruption from rollback on compressed table
10324526 10.2.0.5.4, 11.1.0.7.8, 11.2.0.2.3, 11.2.0.2.BP06, 11.2.0.3, 12.1.0.1 ORA-600 [kddummy_blkchk] [6106] / corruption on COMPRESS table in TTS
10113224 11.2.0.3, 12.1.0.1 Index coalesce may generate invalid redo if blocks in the buffer cache are invalid/corrupted
9726702 11.2.0.3, 12.1.0.1 DBMS_SPACE_ADMIN.assm_segment_verify reports HWM related inconsistencies
9724970 11.2.0.1.BP08, 11.2.0.2.2, 11.2.0.2.BP02, 11.2.0.3, 12.1.0.1 Block Corruption with PDML UPDATE. ORA_600 [4511] OERI[kdblkcheckerror] by block check
9711859 10.2.0.5.1, 11.1.0.7.6, 11.2.0.2, 12.1.0.1 ORA-600 [ktsptrn_fix-extmap] / ORA-600 [kdblkcheckerror] during extent allocation caused by bug 8198906
9581240 11.1.0.7.9, 11.2.0.2, 12.1.0.1 Corruption / ORA-600 [kddummy_blkchk] [6101] / ORA-600 [7999] after RENAME operation during ROLLBACK
9350204 11.2.0.3, 12.1.0.1 Spurious ORA-600 [kddummy_blkchk] .. [6145] during CR operations on tables with ROWDEPENDENCIES
9231605 11.1.0.7.4, 11.2.0.1.3, 11.2.0.1.BP02, 11.2.0.2, 12.1.0.1 Block corruption with missing row on a compressed table after DELETE
9119771 11.2.0.2, 12.1.0.1 OERI [kddummy_blkchk]…[6108] from ‘SHRINK SPACE CASCADE’
9019113 11.2.0.1.BP02, 11.2.0.2, 12.1.0.1 ORA-600 [17182] ORA-7445 [memcpy] ORA-600 [kdBlkCheckError] for OLTP COMPRESS table in OLTP Compression REDO during RECOVERY
8951812 11.2.0.2, 12.1.0.1 Corrupt index by rebuild online. Possible OERI [kddummy_blkchk] by SMON
8720802 10.2.0.5, 11.2.0.1.BP07, 11.2.0.2, 12.1.0.1 Add check for row piece pointing to itself (db_block_checking,dbv,rman,analyze)
8331063 11.2.0.3, 12.1.0.1 Corrupt Undo. ORA-600 [2015] in Undo Block During Rollback
6523037 11.2.0.1.BP07, 11.2.0.2.2, 11.2.0.2.BP01, 11.2.0.3, 12.1.0.1 Corruption / ORA-600 [kddummy_blkchk] [6110] on update
8277580 11.1.0.7.2, 11.2.0.1, 11.2.0.2, 12.1.0.1 Corruption on compressed tables during Recovery and Quick Multi Delete (QMD).
9964102 11.2.0.1 OERI:2015 / OERI:kddummy_blkchk / undo corruption from supplementat logging with compressed tables
8613137 11.1.0.7.2, 11.2.0.1 ORA-600 updating table with DEFERRED constraints
8360192 11.1.0.7.6, 11.2.0.1 ORA-600 [kdBlkCheckError] [6110] / corruption from insert
8239658 10.2.0.5, 11.2.0.1 Dump / corruption writing row to compressed table
8198906 10.2.0.5, 11.2.0.1 OERI [kddummy_blkchk] / OERI [5467] for an aborted transaction of allocating extents
7715244 11.1.0.7.2, 11.2.0.1 Corruption on compressed tables. Error codes 6103 / 6110
7662491 10.2.0.4.2, 10.2.0.5, 11.1.0.7.4, 11.2.0.1 Array Update can corrupt a row. Errors OERI[kghstack_free1] or OERI[kddummy_blkchk][6110]
7411865 10.2.0.4.2, 10.2.0.5, 11.1.0.7.1, 11.2.0.1 OERI:13030 / ORA-1407 / block corruption from UPDATE .. RETURNING DML with trigger
7331181 11.2.0.1 ORA-1555 or OERI [kddummy_blkchk] [file#] [block#] [6126] during CR Rollback in query
7293156 11.1.0.7, 11.2.0.1 ORA-600 [2023] by Parallel Transaction Rollback when applying Multi-block undo Head-piece / Tail-piece
7041254 11.1.0.7.5, 11.2.0.1 ORA-19661 during RMAN restore check logical of compressed backup / IOT dummy key
6760697 10.2.0.4.3, 10.2.0.5, 11.1.0.7, 11.2.0.1 DBMS_SPACE_ADMIN.ASSM_SEGMENT_VERIFY does not detect certain segment header block corruption
6647480 10.2.0.4.4, 10.2.0.5, 11.1.0.7.3, 11.2.0.1 Corruption / OERI [kddummy_blkchk] .. [18021] with ASSM
6134368 10.2.0.5, 11.2.0.1 ORA-1407 / block corruption from UPDATE .. RETURNING DML with trigger – SUPERCEDED
6057203 10.2.0.4, 11.1.0.7, 11.2.0.1 Corruption with zero length column (ZLC) / OERI [kcbchg1_6] from Parallel update
6653934 10.2.0.4.2, 10.2.0.5, 11.1.0.7 Dump / block corruption from ONLINE segment shrink with ROWDEPENDENCIES
6674196 10.2.0.4, 10.2.0.5, 11.1.0.6 OERI / buffer cache corruption using ASM, OCFS or any ksfd client like ODM
5599596 10.2.0.4, 11.1.0.6 Block corruption / OERI [kddummy_blkchk] on clustered or compressed tables
5496041 10.2.0.4, 11.1.0.6 OERI[6006] / index corruption on compressed index
5386204 10.2.0.4.1, 10.2.0.5, 11.1.0.6 Block corruption / OERI[kddummy_blkchk] after direct load of ASSM segment
5363584 10.2.0.4, 11.1.0.6 Array insert into table can corrupt redo
4602031 10.2.0.2, 11.1.0.6 Block corruption from UPDATE or MERGE into compressed table
4493447 11.1.0.6 Spurious ORA-600 [kddummy_blkchk] [file#] [block#] [6145] on rollback of array update
4329302 11.1.0.6 OERI [kddummy_blkchk] [file#] [block#] [6145] on rollback of update with logminer
6075487 10.2.0.4 OERI[kddummy_blkchk]..[18020/18026] for DDL on plugged ASSM tablespace with FLASHBACK
4054640 10.1.0.5, 10.2.0.1 Block corruption / OERI [kddummy_blkchk] at physical standby
4000840 10.1.0.4, 10.2.0.1, 9.2.0.7 Update of a row with more than 255 columns can cause block corruption
3772033 9.2.0.7, 10.1.0.4, 10.2.0.1 OERI[ktspfmb_create1] creating a LOB in ASSM using 2k blocksize