Download the complete Oracle 19.11 db and grid (with the April 2021 patches)

Contact: phone/WeChat (+86 17813235971), QQ (107644445) — consult 惜分飞 via QQ


Author: 惜分飞 © All rights reserved [Reproduction in any form without my consent is prohibited; I reserve the right to pursue legal liability.]

I recently deployed a 19c RAC and applied patches 32545008 (GI Update, April 2021) and 32399816 (OJVM Update, April 2021), then created installable images with -createGoldImage. Installing directly from these zips gives you GI and DB (including OJVM) already at the April 2021 patch level.

[oracle@dzbl1 ~]$ $ORACLE_HOME/runInstaller -createGoldImage -silent -destinationLocation /tmp/soft_img
Launching Oracle Database Setup Wizard...

Successfully Setup Software.
Gold Image location: /tmp/soft_img/db_home_2021-05-20_09-05-40PM.zip


[oracle@dzbl1 ~]$ exit
logout
[root@dzbl1 ~]# su - grid
Last login: Thu May 20 20:57:05 CST 2021
[grid@dzbl1 ~]$ ./gridSetup.sh -createGoldImage  -silent -destinationLocation /tmp/soft_img
-bash: ./gridSetup.sh: No such file or directory
[grid@dzbl1 ~]$ $ORACLE_HOME/gridSetup.sh -createGoldImage  -silent -destinationLocation /tmp/soft_img
Launching Oracle Grid Infrastructure Setup Wizard...

Successfully Setup Software.
Gold Image location: /tmp/soft_img/grid_home_2021-05-20_09-13-58PM.zip


[grid@dzbl1 ~]$ md5sum  /tmp/soft_img/grid_home_2021-05-20_09-13-58PM.zip
7cefb1be8ead8250435d5a95785d1239  /tmp/soft_img/grid_home_2021-05-20_09-13-58PM.zip
[grid@dzbl1 ~]$ md5sum /tmp/soft_img/db_home_2021-05-20_09-05-40PM.zip
325841792c44f168c524b440440773b0  /tmp/soft_img/db_home_2021-05-20_09-05-40PM.zip
[grid@dzbl1 ~]$ opatch lspatches
32585572;DBWLM RELEASE UPDATE 19.0.0.0.0 (32585572)
32584670;TOMCAT RELEASE UPDATE 19.0.0.0.0 (32584670)
32579761;OCW RELEASE UPDATE 19.11.0.0.0 (32579761)
32576499;ACFS RELEASE UPDATE 19.11.0.0.0 (32576499)
32545013;Database Release Update : 19.11.0.0.210420 (32545013)

OPatch succeeded.
[grid@dzbl1 ~]$ su - oracle
Password: 
Last login: Thu May 20 21:04:33 CST 2021 on pts/1
[oracle@dzbl1 ~]$ opatch lspatches
32399816;OJVM RELEASE UPDATE: 19.11.0.0.210420 (32399816)
32579761;OCW RELEASE UPDATE 19.11.0.0.0 (32579761)
32545013;Database Release Update : 19.11.0.0.210420 (32545013)

OPatch succeeded.
[oracle@dzbl1 ~]$ ls -l /tmp/soft_img/
total 9225956
-rw-r--r-- 1 oracle oinstall 4268265132 May 20 21:13 db_home_2021-05-20_09-05-40PM.zip
-rw-r--r-- 1 grid   oinstall 5179109549 May 20 21:21 grid_home_2021-05-20_09-13-58PM.zip
[oracle@dzbl1 ~]$ 



Download the zips to Windows, rename them following Oracle's official naming convention, and verify the MD5 checksums to confirm the files are intact.

C:\Users\XFF>CertUtil -hashfile E:\vm_shared\LINUX.X64_1911000_grid_home.zip md5
MD5 hash of E:\vm_shared\LINUX.X64_1911000_grid_home.zip:
7cefb1be8ead8250435d5a95785d1239
CertUtil: -hashfile command completed successfully.

C:\Users\XFF>CertUtil -hashfile E:\vm_shared\LINUX.X64_1911000_db_home.zip md5
MD5 hash of E:\vm_shared\LINUX.X64_1911000_db_home.zip:
325841792c44f168c524b440440773b0
CertUtil: -hashfile command completed successfully.

Download link for the complete 19.11 db and grid (this build includes the April 2021 patches): Oracle 19.11 database and grid software download, extraction code: bamf. After downloading, verify the MD5 checksums to confirm the files have not been modified by anyone else.
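On Linux, the same check can be scripted with md5sum -c. The checksum file below hard-codes the two MD5 values printed earlier; /path/to/downloads is a placeholder for wherever the zips were saved:

```shell
# Checksums copied from the md5sum output above; adjust the directory to taste.
cat > /tmp/oracle19c.md5 <<'EOF'
7cefb1be8ead8250435d5a95785d1239  LINUX.X64_1911000_grid_home.zip
325841792c44f168c524b440440773b0  LINUX.X64_1911000_db_home.zip
EOF
# md5sum -c recomputes each file's hash and prints "OK" or "FAILED" per line.
(cd /path/to/downloads && md5sum -c /tmp/oracle19c.md5)
```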

Problems installing 19c RAC on a public cloud — UDP failure on the 169.254 network




At a customer's request I installed 19c RAC on the xx public cloud. After much effort all round, the installation stood as follows:
1. root.sh succeeded on both nodes, CRS started normally, and the ASM disk groups were accessible, but on one node the ASM instance could not start, and on one node the DB instance could not start

--- Node 1
[root@dzbl1 ~]# su - grid
Last login: Thu May 20 12:32:55 CST 2021
[grid@dzbl1 ~]$ ps -ef|grep ASM
grid       477     1  0 May19 ?        00:00:24 /u01/app/19c/grid/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit
grid     22075 22039  0 12:42 pts/1    00:00:00 grep --color=auto ASM
[grid@dzbl1 ~]$ asmcmd
ASMCMD> lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304   1907344  1904420                0         1904420              0             N  DATA/
MOUNTED  EXTERN  N         512             512   4096  4194304   1150344  1149032                0         1149032              0             N  FRA/
MOUNTED  EXTERN  N         512             512   4096  4194304     14304    13988                0           13988              0             Y  SYSTEMDG/
ASMCMD> exit
[grid@dzbl1 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       dzbl1                    STABLE
               ONLINE  ONLINE       dzbl2                    STABLE
ora.chad
               ONLINE  ONLINE       dzbl1                    STABLE
               ONLINE  ONLINE       dzbl2                    STABLE
ora.net1.network
               ONLINE  ONLINE       dzbl1                    STABLE
               ONLINE  ONLINE       dzbl2                    STABLE
ora.ons
               ONLINE  ONLINE       dzbl1                    STABLE
               ONLINE  ONLINE       dzbl2                    STABLE
ora.proxy_advm
               OFFLINE OFFLINE      dzbl1                    STABLE
               OFFLINE OFFLINE      dzbl2                    STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       dzbl1                    STABLE
      2        ONLINE  ONLINE       dzbl2                    STABLE
      3        ONLINE  OFFLINE                               STABLE
ora.DATA.dg(ora.asmgroup)
      1        ONLINE  OFFLINE                               STABLE
      2        ONLINE  ONLINE       dzbl2                    STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.FRA.dg(ora.asmgroup)
      1        ONLINE  OFFLINE                               STABLE
      2        ONLINE  ONLINE       dzbl2                    STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       dzbl2                    STABLE
ora.SYSTEMDG.dg(ora.asmgroup)
      1        OFFLINE OFFLINE                               STABLE
      2        ONLINE  ONLINE       dzbl2                    STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  OFFLINE                               STABLE
      2        ONLINE  ONLINE       dzbl2                    Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       dzbl1                    STABLE
      2        ONLINE  ONLINE       dzbl2                    STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       dzbl2                    STABLE
ora.dzbl1.vip
      1        ONLINE  ONLINE       dzbl1                    STABLE
ora.dzbl2.vip
      1        ONLINE  ONLINE       dzbl2                    STABLE
ora.dzbldb.db
      1        ONLINE  OFFLINE                               STABLE
      2        ONLINE  ONLINE       dzbl2                    Open,HOME=/u01/app/o
                                                             racle/product/19c/db
                                                             _1,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       dzbl2                    STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       dzbl2                    STABLE
--------------------------------------------------------------------------------
[grid@dzbl1 ~]$ 

--- Node 2
[grid@dzbl2 ~]$ ps -ef|grep ASM
grid      2464     1  0 May18 ?        00:00:29 /u01/app/19c/grid/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit
grid      6826     1  0 May19 ?        00:00:09 oracle+ASM2_asmb_dzbldb2 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid     14089     1  0 12:38 ?        00:00:00 asm_m000_+ASM2
grid     15670     1  0 12:40 ?        00:00:00 oracle+ASM2_crf (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid     16503     1  0 May18 ?        00:00:05 asm_pmon_+ASM2
grid     16505     1  0 May18 ?        00:00:04 asm_clmn_+ASM2
grid     16507     1  0 May18 ?        00:00:11 asm_psp0_+ASM2
grid     16518     1  0 12:42 ?        00:00:00 oracle+ASM2 (LOCAL=NO)
grid     16562     1  0 May18 ?        00:18:22 asm_vktm_+ASM2
grid     16567     1  0 May18 ?        00:00:08 asm_gen0_+ASM2
grid     16569     1  0 May18 ?        00:00:02 asm_mman_+ASM2
grid     16573     1  0 May18 ?        00:00:06 asm_gen1_+ASM2
grid     16577     1  0 May18 ?        00:01:13 asm_diag_+ASM2
grid     16579     1  0 May18 ?        00:00:04 asm_ping_+ASM2
grid     16581     1  0 May18 ?        00:00:09 asm_pman_+ASM2
grid     16583     1  0 May18 ?        00:03:08 asm_dia0_+ASM2
grid     16585     1  0 May18 ?        00:01:41 asm_lmon_+ASM2
grid     16587     1  0 May18 ?        00:01:55 asm_lmd0_+ASM2
grid     16589     1  0 May18 ?        00:04:26 asm_lms0_+ASM2
grid     16591     1  0 May18 ?        00:02:13 asm_lmhb_+ASM2
grid     16596     1  0 May18 ?        00:00:02 asm_lck1_+ASM2
grid     16598     1  0 May18 ?        00:00:02 asm_dbw0_+ASM2
grid     16600     1  0 May18 ?        00:00:02 asm_lgwr_+ASM2
grid     16602     1  0 May18 ?        00:00:05 asm_ckpt_+ASM2
grid     16604     1  0 May18 ?        00:00:01 asm_smon_+ASM2
grid     16606     1  0 May18 ?        00:00:02 asm_lreg_+ASM2
grid     16608     1  0 May18 ?        00:00:01 asm_pxmn_+ASM2
grid     16610     1  0 May18 ?        00:00:11 asm_rbal_+ASM2
grid     16612     1  0 May18 ?        00:00:24 asm_gmon_+ASM2
grid     16614     1  0 May18 ?        00:00:06 asm_mmon_+ASM2
grid     16616     1  0 May18 ?        00:00:47 asm_mmnl_+ASM2
grid     16618     1  0 May18 ?        00:02:52 asm_imr0_+ASM2
grid     16627     1  0 May18 ?        00:00:30 asm_scm0_+ASM2
grid     16633     1  0 May18 ?        00:00:11 asm_lck0_+ASM2
grid     16662     1  0 May18 ?        00:07:10 asm_gcr0_+ASM2
grid     16699     1  0 May19 ?        00:00:00 oracle+ASM2 (LOCAL=NO)
grid     16746     1  0 May18 ?        00:00:06 asm_asmb_+ASM2
grid     16748     1  0 May18 ?        00:00:13 oracle+ASM2_asmb_+asm2 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid     16756     1  0 May18 ?        00:00:00 oracle+ASM2_ocr (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid     17567     1  0 May18 ?        00:00:00 oracle+ASM2 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid     17622 17536  0 12:43 pts/1    00:00:00 grep --color=auto ASM
grid     27829     1  0 May18 ?        00:00:00 oracle+ASM2 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
[grid@dzbl2 ~]$ asmcmd
ASMCMD> lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304   1907344  1904420                0         1904420              0             N  DATA/
MOUNTED  EXTERN  N         512             512   4096  4194304   1150344  1149032                0         1149032              0             N  FRA/
MOUNTED  EXTERN  N         512             512   4096  4194304     14304    13988                0           13988              0             Y  SYSTEMDG/
ASMCMD> exit
[grid@dzbl2 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       dzbl1                    STABLE
               ONLINE  ONLINE       dzbl2                    STABLE
ora.chad
               ONLINE  ONLINE       dzbl1                    STABLE
               ONLINE  ONLINE       dzbl2                    STABLE
ora.net1.network
               ONLINE  ONLINE       dzbl1                    STABLE
               ONLINE  ONLINE       dzbl2                    STABLE
ora.ons
               ONLINE  ONLINE       dzbl1                    STABLE
               ONLINE  ONLINE       dzbl2                    STABLE
ora.proxy_advm
               OFFLINE OFFLINE      dzbl1                    STABLE
               OFFLINE OFFLINE      dzbl2                    STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       dzbl1                    STABLE
      2        ONLINE  ONLINE       dzbl2                    STABLE
      3        ONLINE  OFFLINE                               STABLE
ora.DATA.dg(ora.asmgroup)
      1        ONLINE  OFFLINE                               STABLE
      2        ONLINE  ONLINE       dzbl2                    STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.FRA.dg(ora.asmgroup)
      1        ONLINE  OFFLINE                               STABLE
      2        ONLINE  ONLINE       dzbl2                    STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       dzbl2                    STABLE
ora.SYSTEMDG.dg(ora.asmgroup)
      1        OFFLINE OFFLINE                               STABLE
      2        ONLINE  ONLINE       dzbl2                    STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  OFFLINE                               STABLE
      2        ONLINE  ONLINE       dzbl2                    Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       dzbl1                    STABLE
      2        ONLINE  ONLINE       dzbl2                    STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       dzbl2                    STABLE
ora.dzbl1.vip
      1        ONLINE  ONLINE       dzbl1                    STABLE
ora.dzbl2.vip
      1        ONLINE  ONLINE       dzbl2                    STABLE
ora.dzbldb.db
      1        ONLINE  OFFLINE                               STABLE
      2        ONLINE  ONLINE       dzbl2                    Open,HOME=/u01/app/o
                                                             racle/product/19c/db
                                                             _1,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       dzbl2                    STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       dzbl2                    STABLE
--------------------------------------------------------------------------------
[grid@dzbl2 ~]$ 

2. Analysis of why one DB instance and one ASM instance cannot start

-- Instance startup error
SQL>  startup
ORA-03113: end-of-file on communication channel

-- Alert log from the node that cannot start
2021-05-19T12:41:32.143124+08:00
NOTE: ASMB (index:0) registering with ASM instance as Flex client 0xffffffffffffffff (reg:2449521867) (startid:1072960888) (new connection)
2021-05-19T12:41:32.349766+08:00
My CSS node number is 1
My CSS hostname is dzbl1
lmon registered with NM - instance number 1 (internal mem no 0)
2021-05-19T12:41:34.054865+08:00
Using default pga_aggregate_limit of 16384 MB
2021-05-19T12:42:16.978085+08:00
No connectivity to other instances in the cluster during startup. Hence, LMON is terminating the instance. Please check the LMON trace file for details.
 Also, please check the network logs of this instance along with clusterwide network health for problems and then re-start this instance.
LMON (ospid: ): terminating the instance due to ORA error
Cause - 'Instance is being terminated by LMON'
2021-05-19T12:42:17.115807+08:00
System state dump requested by (instance=1, osid=29660 (LMON)), summary=[abnormal instance termination]. error - 'Instance is terminating.
System State dumped to trace file /u01/app/oracle/diag/rdbms/dzbldb/dzbldb1/trace/dzbldb1_diag_29641.trc
2021-05-19T12:42:17.227469+08:00
Dumping diagnostic data in directory=[cdmp_20210519124217], requested by (instance=1, osid=29660 (LMON)), summary=[abnormal instance termination].
2021-05-19T12:42:18.344481+08:00
Instance terminated by LMON, pid = 29660

-- LMON trace from the healthy node
*** 2021-05-19T12:42:29.348455+08:00
IPCLW:[0.16]{-}[CNCT]:PROTO: [1621399349248289]Warning! ACNH://0x7f3d993a7990/peer=[UNKNWN]&ospid=0&msn=993097808&seq=995707504
  (169.254.14.18:32056) has outstanding sends during delete.
IPCLW:[0.17]{-}[CNCT]:UTIL: [1621399349248289]  ACNH 0x7f3d993a7990 State: 2 SMSN: 993097806 PKT(993097808.995707504) # Pending: 2
IPCLW:[0.18]{-}[CNCT]:UTIL: [1621399349248289]   Peer: [UNKNWN].0 AckSeq: 0
IPCLW:[0.19]{-}[CNCT]:UTIL: [1621399349248289]   Flags: 0x40000000 IHint: 0x30693d920000001f THint: 0x0
IPCLW:[0.20]{-}[CNCT]:UTIL: [1621399349248289]   Local Address: 169.254.17.231:19443 Remote Address: 169.254.14.18:32056
IPCLW:[0.21]{-}[CNCT]:UTIL: [1621399349248289]   Remote PID: ver 0 flags 1 trans 2 tos 0 opts 0 xdata3 165f xdata2 70dbd629
IPCLW:[0.22]{-}[CNCT]:UTIL: [1621399349248289]             : mmsz 32768 mmr 4096 mms 4096 xdata c2a71bf9
IPCLW:[0.23]{-}[CNCT]:UTIL: [1621399349248289]   IVPort: 46944 TVPort: 7161 IMPT: 25433 RMPT: 5727   Pending Sends: Yes Unacked Sends: Yes
IPCLW:[0.24]{-}[CNCT]:UTIL: [1621399349248289]   Send Engine Queued: No sshdl -1 ssts 0 rtts 0 snderrchk 0 creqcnt 19 credits 0/0
IPCLW:[0.25]{-}[CNCT]:UTIL: [1621399349248289]   Unackd Messages 993097806 -> 993097807. SSEQ 995707502 Send Time: 
                                                  INVALID TIME SMSN # Xmits: 0 EMSN INVALID TIME
IPCLW:[0.26]{-}[CNCT]:UTIL: [1621399349248289]  Pending send queue:
IPCLW:[0.27]{-}[CNCT]:UTIL: [1621399349248289]    [0] mbuf 0x7f3d99397770 MSN 993097806 Seq 995707502 -> 995707503 # XMits: 0
IPCLW:[0.28]{-}[CNCT]:UTIL: [1621399349248289]    [1] mbuf 0x7f3d99397350 MSN 993097807 Seq 995707503 -> 995707504 # XMits: 0
kjxgfipccb: msg 0x7f3d9934a680, mbo 0x7f3d9934a670, type 24, ack 0, ref 0, stat 34
kjxgfipccb: msg 0x7f3d9934a878, mbo 0x7f3d9934a868, type 18, ack 0, ref 0, stat 34

The logs show that the failing node cannot communicate over UDP between 169.254.14.18:32056 and 169.254.17.231:19443; see Only One Instance of a RAC Database Can Start at a Time: Second Instance Fails to Start due to "No reconfig messages from other instances" – LMON is terminating the instance (Doc ID 2528588.1). As a result, only one node at a time can start its ASM and DB instances. So far the most likely cause appears to be some restriction the public cloud places on the 169.254 network.
As for why both nodes still mount the ASM disk groups and CRS starts normally: this is Flex ASM at work (when the local ASM instance starts normally, the disk groups are served by it directly; when the local ASM instance cannot start, Flex ASM still lets the node mount the disk groups through a remote ASM instance).
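A quick way to check whether UDP traffic actually passes between the interconnect endpoints is a hand-rolled probe; the address and port below are taken from the LMON trace above and are examples only. This is a sketch using bash's /dev/udp pseudo-device plus nc, not an Oracle-provided check:

```shell
# On the receiving node (dzbl2): listen for UDP datagrams on the probed port
# (nc from netcat or nmap-ncat must be installed):
#   nc -u -l 32056
# On the sending node (dzbl1): fire a probe at the peer's 169.254 HAIP address.
# A cloud-side block on 169.254/16 shows up as the listener never printing "probe".
timeout 3 bash -c 'echo probe > /dev/udp/169.254.14.18/32056' \
  && echo "UDP datagram sent" \
  || echo "UDP send failed"
```

Note that a successful local send does not prove delivery; the datagram must actually show up on the listening side.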

Notes on problems installing 11.2.0.4 RAC on Linux 7




Another customer required 11.2.0.4 RAC on the Linux 7 platform. For this installation I followed Installation walk-through – Oracle Grid/RAC 11.2.0.4 on Oracle Linux 7 (Doc ID 1951613.1): before executing root.sh during the grid installation I applied patch 18370031, and root.sh then ran through successfully.
Applying patch 18370031
This patch must be applied before root.sh is run during the GI installation.

[grid@ptdb1 soft]$ $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /tmp/soft/18370031
Oracle Interim Patch Installer version 11.2.0.3.27
Copyright (c) 2021, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/11.2.0/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/11.2.0/grid/oraInst.loc
OPatch version    : 11.2.0.3.27
OUI version       : 11.2.0.4.0
Log file location : /u01/app/11.2.0/grid/cfgtoollogs/opatch/opatch2021-05-19_20-06-19PM_1.log

Verifying environment and performing prerequisite checks...

--------------------------------------------------------------------------------
Start OOP by Prereq process.
Launch OOP...

Oracle Interim Patch Installer version 11.2.0.3.27
Copyright (c) 2021, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/11.2.0/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/11.2.0/grid/oraInst.loc
OPatch version    : 11.2.0.3.27
OUI version       : 11.2.0.4.0
Log file location : /u01/app/11.2.0/grid/cfgtoollogs/opatch/opatch2021-05-19_20-06-40PM_1.log

Verifying environment and performing prerequisite checks...
OPatch continues with these patches:   18370031  

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/11.2.0/grid')


Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch '18370031' to OH '/u01/app/11.2.0/grid'

Patching component oracle.crs, 11.2.0.4.0...
Patch 18370031 successfully applied.
Log file location: /u01/app/11.2.0/grid/cfgtoollogs/opatch/opatch2021-05-19_20-06-40PM_1.log

OPatch succeeded.

Run root.sh on both nodes

[root@ptdb1 rpm]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g 

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert

Adding Clusterware entries to oracle-ohasd.service
CRS-2672: Attempting to start 'ora.mdnsd' on 'ptdb1'
CRS-2676: Start of 'ora.mdnsd' on 'ptdb1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'ptdb1'
CRS-2676: Start of 'ora.gpnpd' on 'ptdb1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'ptdb1'
CRS-2672: Attempting to start 'ora.gipcd' on 'ptdb1'
CRS-2676: Start of 'ora.cssdmonitor' on 'ptdb1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'ptdb1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'ptdb1'
CRS-2672: Attempting to start 'ora.diskmon' on 'ptdb1'
CRS-2676: Start of 'ora.diskmon' on 'ptdb1' succeeded
CRS-2676: Start of 'ora.cssd' on 'ptdb1' succeeded

ASM created and started successfully.

Disk Group SYSDG created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 2ad9ee4ec49f4fe2bf80f0c7006bd395.
Successfully replaced voting disk group with +SYSDG.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   2ad9ee4ec49f4fe2bf80f0c7006bd395 (/dev/sdb) [SYSDG]
Located 1 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'ptdb1'
CRS-2676: Start of 'ora.asm' on 'ptdb1' succeeded
CRS-2672: Attempting to start 'ora.SYSDG.dg' on 'ptdb1'
CRS-2676: Start of 'ora.SYSDG.dg' on 'ptdb1' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded


-- Node 2
[root@ptdb2 ~]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g 

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to oracle-ohasd.service
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node ptdb1,
     number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

If you do not apply that patch, add the ohasd service to systemd manually.
While root.sh is running, wait until the init.ohasd file has been generated under /etc/init.d/, then run systemctl start ohasd.service to start the ohasd service. If /etc/init.d/init.ohasd does not exist yet, systemctl start ohasd.service will fail. If ohasd is not started in time and root.sh fails as a result, start the ohasd service and then rerun root.sh.

(1). Create an empty service file: /usr/lib/systemd/system/ohasd.service
touch /usr/lib/systemd/system/ohasd.service

(2).编辑文件ohasd.service添加如下内容
vi /usr/lib/systemd/system/ohasd.service
[Unit]
Description=Oracle High Availability Services
After=syslog.target
[Service]
ExecStart=/etc/init.d/init.ohasd run >/dev/null 2>&1
Type=simple
Restart=always
[Install]
WantedBy=multi-user.target

(3). Register, enable, and start the service
systemctl daemon-reload
systemctl enable ohasd.service
systemctl start ohasd.service
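Because the window between init.ohasd appearing and root.sh needing ohasd is easy to miss, the manual start above can be wrapped in a small wait loop run as root in a second terminal while root.sh executes. This is a convenience sketch, not part of the MOS procedure:

```shell
# Poll until root.sh has dropped the init script, then start the unit.
until [ -f /etc/init.d/init.ohasd ]; do
    sleep 2
done
systemctl start ohasd.service
systemctl is-active ohasd.service
```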

Cluster nodes not discovered when installing the DB software


Per the MOS document Database runInstaller "Nodes Selection" Window Does Not Show Cluster Nodes (Doc ID 1327486.1), this is caused by bad inventory information during the installation; the following method resolved it

[grid@ptdb1 bin]$ pwd
/u01/app/11.2.0/grid/oui/bin
[grid@ptdb1 bin]$ ./runInstaller -silent -ignoreSysPrereqs -updateNodeList 
>ORACLE_HOME=$ORACLE_HOME LOCAL_NODE="ptdb1" CLUSTER_NODES="{ptdb1,ptdb2}" CRS=true
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 8062 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

Resolving error INS-35423


A MOS search turned up Window: Prerequisite Error INS-35423 And Warnings When Installing RAC Database Software (Doc ID 2164220.1). Although that article describes a Windows environment, in my experience the situation was similar, and after following its method the installation proceeded normally

./runInstaller -jreLoc /etc/alternatives/jre_1.8.0 ORACLE_HOSTNAME="ptdb1"

DB installation hits Error in invoking target 'agent nmhs' of makefile
This is a very common error. If you do not use DB Control it can simply be ignored, or it can be handled as follows:
in $ORACLE_HOME/sysman/lib/ins_emagent.mk, change $(MK_EMAGENT_NMECTL) to $(MK_EMAGENT_NMECTL) -lnnz11, then retry.
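The ins_emagent.mk edit can also be scripted; this sed one-liner is a sketch (note it appends -lnnz11 after every occurrence of $(MK_EMAGENT_NMECTL), so run it only once and keep the backup):

```shell
# Back up the makefile first so the change can be reverted.
cp "$ORACLE_HOME/sysman/lib/ins_emagent.mk" "$ORACLE_HOME/sysman/lib/ins_emagent.mk.bak"
# Append -lnnz11 to the emagent link line.
sed -i 's/\$(MK_EMAGENT_NMECTL)/$(MK_EMAGENT_NMECTL) -lnnz11/g' \
    "$ORACLE_HOME/sysman/lib/ins_emagent.mk"
```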


If you do want to use DB Console, apply patch 19692824 after the database installation completes;
see the MOS document Installation of Oracle 11.2.0.4 Database Software on OL7 fails with 'Error in invoking target agent nmhs of makefile' & "undefined reference to symbol 'B_DestroyKeyObject'" error (Doc ID 1965691.1)

Windows 11.2.0.4: services fail to start after patching




I had long been puzzled by Oracle services failing to start after applying extended-support patches on Windows 11.2.0.4. Today, with a friend's guidance, the problem was finally resolved; here are the details of this patching exercise.
0. Windows version information
1. After stopping the Oracle services, applying the patch reports CheckActiveFilesAndExecutables

For this problem, run tasklist /M ora* to list the processes still holding Oracle DLLs, then kill them.

2. After the PSU patch is applied, the related services fail to start

This is because, after patching, the Oracle binaries depend on a newer MFC runtime; upgrade to the Microsoft Visual C++ 2005 Service Pack 1 Redistributable Package MFC Security Update (https://www.microsoft.com/zh-cn/download/confirmation.aspx?id=26347)

The related services then start normally

and the patch installs successfully

19c RU patch hits an oui-patch.xml (Permission denied) problem




It had been a while since my last implementation job. Today I installed a 19c RAC and planned to apply Patch 32545008 – GI Release Update 19.11.0.0.210420. During the process I ran into the oui-patch.xml problem; it is recorded here for reference:
1. OPATCHAUTO-72088

[root@jbsbdb1 ~]# opatchauto apply /u01/soft/32545008

OPatchauto session is initiated at Wed Apr 28 14:20:24 2021

System initialization log file is /u01/app/19.0/grid/cfgtoollogs/opatchautodb/systemconfig2021-04-28_02-20-29PM.log.

Session log file is /u01/app/19.0/grid/cfgtoollogs/opatchauto/opatchauto2021-04-28_02-20-57PM.log
The id for this session is N2PG

Wrong OPatch software installed in following homes:
Host:jbsbdb2, Home:/u01/app/oracle/product/19.0/db_1

Host:jbsbdb2, Home:/u01/app/19.0/grid

OPATCHAUTO-72088: OPatch version check failed.
OPATCHAUTO-72088: OPatch software version in homes selected for patching are different.
OPATCHAUTO-72088: Please install same OPatch software in all homes.
OPatchAuto failed.

OPatchauto session completed at Wed Apr 28 14:21:15 2021
Time taken to complete the session 0 minute, 51 seconds

 opatchauto failed with error code 42

Cause: OPatch had been upgraded only in node 1's grid and oracle homes; after upgrading OPatch on node 2 as well, the problem was resolved.

2. Handling oui-patch.xml (Permission denied)

[root@jbsbdb2 soft]# opatchauto apply /u01/soft/32545008

OPatchauto session is initiated at Wed Apr 28 14:52:29 2021

System initialization log file is /u01/app/19.0/grid/cfgtoollogs/opatchautodb/systemconfig2021-04-28_02-52-32PM.log.

Session log file is /u01/app/19.0/grid/cfgtoollogs/opatchauto/opatchauto2021-04-28_02-52-58PM.log
The id for this session is T6ST

Executing OPatch prereq operations to verify patch applicability on home /u01/app/19.0/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19.0/db_1
Patch applicability verified successfully on home /u01/app/19.0/grid

Patch applicability verified successfully on home /u01/app/oracle/product/19.0/db_1


Executing patch validation checks on home /u01/app/19.0/grid
Patch validation checks successfully completed on home /u01/app/19.0/grid


Executing patch validation checks on home /u01/app/oracle/product/19.0/db_1
Patch validation checks successfully completed on home /u01/app/oracle/product/19.0/db_1


Verifying SQL patch applicability on home /u01/app/oracle/product/19.0/db_1
SQL patch applicability verified successfully on home /u01/app/oracle/product/19.0/db_1


Preparing to bring down database service on home /u01/app/oracle/product/19.0/db_1
Successfully prepared home /u01/app/oracle/product/19.0/db_1 to bring down database service


Performing prepatch operations on CRS - bringing down CRS service on home /u01/app/19.0/grid
Prepatch operation log file location: 
/u01/app/grid/crsdata/jbsbdb2/crsconfig/crs_prepatch_apply_inplace_jbsbdb2_2021-04-28_02-54-34PM.log
CRS service brought down successfully on home /u01/app/19.0/grid


Performing prepatch operation on home /u01/app/oracle/product/19.0/db_1
Perpatch operation completed successfully on home /u01/app/oracle/product/19.0/db_1


Start applying binary patch on home /u01/app/oracle/product/19.0/db_1
Failed while applying binary patches on home /u01/app/oracle/product/19.0/db_1

Execution of [OPatchAutoBinaryAction] patch action failed, check log for more details. Failures:
Patch Target : jbsbdb2->/u01/app/oracle/product/19.0/db_1 Type[rac]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u01/app/oracle/product/19.0/db_1, host: jbsbdb2.
Command failed:  /u01/app/oracle/product/19.0/db_1/OPatch/opatchauto 
 apply /u01/soft/32545008 -oh /u01/app/oracle/product/19.0/db_1 -target_type rac_database
 -binary -invPtrLoc /u01/app/19.0/grid/oraInst.loc -jre /u01/app/19.0/grid/OPatch/jre -persistresult
/u01/app/oracle/product/19.0/db_1/opatchautocfg/db/sessioninfo/sessionresult_jbsbdb2_rac_2.ser -analyzedresult
 /u01/app/oracle/product/19.0/db_1/opatchautocfg/db/sessioninfo/sessionresult_analyze_jbsbdb2_rac_2.ser
Command failure output: 
==Following patches FAILED in apply:

Patch: /u01/soft/32545008/32545013
Log: /u01/app/oracle/product/19.0/db_1/cfgtoollogs/opatchauto/core/opatch/opatch2021-04-28_15-00-49PM_1.log
Reason: Failed during Patching: oracle.opatch.opatchsdk.OPatchException: 
ApplySession failed in system modification phase... 
'ApplySession::apply failed: java.io.IOException: oracle.sysman.oui.patch.PatchException: 
java.io.FileNotFoundException: /u01/app/oraInventory/ContentsXML/oui-patch.xml (Permission denied)' 

After fixing the cause of failure Run opatchauto resume

]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.

OPatchauto session completed at Wed Apr 28 15:05:29 2021
Time taken to complete the session 13 minutes, 0 second

 opatchauto failed with error code 42

Node 1 patched cleanly, but applying the patch on node 2 failed with: ApplySession::apply failed: java.io.IOException: oracle.sysman.oui.patch.PatchException:
java.io.FileNotFoundException: /u01/app/oraInventory/ContentsXML/oui-patch.xml (Permission denied). A MOS search turned up "opatchauto apply Results java.io.FileNotFoundException: /ContentsXML/oui-patch.xml (Permission denied) Error in Non-OUI Nodes (Doc ID 2582139.1)", which confirms this is bug 29859410, fixed in release 20.1. Check the oui-patch.xml on this system:

[root@jbsbdb2 soft]# ls -l /u01/app/oraInventory/ContentsXML/oui-patch.xml
-rw-r----- 1 grid oinstall 174 Apr 28 14:04 /u01/app/oraInventory/ContentsXML/oui-patch.xml

The file permissions were indeed wrong, so grant the group write access:

[root@jbsbdb2 soft]# chmod 660 /u01/app/oraInventory/ContentsXML/oui-patch.xml
[root@jbsbdb2 soft]# ls -l /u01/app/oraInventory/ContentsXML/oui-patch.xml
-rw-rw---- 1 grid oinstall 174 Apr 28 14:04 /u01/app/oraInventory/ContentsXML/oui-patch.xml
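The same check can be run as a pre-flight step on every node before launching opatchauto. A minimal sketch, run as root; fix_oui_patch_perms is an illustrative helper (not an Oracle-supplied tool), and the inventory path is the one used on this cluster:

```shell
# Minimal sketch, run as root on each node before opatchauto.
# fix_oui_patch_perms is an illustrative helper: it makes sure oui-patch.xml
# is group-writable, working around bug 29859410 (see MOS Doc ID 2582139.1).
fix_oui_patch_perms() {
    f="$1"                          # path to oui-patch.xml
    [ -f "$f" ] || return 0         # nothing to do if the file is absent
    mode=$(stat -c '%a' "$f")       # current octal mode, e.g. 640
    if [ "$mode" != "660" ]; then
        echo "fixing $f (was $mode)"
        chmod 660 "$f"              # the oinstall group needs write access
    fi
}

# Inventory path as used on this cluster; adjust for your environment:
fix_oui_patch_perms /u01/app/oraInventory/ContentsXML/oui-patch.xml
```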

With the oui-patch.xml permissions corrected, attempt to roll back the failed apply:

[root@jbsbdb2 soft]# opatchauto rollback /u01/soft/32545008

OPatchauto session is initiated at Wed Apr 28 15:13:58 2021

System initialization log file is /u01/app/19.0/grid/cfgtoollogs/opatchautodb/systemconfig2021-04-28_03-14-00PM.log.

Session log file is /u01/app/19.0/grid/cfgtoollogs/opatchauto/opatchauto2021-04-28_03-14-28PM.log
The id for this session is KFYI

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19.0/db_1

Executing OPatch prereq operations to verify patch applicability on home /u01/app/19.0/grid
Patch applicability verified successfully on home /u01/app/19.0/grid

Patch applicability verification failed on home /u01/app/oracle/product/19.0/db_1

Execution of [OPatchAutoBinaryAction] patch action failed, check log for more details. Failures:
Patch Target : jbsbdb2->/u01/app/oracle/product/19.0/db_1 Type[rac]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u01/app/oracle/product/19.0/db_1, host: jbsbdb2.
Command failed:  /u01/app/oracle/product/19.0/db_1/OPatch/opatchauto  rollback
 /u01/soft/32545008 -oh /u01/app/oracle/product/19.0/db_1 
-target_type rac_database -binary -invPtrLoc /u01/app/19.0/grid/oraInst.loc 
-jre /u01/app/19.0/grid/OPatch/jre -persistresult 
/u01/app/oracle/product/19.0/db_1/opatchautocfg/db/sessioninfo/sessionresult_analyze_jbsbdb2_rac_2.ser
 -analyze -online -prepare_home
Command failure output: 
==Following patches FAILED in analysis for rollback:

Patch: /u01/soft/32545008/32579761
Log: 
Reason: Failed during listing in Analysis: java.lang.Exception: oracle.opatch.opatchsdk.OPatchException: 
Unable to create patchObject
Possible causes are:
   ORACLE_HOME/inventory/oneoffs/32545013 is corrupted. PatchObject constructor: 
Input file "/u01/app/oracle/product/19.0/db_1/inventory/oneoffs/32545013/etc/config/actions" 
or "/u01/app/oracle/product/19.0/db_1/inventory/oneoffs/32545013/etc/config/inventory" does not exist.


Patch: /u01/soft/32545008/32545013
Log: 
Reason: Failed during listing in Analysis: java.lang.Exception: oracle.opatch.opatchsdk.OPatchException: 
Unable to create patchObject
Possible causes are:
   ORACLE_HOME/inventory/oneoffs/32545013 is corrupted. PatchObject constructor:
 Input file "/u01/app/oracle/product/19.0/db_1/inventory/oneoffs/32545013/etc/config/actions" 
or "/u01/app/oracle/product/19.0/db_1/inventory/oneoffs/32545013/etc/config/inventory" does not exist. 

After fixing the cause of failure Run opatchauto resume

]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.

OPatchauto session completed at Wed Apr 28 15:14:50 2021
Time taken to complete the session 0 minute, 53 seconds

 opatchauto failed with error code 42

A check confirmed that /u01/app/oracle/product/19.0/db_1/inventory/oneoffs/32545013 was indeed missing on node 2. The directory was tarred over from node 1, and the rollback was retried.
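The "tar it over" step can be sketched as follows. copy_oneoff is an illustrative helper, not part of OPatch; a tar pipe preserves modes and the directory tree in one step, and across nodes the first tar would run over ssh (hostname and paths here are this cluster's):

```shell
# Minimal sketch of copying a missing inventory oneoff directory.
# Across nodes the source side runs over ssh, e.g.:
#   ssh jbsbdb1 "tar -C /u01/app/oracle/product/19.0/db_1/inventory/oneoffs -cf - 32545013" \
#     | tar -C /u01/app/oracle/product/19.0/db_1/inventory/oneoffs -xpf -
copy_oneoff() {
    src_base="$1"   # oneoffs directory on the source side
    dst_base="$2"   # oneoffs directory on the destination side
    patch_id="$3"   # e.g. 32545013
    # -p on extraction preserves the permissions recorded in the archive
    tar -C "$src_base" -cf - "$patch_id" | tar -C "$dst_base" -xpf -
}
```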

[root@jbsbdb2 ~]# /u01/app/oracle/product/19.0/db_1/OPatch/opatchauto rollback /u01/soft/32545008 -oh 
>/u01/app/oracle/product/19.0/db_1

OPatchauto session is initiated at Wed Apr 28 16:12:33 2021

System initialization log file is 
/u01/app/oracle/product/19.0/db_1/cfgtoollogs/opatchautodb/systemconfig2021-04-28_04-12-36PM.log.

Session log file is /u01/app/oracle/product/19.0/db_1/cfgtoollogs/opatchauto/opatchauto2021-04-28_04-12-53PM.log
The id for this session is JRS3

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19.0/db_1
Patch applicability verified successfully on home /u01/app/oracle/product/19.0/db_1


Executing patch validation checks on home /u01/app/oracle/product/19.0/db_1
Patch validation checks successfully completed on home /u01/app/oracle/product/19.0/db_1


Verifying SQL patch applicability on home /u01/app/oracle/product/19.0/db_1
SQL patch applicability verified successfully on home /u01/app/oracle/product/19.0/db_1


Preparing to bring down database service on home /u01/app/oracle/product/19.0/db_1
Successfully prepared home /u01/app/oracle/product/19.0/db_1 to bring down database service


Bringing down database service on home /u01/app/oracle/product/19.0/db_1
Following database(s) and/or service(s) are stopped and will be restarted later during the session: racdb
Database service successfully brought down on home /u01/app/oracle/product/19.0/db_1


Performing prepatch operation on home /u01/app/oracle/product/19.0/db_1
Perpatch operation completed successfully on home /u01/app/oracle/product/19.0/db_1


Start rolling back binary patch on home /u01/app/oracle/product/19.0/db_1
Binary patch rolled back successfully on home /u01/app/oracle/product/19.0/db_1


Performing postpatch operation on home /u01/app/oracle/product/19.0/db_1
Postpatch operation completed successfully on home /u01/app/oracle/product/19.0/db_1


Starting database service on home /u01/app/oracle/product/19.0/db_1
Database service successfully started on home /u01/app/oracle/product/19.0/db_1


Preparing home /u01/app/oracle/product/19.0/db_1 after database service restarted
No step execution required.........
 

Trying to roll back SQL patch on home /u01/app/oracle/product/19.0/db_1
SQL patch rolled back successfully on home /u01/app/oracle/product/19.0/db_1

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:jbsbdb2
RAC Home:/u01/app/oracle/product/19.0/db_1
Version:19.0.0.0.0
Summary:

==Following patches were SKIPPED:

Patch: /u01/soft/32545008/32576499
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /u01/soft/32545008/32585572
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /u01/soft/32545008/32584670
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /u01/soft/32545008/32579761
Reason: This Patch does not exist in the home, it cannot be rolled back.


==Following patches were SUCCESSFULLY rolled back:

Patch: /u01/soft/32545008/32545013
Log: /u01/app/oracle/product/19.0/db_1/cfgtoollogs/opatchauto/core/opatch/opatch2021-04-28_16-15-05PM_1.log

The rollback succeeded, and a fresh opatchauto apply /u01/soft/32545008 then patched node 2 successfully.
3. Verifying the successful RU installation

[grid@jbsbdb2 ~]$ opatch lspatches
32585572;DBWLM RELEASE UPDATE 19.0.0.0.0 (32585572)
32584670;TOMCAT RELEASE UPDATE 19.0.0.0.0 (32584670)
32579761;OCW RELEASE UPDATE 19.11.0.0.0 (32579761)
32576499;ACFS RELEASE UPDATE 19.11.0.0.0 (32576499)
32545013;Database Release Update : 19.11.0.0.210420 (32545013)

OPatch succeeded.

[oracle@jbsbdb2 ~]$ opatch lspatches
32579761;OCW RELEASE UPDATE 19.11.0.0.0 (32579761)
32545013;Database Release Update : 19.11.0.0.210420 (32545013)

OPatch succeeded.
[grid@jbsbdb2 ~]$ crsctl query crs softwarepatch
Oracle Clusterware patch level on node jbsbdb2 is [3331580692].
[grid@jbsbdb2 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. 
The cluster upgrade state is [NORMAL]. The cluster active patch level is [3331580692].

SQL> select PATCH_ID,DESCRIPTION from dba_registry_sqlpatch;

  PATCH_ID
----------
DESCRIPTION
--------------------------------------------------------------------------------
  29517242
Database Release Update : 19.3.0.0.190416 (29517242)

  32545013
Database Release Update : 19.11.0.0.210420 (32545013)

The issue was actually known before patching began: oui-patch.xml had been carelessly scp'd from node 1 to node 2 without verifying its permissions afterwards (it arrived as 640), and that one oversight caused all of the trouble that followed.