Changing the number and size of redo log groups on a Data Guard primary

In a Data Guard environment configured for real-time apply, I needed to increase the size and number of redo log groups on the primary. On paper this is simple, and the plan was: add new standby redo logs on the standby and drop the old ones; add new online redo logs on the primary and drop the old ones; add new online redo logs on the standby and drop the old ones; finally, add new standby redo logs on the primary. In practice a few details got in the way, the main lesson being that if log_file_name_convert is not configured, the standby's online redo log files cannot be cleared and therefore cannot be dropped.
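
Before starting, it is worth recording the current redo layout on both databases. A minimal sketch using the standard views (run on the primary and on the standby):

-- Online redo log groups, sizes and status
SELECT group#, thread#, bytes/1024/1024 size_mb, status FROM v$log;

-- Standby redo log groups
SELECT group#, thread#, bytes/1024/1024 size_mb, status FROM v$standby_log;

-- File names of every redo / standby redo member
SELECT group#, type, member FROM v$logfile ORDER BY group#;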

Standby redo log management (on the standby)
Adding standby redo logs

SQL> alter database add standby logfile group 8 ('/data/oradata/wasudb/st_redo08.log') size 200M; 
alter database add standby logfile group 8 ('/data/oradata/wasudb/st_redo08.log') size 200M
*
ERROR at line 1:
ORA-01156: recovery in progress may need access to files

SQL> alter database recover managed standby database cancel;

Database altered.

SQL> alter database add standby logfile group 9 ('/data/oradata/wasudb/st_redo09.log') size 200M; 

Database altered.

SQL> alter database add standby logfile group 10 ('/data/oradata/wasudb/st_redo10.log') size 200M; 

Database altered.

SQL> alter database add standby logfile group 11 ('/data/oradata/wasudb/st_redo11.log') size 200M; 

Database altered.

SQL> alter database add standby logfile group 12 ('/data/oradata/wasudb/st_redo12.log') size 200M; 

Database altered.
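
A quick sanity check after the adds; a minimal sketch run on the standby:

SELECT group#, bytes/1024/1024 size_mb, status
  FROM v$standby_log
 ORDER BY group#;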

Dropping the old standby redo logs

SQL> alter database drop logfile group 4;
alter database drop logfile group 4
*
ERROR at line 1:
ORA-00261: log 4 of thread 1 is being archived or modified
ORA-00312: online log 4 thread 1: '/data/oradata/wasudb/st_redo04.log'


SQL> alter database drop logfile group 5;

Database altered.

SQL> alter database drop logfile group 6;

Database altered.

SQL> alter database drop logfile group 7;

Database altered.

-- Switch the log file several times on the primary until group 4 is no longer in use, then retry the drop

SQL>  alter database drop logfile group 4;

Database altered.
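
The ORA-00261 above simply meant the group was still in use. A minimal sketch of the check used between switches (run on the standby); a standby redo group can be dropped once its status is UNASSIGNED:

SELECT group#, sequence#, status FROM v$standby_log;

-- if the group is still ACTIVE, run ALTER SYSTEM SWITCH LOGFILE on the primary and re-check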

Online redo log management on the primary
Adding redo logs

SQL> alter database add logfile group 21 ('/data/oradata/wasudb/redo21.log') size 200m;

Database altered.

SQL> alter database add logfile group 22 ('/data/oradata/wasudb/redo22.log') size 200m; 

Database altered.

SQL> alter database add logfile group 23 ('/data/oradata/wasudb/redo23.log') size 200m; 

Database altered.

SQL> alter database add logfile group 24 ('/data/oradata/wasudb/redo24.log') size 200m;

Database altered.

Dropping the old redo logs

SQL> alter database drop logfile group 1;

Database altered.

SQL> alter database drop logfile group 2;

Database altered.

SQL> alter database drop logfile group 3;

Database altered.
-- Several log switches are needed here; a redo group can only be dropped once its status is INACTIVE
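
A minimal sketch of the check used before each drop (standard commands; the checkpoint only speeds up the transition from ACTIVE to INACTIVE):

SELECT group#, sequence#, status FROM v$log;

ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;

-- repeat the query and drop the group only when its status is INACTIVE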

Online redo log management on the standby
Adding redo logs

SQL> alter database add logfile group 21 ('/data/oradata/wasudb/redo21.log') size 200m; 
alter database add logfile group 21 ('/data/oradata/wasudb/redo21.log') size 200m
*
ERROR at line 1:
ORA-01275: Operation ADD LOGFILE is not allowed if standby file management is automatic.


SQL>  alter system set standby_file_management=manual;

System altered.

SQL> alter database add logfile group 21 ('/data/oradata/wasudb/redo21.log') size 200m;

Database altered.

SQL> alter database add logfile group 22 ('/data/oradata/wasudb/redo22.log') size 200m; 

Database altered.

SQL> alter database add logfile group 23 ('/data/oradata/wasudb/redo23.log') size 200m; 

Database altered.

SQL> alter database add logfile group 24 ('/data/oradata/wasudb/redo24.log') size 200m;

Database altered.

Dropping the old redo logs

SQL> alter database drop logfile group 1;
alter database drop logfile group 1
*
ERROR at line 1:
ORA-01624: log 1 needed for crash recovery of instance wasudb (thread 1)
ORA-00312: online log 1 thread 1: '/data/oradata/wasudb/redo01.log'

SQL>  ALTER DATABASE CLEAR LOGFILE GROUP 1;
 ALTER DATABASE CLEAR LOGFILE GROUP 1
*
ERROR at line 1:
ORA-19527: physical standby redo log must be renamed
ORA-00312: online log 1 thread 1: '/data/oradata/wasudb/redo01.log'

SQL> show parameter NAME_CONVERT

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_file_name_convert                 string
log_file_name_convert                string

SQL> alter system set log_file_name_convert='/data/oradata/wasudb','/data/oradata/wasudb' scope=spfile;

System altered.

SQL>  alter system set db_file_name_convert='/data/oradata/wasudb','/data/oradata/wasudb' scope=spfile;

System altered.

-- Restart the database so the spfile changes take effect

SQL> show parameter file_name_convert;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_file_name_convert                 string      /data/oradata/wasudb, /data/or
                                                 adata/wasudb
log_file_name_convert                string      /data/oradata/wasudb, /data/or
                                                 adata/wasudb

SQL> alter database drop logfile group 1;
alter database drop logfile group 1
*
ERROR at line 1:
ORA-01624: log 1 needed for crash recovery of instance wasudb (thread 1)
ORA-00312: online log 1 thread 1: '/data/oradata/wasudb/redo01.log'

SQL> ALTER DATABASE CLEAR LOGFILE GROUP 1;

Database altered.

SQL> alter database drop logfile group 1;

Database altered.

SQL> ALTER DATABASE CLEAR LOGFILE GROUP 2;

Database altered.

SQL> alter database drop logfile group 2;

Database altered.

SQL> ALTER DATABASE CLEAR LOGFILE GROUP 3;

Database altered.

SQL> alter database drop logfile group 3;

Database altered.

Standby redo log management on the primary
Adding standby redo logs

SQL> alter database add logfile group 21 ('/data/oradata/wasudb/redo21.log') size 200m;

Database altered.

SQL> alter database add logfile group 22 ('/data/oradata/wasudb/redo22.log') size 200m; 

Database altered.

SQL> alter database add logfile group 23 ('/data/oradata/wasudb/redo23.log') size 200m; 

Database altered.

SQL> alter database add logfile group 24 ('/data/oradata/wasudb/redo24.log') size 200m;

Database altered.
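
Standby redo logs are created with the ADD STANDBY LOGFILE form; on the primary they stay idle until a role switch. A minimal sketch of that form, with hypothetical group numbers 25-28 and file names chosen so they do not collide with the online groups 21-24 added earlier:

-- group numbers and file names below are illustrative only
alter database add standby logfile group 25 ('/data/oradata/wasudb/st_redo25.log') size 200m;
alter database add standby logfile group 26 ('/data/oradata/wasudb/st_redo26.log') size 200m;
alter database add standby logfile group 27 ('/data/oradata/wasudb/st_redo27.log') size 200m;
alter database add standby logfile group 28 ('/data/oradata/wasudb/st_redo28.log') size 200m;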

Dropping the old standby redo logs

SQL> alter database drop logfile group 4;

Database altered.

SQL> alter database drop logfile group 5;

Database altered.

SQL> alter database drop logfile group 6;

Database altered.

SQL> alter database drop logfile group 7;

Database altered.

Final steps

SQL> alter system set standby_file_management=auto;

System altered.

SQL> alter database recover managed standby database using current logfile disconnect from session;

Database altered.
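
With standby_file_management back to AUTO and managed recovery restarted, a quick check that the MRP is applying again. A minimal sketch, run on the standby:

-- MRP should show APPLYING_LOG / WAIT_FOR_LOG with an advancing sequence#
SELECT process, status, sequence# FROM v$managed_standby WHERE process LIKE 'MRP%';

-- Most recently applied archived log
SELECT MAX(sequence#) last_applied FROM v$archived_log WHERE applied = 'YES';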

This completes the change of redo size and number of redo groups in the Data Guard environment. The main correction to my earlier understanding concerns log_file_name_convert: I used to think that if the redo directories were identical on primary and standby, the parameter did not need to be set. A search on MOS showed that, starting with 10.2, the MRP tries to clear the standby's online redo logs when it starts so that a future switchover is faster; if the standby has no online redo logs, or log_file_name_convert is misconfigured so that the clearing cannot run, the database raises ORA-19527, typically right at MRP startup. That a manual CLEAR LOGFILE hits the same error, as happened here, does feel like overkill on Oracle's part. The takeaway: even when the redo and datafile paths are identical on primary and standby, set log_file_name_convert and db_file_name_convert anyway; it makes the Data Guard configuration more robust.
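
Related to the ORA-19527 above: the clear step only works if the standby actually has online redo members registered. A minimal sketch to list them on the standby:

SELECT group#, type, member FROM v$logfile ORDER BY group#, type;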

Testing EHCC in a Data Guard setup between Exadata and non-Exadata

Exadata (xd below) is becoming more common, but budgets often allow only one machine, so for disaster recovery some sites pair the Exadata with a non-Exadata server via Data Guard. EHCC, however, is officially supported only on Exadata, so what happens when the DG standby is not Exadata? The tests below lead to these conclusions: Exadata and non-Exadata can form a Data Guard pair, but EHCC cannot be used effectively on the Exadata primary in this configuration; under these conditions Oracle's built-in compression actually achieves better ratios. My guess for the poor EHCC ratios is that the Exadata detects a non-Exadata standby and silently downgrades the EHCC compression, which is why EHCC ends up compressing even worse than OLTP compression (sacrificing Exadata performance to keep the data safe and usable on the standby; in that respect the design seems quite thorough).
Basic information on the Exadata primary

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE    11.2.0.2.0      Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production

SQL> show parameter clu;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cluster_database                     boolean     TRUE
cluster_database_instances           integer     2
cluster_interconnects                string

SQL> SELECT OPEN_MODE, DATABASE_ROLE ,NAME FROM V$DATABASE;

OPEN_MODE            DATABASE_ROLE    NAME
-------------------- ---------------- ---------
READ WRITE           PRIMARY          xxxxxx

SQL> !uname -a
Linux dm01db02 2.6.18-194.3.1.0.4.el5 #1 SMP Sat Feb 19 03:38:37 EST 2011 x86_64 x86_64 x86_64 GNU/Linux

Creating test tables on the Exadata primary

SQL> create table t_FF_c compress as select  * from dba_objects;

Table created.

SQL> create table t_FF_c_o compress for oltp  as select  * from dba_objects;

Table created.

SQL> create table t_FF_q_l compress for query low as select  * from dba_objects;

Table created.

SQL> create table t_FF_q_h compress for query high  as select  * from dba_objects;

Table created.

SQL> create table t_FF_a_l compress for archive low  as select  * from dba_objects;

Table created.

SQL> create table t_FF_a_h compress for archive high as select * from dba_objects;

Table created.

SQL> create table t_ff as select * from dba_objects;

Table created.

Checking the test tables on the Exadata primary

SQL> select s.owner,segment_name,s.bytes/1024/1024 t_size,compress_for 
from dba_segments s,dba_tables t
where s.owner=t.owner and t.table_name=s.segment_name  and t.table_name like 'T_FF%';  2    3  

OWNER                          SEGMENT_NAME                       T_SIZE COMPRESS_FOR
------------------------------ ------------------------------ ---------- ------------
SYS                            T_FF                                   11
SYS                            T_FF_A_H                               10 ARCHIVE HIGH
SYS                            T_FF_A_L                               10 ARCHIVE LOW
SYS                            T_FF_Q_H                               10 QUERY HIGH
SYS                            T_FF_Q_L                               10 QUERY LOW
SYS                            T_FF_C_O                                4 OLTP
SYS                            T_FF_C                                  4 BASIC

This shows that, on this Exadata with a non-Exadata DG standby, the EHCC compression ratios are far lower than expected.
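
To see what compression the rows really ended up with (rather than what the table definition says), DBMS_COMPRESSION can be asked about a sample row. A hedged sketch, assuming the 11.2 DBMS_COMPRESSION package; compare the returned number against the DBMS_COMPRESSION.COMP_* constants (e.g. COMP_FOR_ARCHIVE_HIGH):

SELECT DBMS_COMPRESSION.GET_COMPRESSION_TYPE('SYS', 'T_FF_A_H', rowid) AS comp_type
  FROM t_ff_a_h
 WHERE ROWNUM = 1;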

Basic information on the non-Exadata standby

SQL> SELECT OPEN_MODE, DATABASE_ROLE ,NAME FROM V$DATABASE;

OPEN_MODE            DATABASE_ROLE    NAME
-------------------- ---------------- ---------
READ ONLY WITH APPLY PHYSICAL STANDBY xxxxxx

SQL> show parameter clu;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cluster_database                     boolean     FALSE
cluster_database_instances           integer     1
cluster_interconnects                string

SQL> !uname -a
Linux oradg 2.6.18-238.el5xen #1 SMP Sun Dec 19 14:42:02 EST 2010 x86_64 x86_64 x86_64 GNU/Linux

Querying the test tables on the non-Exadata standby

SQL> select s.owner,segment_name,s.bytes/1024/1024 t_size,compress_for 
from dba_segments s,dba_tables t
where s.owner=t.owner and t.table_name=s.segment_name  and t.table_name like 'T_FF%'; 

OWNER                          SEGMENT_NAME                       T_SIZE COMPRESS_FOR
------------------------------ ------------------------------ ---------- ------------
SYS                            T_FF                                   11
SYS                            T_FF_A_H                               10 ARCHIVE HIGH
SYS                            T_FF_A_L                               10 ARCHIVE LOW
SYS                            T_FF_Q_H                               10 QUERY HIGH
SYS                            T_FF_Q_L                               10 QUERY LOW
SYS                            T_FF_C_O                                4 OLTP
SYS                            T_FF_C                                  4 BASIC

SQL> SELECT COUNT(*) FROM T_FF_Q_L;

  COUNT(*)
----------
     94709

SQL> SELECT COUNT(*) FROM T_FF_Q_H;

  COUNT(*)
----------
     94710

SQL> SELECT COUNT(*) FROM T_FF_C_O;

  COUNT(*)
----------
     94708

SQL> SELECT COUNT(*) FROM T_FF_C;

  COUNT(*)
----------
     94707

SQL> SELECT COUNT(*) FROM T_FF_A_L;

  COUNT(*)
----------
     94711

SQL> SELECT COUNT(*) FROM T_FF_A_H;

  COUNT(*)
----------
     94712

SQL> select count(*) from t_FF;

  COUNT(*)
----------
    94713

These queries show that the non-Exadata standby can query the EHCC tables created on the Exadata primary without any problem, and that the segment sizes match (as expected for a physical standby).

Testing DML against EHCC tables in the Exadata / non-Exadata Data Guard pair

--Exadata primary
SQL> update t_ff_a_h set owner='www.orasos.com';

94712 rows updated.

SQL> commit;

Commit complete.

SQL>  select s.owner,segment_name,s.bytes/1024/1024 t_size,compress_for 
from dba_segments s,dba_tables t
where s.owner=t.owner and t.table_name=s.segment_name  and t.table_name like 'T_FF%';  2    3  

OWNER                          SEGMENT_NAME                       T_SIZE COMPRESS_FOR
------------------------------ ------------------------------ ---------- ------------
SYS                            T_FF                                   11
SYS                            T_FF_A_H                               11 ARCHIVE HIGH
SYS                            T_FF_A_L                               10 ARCHIVE LOW
SYS                            T_FF_Q_H                               10 QUERY HIGH
SYS                            T_FF_Q_L                               10 QUERY LOW
SYS                            T_FF_C_O                                4 OLTP
SYS                            T_FF_C                                  4 BASIC

7 rows selected.

SQL> alter system switch logfile;

System altered.

--Non-Exadata standby
SQL> select s.owner,segment_name,s.bytes/1024/1024 t_size,compress_for 
from dba_segments s,dba_tables t
where s.owner=t.owner and t.table_name=s.segment_name  and t.table_name like 'T_FF%';  

OWNER                          SEGMENT_NAME                       T_SIZE COMPRESS_FOR
------------------------------ ------------------------------ ---------- ------------
SYS                            T_FF                                   11
SYS                            T_FF_A_H                               11 ARCHIVE HIGH
SYS                            T_FF_A_L                               10 ARCHIVE LOW
SYS                            T_FF_Q_H                               10 QUERY HIGH
SYS                            T_FF_Q_L                               10 QUERY LOW
SYS                            T_FF_C_O                                4 OLTP
SYS                            T_FF_C                                  4 BASIC

This confirms that DML works in a Data Guard configuration built between Exadata and non-Exadata.

Testing direct-path (APPEND) inserts into EHCC tables

--Prepare three empty test tables on the Exadata primary (two new EHCC tables plus the truncated t_FF)
SQL> create table t_FF_a_l_1 compress for archive low as select * from t_FF  where 1=0;

Table created.

SQL> create table t_FF_a_h_1 compress for archive high  as select * from t_FF  where 1=0;

Table created.

SQL> truncate table t_FF;

Table truncated.

--Insert the data (each statement was run 5 times per table)
SQL> insert /*+ APPEND */ into t_FF_a_l_1 select * from dba_objects; 

94714 rows created.

SQL> commit;

Commit complete.


SQL> insert /*+ APPEND */ into t_FF_a_h_1 select * from dba_objects; 

94714 rows created.

SQL> commit;

Commit complete.

SQL> insert /*+ APPEND */ into t_FF select * from dba_objects; 

94714 rows created.

SQL> commit;

Commit complete.

--Row counts after the five rounds of inserts
SQL> select count(*) from t_FF_a_l_1;

  COUNT(*)
----------
    473570

SQL> select count(*) from t_FF_a_h_1;

  COUNT(*)
----------
    473570

SQL> select count(*) from t_FF;
  COUNT(*)
----------
    473570

--Segment sizes on the Exadata primary
SQL> select s.owner,segment_name,s.bytes/1024/1024 t_size,compress_for 
from dba_segments s,dba_tables t
where s.owner=t.owner and t.table_name=s.segment_name  and t.table_name like 'T_FF%'; 

OWNER                          SEGMENT_NAME                       T_SIZE COMPRESS_FOR
------------------------------ ------------------------------ ---------- ------------
SYS                            T_FF                                   52
SYS                            T_FF_A_H                               11 ARCHIVE HIGH
SYS                            T_FF_A_L                               10 ARCHIVE LOW
SYS                            T_FF_Q_H                               10 QUERY HIGH
SYS                            T_FF_Q_L                               10 QUERY LOW
SYS                            T_FF_C_O                                4 OLTP
SYS                            T_FF_C                                  4 BASIC
SYS                            T_FF_A_H_1                             47 ARCHIVE HIGH
SYS                            T_FF_A_L_1                             47 ARCHIVE LOW

--Segment sizes on the non-Exadata standby
SQL> select s.owner,segment_name,s.bytes/1024/1024 t_size,compress_for 
from dba_segments s,dba_tables t
where s.owner=t.owner and t.table_name=s.segment_name  and t.table_name like 'T_FF%'; 

OWNER                          SEGMENT_NAME                       T_SIZE COMPRESS_FOR
------------------------------ ------------------------------ ---------- ------------
SYS                            T_FF                                   52
SYS                            T_FF_A_H                               11 ARCHIVE HIGH
SYS                            T_FF_A_L                               10 ARCHIVE LOW
SYS                            T_FF_Q_H                               10 QUERY HIGH
SYS                            T_FF_Q_L                               10 QUERY LOW
SYS                            T_FF_C_O                                4 OLTP
SYS                            T_FF_C                                  4 BASIC
SYS                            T_FF_A_H_1                             47 ARCHIVE HIGH
SYS                            T_FF_A_L_1                             47 ARCHIVE LOW

Overall conclusions
1. Exadata can form a Data Guard disaster-recovery pair with a non-Exadata machine (the standby's lack of EHCC support does not prevent the configuration from being built).
2. In such a pair EHCC is heavily degraded: the compressed sizes end up close to the uncompressed size and well below what OLTP compression achieves.
3. SELECT, DML, and APPEND-hinted inserts against the EHCC tables all work on the non-Exadata standby, because the EHCC data was already downgraded on the Exadata side.

A new shell script for deleting applied Data Guard archived logs

I have written before about deleting Data Guard archived logs, but that method was neither flexible nor simple enough. Below is the latest version of the shell script, as deployed at a customer site, for deleting applied archived logs on the standby.

#!/bin/sh
source ~/.bash_profile

# Build a removal script from the "Media Recovery Log" lines in the standby alert log:
# every archived log the MRP has applied becomes an "rm <file>" command.
grep "Media Recovery Log" $ORACLE_BASE/admin/$ORACLE_SID/bdump/alert_${ORACLE_SID}.log | \
  awk '{print $4}' | sed -e 's/^/rm /' > /tmp/rmarchlog.sh
chmod +x /tmp/rmarchlog.sh

# Delete the applied archived logs
/tmp/rmarchlog.sh

# Roll the alert log over so already-processed lines are not picked up next time
cd $ORACLE_BASE/admin/$ORACLE_SID/bdump
cat alert_${ORACLE_SID}.log >> alert_${ORACLE_SID}.log.bak
echo '' > alert_${ORACLE_SID}.log

rm -f /tmp/rmarchlog.sh

# Let RMAN forget the files that no longer exist on disk
$ORACLE_HOME/bin/rman target / <<XIFENFEI
crosscheck archivelog all;
delete expired archivelog all;
YES
exit;
XIFENFEI

The script uses the "Media Recovery Log" lines written to the standby alert log to decide which archived logs to delete, so only logs that have actually been applied are removed, while logs that have not yet been applied are kept.
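
The same information can be cross-checked from the standby itself; a minimal sketch listing the applied archived logs as the database sees them:

SELECT sequence#, name, completion_time
  FROM v$archived_log
 WHERE applied = 'YES'
 ORDER BY sequence#;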

A shell script to monitor whether the Data Guard standby is applying logs

I have been looking for a way to monitor Data Guard that is both convenient and practical. A few options came to mind:
1. Compare the alert logs of the primary and the standby, but that requires SSH between the hosts so one node can read the other's alert log.
2. Query v$archived_log (or related views) on both sides and compare, but that means connecting to the other database and therefore storing its credentials.
3. Query only the standby's v$archived_log and make a rough judgement about whether log apply is working.
I chose option 3. Most Data Guard monitoring exists so that someone notices apply problems early and can intervene, which reduces the chance of having to resolve a gap or rebuild the standby. This check catches the large majority of archive-apply problems; acting on it promptly saves a lot of work and can even save your data.

#!/bin/bash
source ~/.bash_profile

# Alert threshold: maximum acceptable minutes since the last applied archived log
export CHECK_M=120
export RESULT_FILE=/tmp/dg_switch_check.log

# Minutes elapsed since NEXT_TIME of the most recently applied archived log
$ORACLE_HOME/bin/sqlplus -silent "/ as sysdba" <<XFF>/tmp/check_dg.log
set pagesize 0 feedback off verify off heading off echo off
 select ceil((sysdate-next_time)*24*60) "M"
 from v\$archived_log 
 where applied='YES' AND SEQUENCE#=(SELECT MAX(SEQUENCE#) 
 FROM V\$ARCHIVED_LOG WHERE applied='YES');
exit;
XFF
GET_M=`cat /tmp/check_dg.log`
rm /tmp/check_dg.log

# Write a warning to the result file when the threshold is exceeded
if [ ${CHECK_M} -lt ${GET_M} ]; 
then
    echo "check dataguard time:`date`">$RESULT_FILE
    echo "The last archived log was applied $GET_M minutes ago">>$RESULT_FILE
else
    echo ''>$RESULT_FILE
fi

Set CHECK_M according to how often your primary switches logs: it is the allowed gap, in minutes, between the last applied archived log and the current time, and the script flags the standby as abnormal when that gap is exceeded. The result file can then be picked up by whatever alerting channel you use (e-mail, SMS, or anything else your monitoring supports).
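
On 10g and later the same question can be cross-checked from v$dataguard_stats (a hedged sketch; the view does not exist on older releases such as the 9.2 case below):

SELECT name, value, unit
  FROM v$dataguard_stats
 WHERE name IN ('apply lag', 'transport lag');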

MRP terminated because of a bad record in v$archived_log

Version information

Operating system: Linux 4.8 x86
Database version: Oracle 9.2.0.4

Errors in the alert log
The MRP process terminated with ORA-00310/ORA-00334:

…………
Media Recovery Log /u01/oracle/oradata/lunar/arch/1_75.dbf
Media Recovery Log /u01/oracle/oradata/lunar/arch/1_76.dbf
Media Recovery Log /u01/oracle/oradata/lunar/arch/1_77.dbf
Media Recovery Log /u01/oracle/oradata/lunar/arch/1_78.dbf
Media Recovery Log /u01/oracle/oradata/xifenfei/redo02.log
MRP0: Background Media Recovery terminated with error 310
Thu Nov  8 07:44:39 2012
Errors in file /u01/oracle/admin/lunar/bdump/lunar_mrp0_25625.trc:
ORA-00310: archived log contains sequence 85; sequence 79 required
ORA-00334: archived log: '/u01/oracle/oradata/xifenfei/redo02.log'
Recovery interrupted.
MRP0: Background Media Recovery process shutdown

Trace file

*** SESSION ID:(17.13) 2012-11-08 07:24:12.986
Background Managed Standby Recovery process started
*** 2012-11-08 07:24:18.023
Managed Recovery: Active posted.
*** 2012-11-08 07:41:03.171
Media Recovery Log /u01/oracle/oradata/lunar/arch/1_64.dbf
Media Recovery Log /u01/oracle/oradata/lunar/arch/1_65.dbf
Media Recovery Log /u01/oracle/oradata/lunar/arch/1_66.dbf
Media Recovery Log /u01/oracle/oradata/lunar/arch/1_67.dbf
Media Recovery Log /u01/oracle/oradata/lunar/arch/1_68.dbf
Media Recovery Log /u01/oracle/oradata/lunar/arch/1_69.dbf
Media Recovery Log /u01/oracle/oradata/lunar/arch/1_70.dbf
Media Recovery Log /u01/oracle/oradata/lunar/arch/1_71.dbf
Media Recovery Log /u01/oracle/oradata/lunar/arch/1_72.dbf
Media Recovery Log /u01/oracle/oradata/lunar/arch/1_73.dbf
Media Recovery Log /u01/oracle/oradata/lunar/arch/1_74.dbf
Media Recovery Log /u01/oracle/oradata/lunar/arch/1_75.dbf
Media Recovery Log /u01/oracle/oradata/lunar/arch/1_76.dbf
*** 2012-11-08 07:41:39.083
Media Recovery Log /u01/oracle/oradata/lunar/arch/1_77.dbf
*** 2012-11-08 07:44:39.171
Media Recovery Log /u01/oracle/oradata/lunar/arch/1_78.dbf
Media Recovery Log /u01/oracle/oradata/xifenfei/redo02.log
Background Media Recovery terminated with error 310
ORA-00310: archived log contains sequence 85; sequence 79 required
ORA-00334: archived log: '/u01/oracle/oradata/xifenfei/redo02.log'
----- Redo read statistics for thread 1 -----
Read rate (ASYNC): 21990Kb in 1221.38s => 0.02 Mb/sec
Longest record: 1Kb, moves: 0/92129 (0%)
Change moves: 34869/213735 (16%), moved: 2Mb
----------------------------------------------
*** 2012-11-08 07:44:39.404
Managed Recovery: Not Active posted.
Background Media Recovery process shutdown
*** 2012-11-08 07:44:39.406

My guess: recovery needs the archived log with sequence 79, but the file registered for that sequence is redo02.log, whose current sequence is 85, which is why the MRP process aborts.

Attempting recovery with RMAN

RMAN> recover database ;

Starting recover at 08-NOV-12
using target database controlfile instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=8 devtype=DISK

starting media recovery

unable to find archive log
archive log thread=1 sequence=79
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 11/08/2012 08:12:48
RMAN-06054: media recovery requesting unknown log: thread 1 scn 12286829427051

This shows that RMAN cannot recover past the problem either.

Verifying the guess

SQL> select name from v$archived_log where SEQUENCE#=79;

NAME
-------------------------------------------------------------------
/u01/oracle/oradata/lunar/arch/1_79.dbf
/u01/oracle/oradata/xifenfei/redo02.log
/u01/oracle/oradata/lunar/arch/1_79.dbf

SQL> select dest_id,name from v$archived_log where SEQUENCE#=79;

   DEST_ID  NAME
----------  ---------------------------------------------------
         2  /u01/oracle/oradata/lunar/arch/1_79.dbf
         1  /u01/oracle/oradata/xifenfei/redo02.log
         1  /u01/oracle/oradata/lunar/arch/1_79.dbf

SQL> select sequence#,group# from v$log;

 SEQUENCE#     GROUP#
---------- ----------
        86          1
        85          2
        87          3

SQL> select member from v$logfile where group#=2;

MEMBER
---------------------------------------------------------------
/u01/oracle/oradata/xifenfei/redo02.log

The queries above confirm the guess: the online redo log file redo02.log was registered in v$archived_log, and that record caused the failure. The fix is to get the redo log file entry out of the v$archived_log view in the standby controlfile, specifically:
1. If the primary is healthy, simply create a fresh standby controlfile from it.
2. If the primary shows the same problem, first recreate the primary controlfile and then create the standby controlfile from it (this needs a maintenance window).

Fixing the problem

--Query the primary to confirm it is healthy
SQL> select name from v$archived_log where SEQUENCE#=79;

NAME
------------------------------------------------------------
/u01/oracle/oradata/xifenfei/archive/1_79.dbf
lunar

--Recreate the standby controlfile (run on the primary)
###############################################################
Note: if the datafile paths on the primary and standby are not identical,
set db_file_name_convert so the standby paths are converted automatically;
renaming files afterwards with alter database rename file is not recommended.
###############################################################
SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS 
   2 '/u01/oracle/oradata/lunar/control01.ctl' reuse;

Database altered.

--Restart the standby with the new controlfile
SQL> startup nomount;
ORACLE instance started.

Total System Global Area  353441008 bytes
Fixed Size                   451824 bytes
Variable Size             184549376 bytes
Database Buffers          167772160 bytes
Redo Buffers                 667648 bytes

SQL> alter database mount standby database;

Database altered.

--Restart managed recovery
SQL> alter database recover managed standby database disconnect from session;

Database altered.

--Standby alert log
Thu Nov  8 08:28:16 2012
Completed: alter database recover managed standby database di
Thu Nov  8 08:28:20 2012
Restarting dead background process QMN0
QMN0 started with pid=13
Thu Nov  8 08:29:45 2012
Fetching gap sequence for thread 1, gap sequence 79-87
Trying FAL server: xifenfei
Media Recovery Log /u01/oracle/oradata/lunar/arch/1_79.dbf
Media Recovery Log /u01/oracle/oradata/lunar/arch/1_80.dbf
Media Recovery Log /u01/oracle/oradata/lunar/arch/1_81.dbf
Media Recovery Log /u01/oracle/oradata/lunar/arch/1_82.dbf
Media Recovery Log /u01/oracle/oradata/lunar/arch/1_83.dbf
Media Recovery Log /u01/oracle/oradata/lunar/arch/1_84.dbf
Media Recovery Log /u01/oracle/oradata/lunar/arch/1_85.dbf
Media Recovery Log /u01/oracle/oradata/lunar/arch/1_86.dbf
Media Recovery Log /u01/oracle/oradata/lunar/arch/1_87.dbf
Media Recovery Log /u01/oracle/oradata/lunar/arch/1_88.dbf
Media Recovery Waiting for thread 1 seq# 89