Using DUL to Mine Data Files and Recover Objects Other Than Table Data

When mining a database with DUL for recovery, table data can be recovered with unload table/user, but objects such as views, indexes, triggers, stored source, sequences and database links cannot be unloaded directly. They can, however, be reconstructed by mining the dictionary base tables. This article offers some approaches; in practice, analyze the data dictionary and adapt them flexibly to your needs.
1. View
Objects to unload

USER$
OBJ$
COL$
VIEW$

SQL to run

Set pages 10000
Set long 1000
Spool d:\create_view.sql
select 
  'CREATE OR REPLACE VIEW '||O.NAME||' ('||
   replace(c.cols,',',','||chr(10))||')'||CHR(10)||
  'as'||chr(10), v.text
from
user$ u, obj$ o, view$ v,
( SELECT COL.OBJ#, COL.COLS
  FROM
  (SELECT 
    OBJ#, COL#, substr(SYS_CONNECT_BY_PATH(NAME,','),2) COLS
  FROM COL$
  WHERE COL# > 0
  START WITH COL# = 1
  CONNECT BY PRIOR OBJ# = OBJ# AND PRIOR COL# = COL# - 1 ) COL,
  (SELECT OBJ#, COUNT(*) COLCNT FROM COL$ 
  WHERE COL# > 0 GROUP BY OBJ#) CN
  WHERE COL.OBJ# = CN.OBJ# AND COL.COL# = CN.COLCNT
) C
where u.user#=o.owner# and o.obj# = c.obj#
  and v.obj# = o.obj# and u.name=upper('&username');

Notes
1) Run the statements step by step; do not put them all into one script file and run it in one go
2) The SELECT text after each AS may need to be reformatted
3) A ";" must be appended to the end of each CREATE VIEW statement
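
For illustration, a hedged sketch of one spooled entry after the fix-ups above (the view V_DEMO and its text are invented, not taken from a real dump): the column list is already split one per line by the REPLACE/CHR(10) calls, the LONG view text may need its line breaks repaired, and the trailing ";" is added by hand.

CREATE OR REPLACE VIEW V_DEMO (OBJECT_ID,
OBJECT_NAME)
as
select object_id, object_name from t_demo
;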

2. Source
Objects to unload

USER$
SOURCE$
OBJ$

SQL to run

Set pages 10000
Set long 1000
set lines 1000
Spool d:\create_source.sql
SELECT DECODE(S.LINE,1,'CREATE OR REPLACE ','')||SOURCE SOURCE
FROM 
  USER$ U, OBJ$  O, SOURCE$ S
WHERE 
  U.USER# = O.OWNER# AND 
  O.OBJ# = S.OBJ# AND
  U.NAME = UPPER('&username')
  -- AND O.NAME = UPPER('&SOURCE_NAME')
ORDER BY S.OBJ#, S.LINE;

Notes
1) Mind the owner name in the source; if it is imported under a different user, edit the owner in the generated script
2) Once the owner has been fixed, the generated script can be run directly
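
For example (a hedged, made-up fragment, not real SOURCE$ output): if the code contains schema-qualified references such as CHF.T_DEMO and the target schema is TEST, those prefixes have to be edited in create_source.sql before it is run.

CREATE OR REPLACE PROCEDURE P_DEMO AS
BEGIN
  INSERT INTO CHF.T_DEMO VALUES (1);   -- change CHF. to TEST. when importing into TEST
END;
/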

3. Index
Objects to unload

USER$
OBJ$
COL$
IND$
ICOL$

SQL to run

Set pages 10000
Set long 1000
set lines 1000
Spool d:\create_index.sql
SELECT 
  'CREATE '||decode(bitand(IDX.property, 1), 1, 'UNIQUE', '')||
  ' INDEX '||I.NAME||' ON '||T.NAME||'('||IDX.PATH||');' INDEX_DDL
FROM 
  USER$ U, OBJ$  T, OBJ$ I, 
  (
    select I.PROPERTY, I.BO#, I.OBJ#, C.POS#,
            SUBSTR(sys_connect_by_path(CN.NAME,','),2) path
    from IND$ I, ICOL$ C, COL$ CN
    WHERE I.OBJ# = C.OBJ# AND I.BO# = C.BO#
      AND I.BO# = CN.OBJ# AND C.COL# = CN.INTCOL#
    start with C.POS#=1 
    connect by PRIOR I.OBJ# = I.OBJ# 
            AND prior C.POS# = C.POS# - 1 ) IDX,
  (SELECT I.BO#, I.OBJ#, COUNT(*) COLCNT 
    FROM ICOL$ I GROUP BY I.BO#, I.OBJ#) IDXC
WHERE 
  U.USER# = T.OWNER# AND 
  IDX.BO# = T.OBJ# AND
  IDX.OBJ# = I.OBJ# AND
  IDX.BO# =  IDXC.BO# AND
  IDX.OBJ# = IDXC.OBJ# AND
  IDX.POS# = IDXC.COLCNT AND
  U.NAME = upper('&username')
ORDER BY T.NAME, I.NAME;

Notes
1) SYS_CONNECT_BY_PATH is used, so Oracle 10g or later is required
2) The SQL does not cover partitioned indexes
3) Check whether any statement was truncated because the line size was too small
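
For illustration, a hedged example of the kind of lines the query spools (index, table and column names are invented); the double space after CREATE for non-unique indexes comes from the empty DECODE branch and is harmless, and note 3 refers to exactly this output being cut off when linesize is too small.

CREATE UNIQUE INDEX PK_T_DEMO ON T_DEMO(OBJECT_ID);
CREATE  INDEX IND_T_DEMO_NAME ON T_DEMO(OWNER,OBJECT_NAME);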

4. Sequence
Objects to unload

USER$
OBJ$
SEQ$

SQL to run

Set pages 10000
Set long 1000
set lines 1000
Spool d:\create_sequence.sql
SELECT 
  'CREATE SEQUENCE '|| SEQ_NAME || 
  ' MINVALUE '||minval ||
  ' MAXVALUE '||MAXVAL ||
  ' START WITH '||LASTVAL ||
  ' ' || CYC || ' ' || ORD ||
  DECODE(SIGN(CACHE), 1,' CACHE '|| CACHE, ' NOCACHE') ||
  ';' SEQ_DDL
from
  (select u.name OWNER, o.name SEQ_NAME,
      s.minvalue MINVAL, s.maxvalue MAXVAL, 
      s.increment$ INC,
      decode (s.cycle#, 0, 'NOCYCLE', 1, 'CYCLE ') CYC,
      decode (s.order$, 0, 'NOORDER', 1, 'ORDER') ORD,
      s.cache, s.highwater LASTVAL
  from seq$ s, obj$ o, user$ u
  where u.user# = o.owner#
    and o.obj# = s.obj# 
and u.name=upper('&username'));
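
For illustration, a hedged example of one generated statement (name and values are invented). Note that the script takes START WITH from SEQ$.HIGHWATER, so the recreated sequence resumes from the high-water mark rather than from the original start value and does not re-issue numbers that may already have been handed out.

CREATE SEQUENCE SEQ_DEMO MINVALUE 1 MAXVALUE 9999999999999999999999999999 START WITH 1021 NOCYCLE NOORDER CACHE 20;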

5. Trigger
Objects to unload

OBJ$
USER$
TRIGGER$

SQL to run

Set pages 10000
Set long 1000
set lines 1000
Spool d:\create_trigger.sql
select
   'CREATE OR REPLACE TRIGGER '|| trigger_name || chr(10)||
   decode( substr( trigger_type, 1, 1 ), 
           'A', 'AFTER ', 'B', 'BEFORE ', 'I', 
           'INSTEAD OF ' ) ||
   triggering_event || ' ON ' || table_owner || '.' ||
   table_name || chr(10) || REF_CLAUSE || chr(10) ||
   decode( instr( trigger_type, 'EACH ROW' ), 0, null,
           'FOR EACH ROW' ), trigger_body
from  (
   select trigusr.name owner, trigobj.name trigger_name,
      decode(t.type#, 0, 'BEFORE STATEMENT',
           1, 'BEFORE EACH ROW',   2, 'AFTER STATEMENT',
           3, 'AFTER EACH ROW',    4, 'INSTEAD OF',
           'UNDEFINED') trigger_type,
   decode(t.insert$*100 + t.update$*10 + t.delete$,
           100, 'INSERT', 010, 'UPDATE', 001, 'DELETE',
           110, 'INSERT OR UPDATE', 101, 'INSERT OR DELETE',
           011, 'UPDATE OR DELETE',
           111, 'INSERT OR UPDATE OR DELETE', 
           'ERROR') triggering_event,
   tabusr.name table_owner, tabobj.name table_name, 
   'REFERENCING NEW AS '||t.refnewname||' OLD AS '||t.refoldname REF_CLAUSE,
   t.whenclause,decode(t.enabled, 0, 'DISABLED', 1, 'ENABLED', 'ERROR') STATUS,
   t.definition , t.action# trigger_body
   from obj$ trigobj, obj$ tabobj, trigger$ t,
        user$ tabusr, user$ trigusr
   where (trigobj.obj#   = t.obj# and
       tabobj.obj#    = t.baseobject and
       tabobj.owner#  = tabusr.user# and
       trigobj.owner# = trigusr.user# and
       bitand(t.property, 63)     < 8 ))
where table_owner=upper('&username')
order by owner, trigger_name;
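
For illustration, a hedged sketch of one reconstructed statement (names and body are invented). Two things to watch for: the outer SELECT does not emit T.WHENCLAUSE, so a WHEN clause, if the trigger has one, must be added back by hand, and each statement needs a trailing "/" before it can be executed in SQL*Plus.

CREATE OR REPLACE TRIGGER TRG_T_DEMO
BEFORE INSERT OR UPDATE ON CHF.T_DEMO
REFERENCING NEW AS NEW OLD AS OLD
FOR EACH ROW
BEGIN
  :NEW.MODIFY_TIME := SYSDATE;
END;
/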

6. Dblink
Objects to unload

Sys.link$
sys.user$

Query to run

 SELECT U.NAME   USERNAME,
       L.NAME   DBLINK_NAME,
       L.USERID LOGIN_USER,
       L.HOST   TNS,
       L.CTIME
  FROM LINK$ L, USER$ U
 WHERE L.OWNER# = U.USER#
   AND U.NAME = UPPER('&USERNAME');
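
The query only lists the link metadata; below is a hedged sketch of turning one result row into DDL (all values are placeholders, and the password is not returned by the query, so it has to be supplied by hand):

CREATE DATABASE LINK XFF_LINK              -- DBLINK_NAME
  CONNECT TO SCOTT IDENTIFIED BY "xxxxxx"  -- LOGIN_USER; password supplied manually
  USING 'ORA11G';                          -- TNS (L.HOST)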

ORACLE DUL Summary

The three axes of Oracle database recovery, used to minimize the data loss caused when an ORACLE database cannot be opened:
Axe 1: HIDDEN PARAMETERS AND EVENTS
Axe 2: BBED
Axe 3: DUL

When the first two axes cannot solve the problem, it is time to consider DUL, the ultimate Oracle database recovery tool. The DUL tests below are summarized here for easy reference:
DUL and partitioned tables
Recovering dropped tables with DUL (test)
Extracting files from a damaged ASM with DUL
Recovering data from ASM with DUL
Recovering truncated tables with DUL (test)
DUL 10 supports Oracle 11g R2
A first attempt at mining data files with DUL
Mining an ORACLE 8.0 database with DUL
Updating data file contents with DUL
DUL 10 export_mode=true enhancements
Converting exp dump files to sqlldr format with DUL
DUL supports ORACLE 12C CDB database recovery
Converting expdp dump files to sqlldr format with DUL
Using DUL to mine data files and recover objects other than table data
To promote the development of DUL in China, discussion of problems encountered while using DUL is welcome

DUL supports ORACLE 12C CDB database recovery

Anyone familiar with DUL knows that it uses kcvfhrdb in file# 1 block 1 to find the segment header of bootstrap$ (kcvfhrdb is in fact the rdba of the bootstrap$ segment header), then uses the SQL stored in bootstrap$ to locate the core dictionary base tables (obj$, tab$, col$, seg$ and so on), uses those to find the segment entry of each object, derives the extent map from the segment, and finally unloads the data extent by extent (when SYSTEM is lost, DUL instead scans the files to determine which segment each extent belongs to and recovers from there; that case is outside the scope of this article). Before ORACLE 12C an instance served at most one database, so a complete database contained exactly one bootstrap$, and locating kcvfhrdb from file# 1 block 1 was enough to read everything in the database. Starting with 12C the CDB concept was introduced: one CDB database contains multiple PDBs. How should a DUL-like tool recover those PDBs? Based on my understanding of CDB, here is some background on how the bootstrap$ table behaves in a CDB database.
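
As a side note (a hedged sketch, not part of the original demonstration): an rdba packs the relative file number in its top 10 bits and the block number in its low 22 bits, which is the arithmetic behind the dbms_utility calls used below for kcvfhrdb = 0x00400208.

select to_number('00400208','xxxxxxxx')                    rdba,
       trunc(to_number('00400208','xxxxxxxx')/power(2,22)) rel_file#,
       mod(to_number('00400208','xxxxxxxx'),power(2,22))   block#
  from dual;
-- expected: rdba 4194824, rel_file# 1, block# 520 (the bootstrap$ segment header)
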
The bootstrap$ table exists in every PDB, which can be proven with bbed

--check PDB information
SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           MOUNTED
         4 PDB2                           READ WRITE NO
         5 ORA11G                         MOUNTED

SQL>  select con_id,header_file,header_block from cdb_segments where segment_name='BOOTSTRAP$';

    CON_ID HEADER_FILE HEADER_BLOCK
---------- ----------- ------------
         4          11          520
         1           1          520
         2           5          520
----some containers are not open read write, so they do not show up in the cdb_segments query

--file 1
RMAN> copy datafile 1 to '/tmp/system_01.dbf';

BBED> set block 1
        BLOCK#          1

BBED> map
 File: /tmp/system_01.dbf (0)
 Block: 1                                     Dba:0x00000000
------------------------------------------------------------
 Data File Header

 struct kcvfh, 1112 bytes                   @0       

 ub4 tailchk                                @8188    

BBED> p kcvfhrdb
ub4 kcvfhrdb                                @96       0x00400208

SQL> select to_number('400208','xxxxxxxxxx') from dual;

TO_NUMBER('400208','XXXXXXXXXX')
--------------------------------
                         4194824

SQL> select dbms_utility.data_block_address_block(4194824) "block",
  2  dbms_utility.data_block_address_file(4194824) "file" from dual;

     block       file
---------- ----------
       520          1

----so the rdba of the bootstrap$ segment header is 4194824, i.e. rfile# 1 block# 520

--file 11
RMAN> copy datafile 11 to '/tmp/system_11.dbf';

BBED> set filename '/tmp/system_11.dbf'
        FILENAME        /tmp/system_11.dbf

BBED> set block 1
        BLOCK#          1

BBED> p kcvfhrdb
ub4 kcvfhrdb                                @96       0x00400208

---the rdba shown is exactly the same as kcvfhrdb in file# 1, i.e. it also points to rfile# 1 block# 520

--check a PDB that has not been opened and that came from an 11.2.0.4 database upgraded to 12.1.0.1
ASMCMD> cp system01.dbf /tmp/system_18.dbf
copying +data/ora11g/system01.dbf -> /tmp/system_18.dbf

BBED> set filename '/tmp/system_18.dbf'
        FILENAME        /tmp/system_18.dbf

BBED>  set block 1
        BLOCK#          1

BBED> p kcvfhrdb
ub4 kcvfhrdb                                @96       0x0041ad40

SQL> select to_number('41ad40','xxxxxxxxx') from dual;

TO_NUMBER('41AD40','XXXXXXXXX')
-------------------------------
                        4304192

SQL> select dbms_utility.data_block_address_block(4304192) "block",
  2  dbms_utility.data_block_address_file(4304192) "file" from dual;

     block       file
---------- ----------
    109888          1
----so the rdba of the bootstrap$ segment header is 4304192, i.e. rfile# 1 block# 109888

Query container$ to confirm bootstrap$

SQL> select a.con_id#, a.dbid, a.rdba, dbms_utility.data_block_address_file(a.rdba) "file",
2    dbms_utility.data_block_address_block(a.rdba) "block"from container$ a;

   CON_ID#       DBID       RDBA       file      block
---------- ---------- ---------- ---------- ----------
         1 1922813718    4194824          1        520
         5 4211303690    4304192          1     109888
         2 4048821679    4194824          1        520
         4 3872456618    4194824          1        520
         3 3313918585    4194824          1        520

From the points above it is clear that, in the ORACLE 12C CDB design, for compatibility with pre-12C versions (a pre-12C database can be plugged into a CDB as a PDB) and so that working inside a PDB feels exactly the same as working in a traditional database, each PDB computes its rdba values with the PDB-internal RELFILE# starting from 1 (every rdba is relative to its own PDB). That makes the rdba values returned by the container$ query easy to understand: the file number in them is a relative file number, not an absolute one, i.e. it points to the first data file of the PDB (the traditional system01.dbf).

Relationship between the file# in an rdba and the file# in the CDB

SQL> show con_name;

CON_NAME
------------------------------
PDB2
SQL> select file#, RELFILE#   from file$;

     FILE#   RELFILE#
---------- ----------
        12          4
        11          1
        13         13

SQL> show con_name;

CON_NAME
------------------------------
CDB$ROOT
SQL> select file#, RELFILE#   from file$;

     FILE#   RELFILE#
---------- ----------
         1          1
         3          3
         5
         6          6
         2
         4          4

6 rows selected.

This analysis makes it clear that the current DUL is fully capable of handling an ORACLE 12C CDB database.
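
As a complement to the file$ output above, here is a hedged sketch (values taken from this test) of mapping a decoded relative file number back to an absolute file number, either through file$ inside the PDB or through v$datafile, whose RFILE# column is the relative number:

-- inside the PDB (alter session set container=PDB2):
select f.file# absolute_file#, f.relfile# relative_file#
  from file$ f
 where f.relfile# = dbms_utility.data_block_address_file(4194824);

-- from the root, for all containers:
select d.con_id, d.file# absolute_file#, d.rfile# relative_file#, d.name
  from v$datafile d
 where d.rfile# = 1;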

Recovering PDB data inside a CDB with DUL

--create a test table in the PDB
SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           MOUNTED
         4 PDB2                           READ WRITE NO
         5 ORA11G                         MOUNTED

SQL> alter session set container=pdb2;

Session altered.

SQL> show con_name;

CON_NAME
------------------------------
PDB2

SQL> show con_id

CON_ID
------------------------------
3

SQL> create user xff identified by xifenfei;

User created.

SQL> grant dba to xff;

Grant succeeded.

SQL> create table xff.t_xifenfei tablespace users 
  2  as select * from dba_objects;

Table created.

SQL> alter system checkpoint;

System altered.

SQL> select count(*) from xff.t_xifenfei;

  COUNT(*)
----------
     90756

--extract the data with DUL
[oracle@xifenfei dul]$ ./dul


 Strictly Oracle Internal Use Only


DUL: Warning: Recreating file "dul.log"
Disk group DATA, dul group_cid 0
Discovered disk /dev/sdb as diskgroup DATA, disk number 0 size 20480 Mb File1 starts at 10, dul_disk_cid 0

DUL: Warning: Dictionary cache DC_ASM_EXTENTS is empty
Probing for attributes in File9, the attribute directory, for disk group DATA
attribute name "_extent_sizes", value "1 4 16"
attribute name "_extent_counts", value "20000 20000 214748367"
Oracle data file size 283123712 bytes, block size 8192
Found db_id = 1922813718
Found db_name = CDB
Oracle data file size 713039872 bytes, block size 8192
DUL> bootstrap;
Probing file = 1, block = 520
. unloading table                BOOTSTRAP$
DUL: Warning: block number is non zero but marked deferred trying to process it anyhow
      60 rows unloaded
DUL: Warning: Dictionary cache DC_BOOTSTRAP is empty
Reading BOOTSTRAP.dat 60 entries loaded
Parsing Bootstrap$ contents
DUL: Warning: Recreating file "dict.ddl"
Generating dict.ddl for version 11
 OBJ$: segobjno 18, file 1 block 240
 TAB$: segobjno 2, tabno 1, file 1  block 144
 COL$: segobjno 2, tabno 5, file 1  block 144
 USER$: segobjno 10, tabno 1, file 1  block 208
Running generated file "@dict.ddl" to unload the dictionary tables
. unloading table                      OBJ$   90758 rows unloaded
. unloading table                      TAB$    2363 rows unloaded
. unloading table                      COL$  106731 rows unloaded
. unloading table                     USER$     124 rows unloaded
Reading USER.dat 124 entries loaded
Reading OBJ.dat 90758 entries loaded and sorted 90758 entries
Reading TAB.dat 2363 entries loaded
Reading COL.dat 106685 entries loaded and sorted 106685 entries
Reading BOOTSTRAP.dat 60 entries loaded

DUL: Warning: Recreating file "dict.ddl"
Generating dict.ddl for version 11
 OBJ$: segobjno 18, file 1 block 240
 TAB$: segobjno 2, tabno 1, file 1  block 144
 COL$: segobjno 2, tabno 5, file 1  block 144
 USER$: segobjno 10, tabno 1, file 1  block 208
 TABPART$: segobjno 692, file 1 block 4528
 INDPART$: segobjno 697, file 1 block 4568
 TABCOMPART$: segobjno 714, file 1 block 9880
 INDCOMPART$: segobjno 719, file 0 block 0
 TABSUBPART$: segobjno 704, file 1 block 9928
 INDSUBPART$: segobjno 709, file 0 block 0
 IND$: segobjno 2, tabno 3, file 1  block 144
 ICOL$: segobjno 2, tabno 4, file 1  block 144
 LOB$: segobjno 2, tabno 6, file 1  block 144
 COLTYPE$: segobjno 2, tabno 7, file 1  block 144
 TYPE$: segobjno 619, tabno 1, file 1  block 1528
 COLLECTION$: segobjno 619, tabno 2, file 1  block 1528
 ATTRIBUTE$: segobjno 619, tabno 3, file 1  block 1528
 LOBFRAG$: segobjno 725, file 1 block 4616
 LOBCOMPPART$: segobjno 728, file 0 block 0
 UNDO$: segobjno 15, file 1 block 224
 TS$: segobjno 6, tabno 2, file 1  block 176
 PROPS$: segobjno 126, file 1 block 1096
Running generated file "@dict.ddl" to unload the dictionary tables
. unloading table                      OBJ$
DUL: Warning: Recreating file "OBJ.ctl"
   90758 rows unloaded
. unloading table                      TAB$
DUL: Warning: Recreating file "TAB.ctl"
    2363 rows unloaded
. unloading table                      COL$
DUL: Warning: Recreating file "COL.ctl"
  106731 rows unloaded
. unloading table                     USER$
DUL: Warning: Recreating file "USER.ctl"
     124 rows unloaded
. unloading table                  TABPART$     234 rows unloaded
. unloading table                  INDPART$     155 rows unloaded
. unloading table               TABCOMPART$       1 row  unloaded

DUL: Error: dc_segment_header(dataobj#=719, ts#=0, fil=0, blk=0) failed
DUL: Warning: Nothing to unload from empty delayed segment creation table INDCOMPART$
. unloading table               TABSUBPART$      32 rows unloaded

DUL: Error: dc_segment_header(dataobj#=709, ts#=0, fil=0, blk=0) failed
DUL: Warning: Nothing to unload from empty delayed segment creation table INDSUBPART$
. unloading table                      IND$    4237 rows unloaded
. unloading table                     ICOL$    6290 rows unloaded
. unloading table                      LOB$     849 rows unloaded
. unloading table                  COLTYPE$    2567 rows unloaded
. unloading table                     TYPE$    3651 rows unloaded
. unloading table               COLLECTION$    1345 rows unloaded
. unloading table                ATTRIBUTE$   13755 rows unloaded
. unloading table                  LOBFRAG$       6 rows unloaded

DUL: Error: dc_segment_header(dataobj#=728, ts#=0, fil=0, blk=0) failed
DUL: Warning: Nothing to unload from empty delayed segment creation table LOBCOMPPART$
. unloading table                     UNDO$       1 row  unloaded
. unloading table                       TS$       4 rows unloaded
. unloading table                    PROPS$      38 rows unloaded
Reading USER.dat 124 entries loaded
Reading OBJ.dat 90758 entries loaded and sorted 90758 entries
Reading TAB.dat 2363 entries loaded
Reading COL.dat 106685 entries loaded and sorted 106685 entries
Reading TABPART.dat 234 entries loaded and sorted 234 entries
Reading TABCOMPART.dat 1 entries loaded and sorted 1 entries
Reading TABSUBPART.dat 32 entries loaded and sorted 32 entries
Reading INDPART.dat 155 entries loaded and sorted 155 entries
Reading IND.dat 4237 entries loaded
Reading LOB.dat 849 entries loaded
Reading ICOL.dat 6290 entries loaded
Reading COLTYPE.dat 2567 entries loaded
Reading TYPE.dat 3651 entries loaded
Reading ATTRIBUTE.dat 13755 entries loaded
Reading COLLECTION.dat
DUL: Warning: Increased the size of DC_COLLECTIONS from 1024 to 8192 entries
 1345 entries loaded
Reading BOOTSTRAP.dat 60 entries loaded
Reading LOBFRAG.dat 6 entries loaded and sorted 6 entries
Reading UNDO.dat 1 entries loaded
Reading TS.dat 4 entries loaded
Reading PROPS.dat 38 entries loaded
Database character set is ZHS16GBK
Database national character set is AL16UTF16
DUL> unload table xff.t_xifenfei;
. unloading table                T_XIFENFEI   90756 rows unloaded

Verify the result

SQL> create table xff.t_xifenfei_new as select  * from xff.t_xifenfei where 1=0;

Table created.

[oracle@xifenfei dul]$ sqlldr xff/xifenfei@pdb2 control=XFF_T_XIFENFEI.ctl 

SQL*Loader: Release 12.1.0.1.0 - Production on Sun Jun 2 18:08:04 2013

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Path used:      Conventional
Commit point reached - logical record count 64
Commit point reached - logical record count 128
Commit point reached - logical record count 192
Commit point reached - logical record count 256
Commit point reached - logical record count 320
Commit point reached - logical record count 384
Commit point reached - logical record count 448
Commit point reached - logical record count 512
Commit point reached - logical record count 576
…………
Commit point reached - logical record count 90589
Commit point reached - logical record count 90653
Commit point reached - logical record count 90717
Commit point reached - logical record count 90756

Table "XFF"."T_XIFENFEI_NEW":
  90756 Rows successfully loaded.

Check the log file:
  XFF_T_XIFENFEI.log
for more information about the load.

SQL> select count(*) from xff.t_xifenfei_new;

  COUNT(*)
----------
     90756

By analyzing the bootstrap$ layout of 12C and the principles DUL uses to recover a database, a few adjustments were enough for DUL to recover the PDB data inside the CDB perfectly.

DUL and partitioned tables

Create the SALES partitioned table test case

CREATE TABLE SALES 
( 
  PRODUCT_ID VARCHAR2(5), 
  SALES_DATE DATE, 
  SALES_COST NUMBER(10), 
  STATUS VARCHAR2(20) 
) 
PARTITION BY RANGE(SALES_DATE) 
  SUBPARTITION BY LIST (STATUS) 
  SUBPARTITION TEMPLATE 
  ( 
    SUBPARTITION SUB1 VALUES ('ACTIVE') , 
    SUBPARTITION SUB2 VALUES ('INACTIVE') 
  ) 
( 
  PARTITION P1 VALUES LESS THAN (TO_DATE('2003-01-01','YYYY-MM-DD')), 
  PARTITION P2 VALUES LESS THAN (TO_DATE('2003-03-01','YYYY-MM-DD'))
) 
/
INSERT INTO SALES VALUES('00001','01-Jan-02',100,'ACTIVE') 
/ 
INSERT INTO SALES VALUES('00002','01-Jan-01',200,'ACTIVE') 
/ 
INSERT INTO SALES VALUES('00003','01-Feb-03',300,'INACTIVE') 
/ 
INSERT INTO SALES VALUES('00004','04-Feb-03',300,'INACTIVE') 
/ 
INSERT INTO SALES VALUES('00005','04-Feb-02',300,'INACTIVE') 
/ 

Query results

SQL> select * from sales;

PRODU SALES_DATE   SALES_COST STATUS
----- ------------ ---------- --------------------
00001 01-JAN-02           100 ACTIVE
00002 01-JAN-01           200 ACTIVE
00005 04-FEB-02           300 INACTIVE
00003 01-FEB-03           300 INACTIVE
00004 04-FEB-03           300 INACTIVE

SQL> select  * from sales PARTITION(p1);

PRODU SALES_DATE   SALES_COST STATUS
----- ------------ ---------- --------------------
00001 01-JAN-02           100 ACTIVE
00002 01-JAN-01           200 ACTIVE
00005 04-FEB-02           300 INACTIVE

SQL> select  * from sales PARTITION(p2);

PRODU SALES_DATE   SALES_COST STATUS
----- ------------ ---------- --------------------
00003 01-FEB-03           300 INACTIVE
00004 04-FEB-03           300 INACTIVE

SQL> select  * from sales SUBPARTITION(p1_sub1);

PRODU SALES_DATE   SALES_COST STATUS
----- ------------ ---------- --------------------
00001 01-JAN-02           100 ACTIVE
00002 01-JAN-01           200 ACTIVE

SQL> select  * from sales SUBPARTITION(p1_sub2);

PRODU SALES_DATE   SALES_COST STATUS
----- ------------ ---------- --------------------
00005 04-FEB-02           300 INACTIVE

SQL> select  * from sales SUBPARTITION(p2_sub1);

no rows selected

SQL> select  * from sales SUBPARTITION(p2_sub2);

PRODU SALES_DATE   SALES_COST STATUS
----- ------------ ---------- --------------------
00003 01-FEB-03           300 INACTIVE
00004 04-FEB-03           300 INACTIVE

Start DUL

[oracle@xifenfei dul]$ ./dul

Data UnLoader: 10.2.0.5.20 - Internal Only - on Sat Jan 19 17:37:45 2013
with 64-bit io functions

Copyright (c) 1994 2013 Bernard van Duijnen All rights reserved.

 Strictly Oracle Internal Use Only


DUL: Warning: Recreating file "dul.log"
Reading USER.dat 91 entries loaded
Reading OBJ.dat 74764 entries loaded and sorted 74764 entries
Reading TAB.dat 2882 entries loaded
Reading COL.dat 94598 entries loaded and sorted 94598 entries
Reading SEG.dat 17 entries loaded
Reading EXT.dat 43 entries loaded and sorted 43 entries
Reading TABPART.dat 150 entries loaded and sorted 150 entries
Reading TABCOMPART.dat 3 entries loaded and sorted 3 entries
Reading TABSUBPART.dat 36 entries loaded and sorted 36 entries
Reading INDPART.dat 169 entries loaded and sorted 169 entries
Reading INDCOMPART.dat 0 entries loaded and sorted 0 entries
Reading INDSUBPART.dat 0 entries loaded and sorted 0 entries
Reading IND.dat 5150 entries loaded
Reading LOB.dat
DUL: Warning: Increased the size of DC_LOBS from 1024 to 8192 entries
 1286 entries loaded
Reading ICOL.dat 7569 entries loaded
Reading COLTYPE.dat 3003 entries loaded
Reading TYPE.dat 2872 entries loaded
Reading ATTRIBUTE.dat 11127 entries loaded
Reading COLLECTION.dat 985 entries loaded
Reading COMPATSEG.dat 0 entries loaded
Reading BOOTSTRAP.dat 60 entries loaded
Reading LOBFRAG.dat 1 entries loaded and sorted 1 entries
Reading LOBCOMPPART.dat 0 entries loaded and sorted 0 entries
Reading UNDO.dat 21 entries loaded
Reading TS.dat 10 entries loaded
Reading PROPS.dat 36 entries loaded
Database character set is ZHS16GBK
Database national character set is AL16UTF16
Found db_id = 4188950066
Found db_name = ORA11G

unload syntax

UNLOAD [TABLE]  [  schema_name . ]  table_name 
              [ PARTITION(  partition_name ) ]
              [ SUBPARTITION(  sub_partition_name ) ]
              [ (  column_definitions ) ]
              [  cluster_clause  ]
              [  storage_clause  ] ;

unload the whole table

DUL> unload table chf.SALES;
. unloading table                     SALES
 . Unloading partition                      P1
 .   Unloading sub partition                 P1_SUB1
 .   Unloading sub partition                 P1_SUB2
 . Unloading partition                      P2
 .   Unloading sub partition                 P2_SUB1
 .   Unloading sub partition                 P2_SUB2
.           table SALES total        5 rows unloaded

[root@xifenfei dul]# ls -l CHF_SALES*
-rw-r--r-- 1 oracle oinstall 421 Jan 19 18:09 CHF_SALES.ctl
-rw-r--r-- 1 oracle oinstall 251 Jan 19 18:09 CHF_SALES.dat
[root@xifenfei dul]# more CHF_SALES.dat
|00001| |01-JAN-2002 AD 00:00:00| |100| |ACTIVE|
|00002| |01-JAN-2001 AD 00:00:00| |200| |ACTIVE|
|00005| |04-FEB-2002 AD 00:00:00| |300| |INACTIVE|
|00003| |01-FEB-2003 AD 00:00:00| |300| |INACTIVE|
|00004| |04-FEB-2003 AD 00:00:00| |300| |INACTIVE|

unload a partition

DUL> unload table chf.SALES PARTITION(p1);
. unloading table                     SALES
 . Unloading partition                      P1
 .   Unloading sub partition                 P1_SUB1
 .   Unloading sub partition                 P1_SUB2
.           table SALES total        3 rows unloaded

[root@xifenfei dul]# ls -l CHF_SALES_P1*
-rw-r--r-- 1 oracle oinstall 424 Jan 19 18:10 CHF_SALES_P1.ctl
-rw-r--r-- 1 oracle oinstall 149 Jan 19 18:10 CHF_SALES_P1.dat
[root@xifenfei dul]# more CHF_SALES_P1.dat
|00001| |01-JAN-2002 AD 00:00:00| |100| |ACTIVE|
|00002| |01-JAN-2001 AD 00:00:00| |200| |ACTIVE|
|00005| |04-FEB-2002 AD 00:00:00| |300| |INACTIVE|

unload a subpartition

DUL> unload table chf.SALES SUBPARTITION(p2_SUB2);
. unloading table                     SALES
 . Unloading partition                      P1
 . Unloading partition                      P2
 .   Unloading sub partition                 P2_SUB2
.           table SALES total        2 rows unloaded

[root@xifenfei dul]# ls -l CHF_SALES_P2_SUB2*
-rw-r--r-- 1 oracle oinstall 429 Jan 19 18:14 CHF_SALES_P2_SUB2.ctl
-rw-r--r-- 1 oracle oinstall 102 Jan 19 18:14 CHF_SALES_P2_SUB2.dat
[root@xifenfei dul]# more CHF_SALES_P2_SUB2.dat
|00003| |01-FEB-2003 AD 00:00:00| |300| |INACTIVE|
|00004| |04-FEB-2003 AD 00:00:00| |300| |INACTIVE|

Check the control files

[root@xifenfei dul]# ls -l CHF_SALES*
-rw-r--r-- 1 oracle oinstall 421 Jan 19 18:09 CHF_SALES.ctl
-rw-r--r-- 1 oracle oinstall 251 Jan 19 18:09 CHF_SALES.dat
-rw-r--r-- 1 oracle oinstall 424 Jan 19 18:10 CHF_SALES_P1.ctl
-rw-r--r-- 1 oracle oinstall 149 Jan 19 18:10 CHF_SALES_P1.dat
-rw-r--r-- 1 oracle oinstall 429 Jan 19 18:14 CHF_SALES_P2_SUB2.ctl
-rw-r--r-- 1 oracle oinstall 102 Jan 19 18:14 CHF_SALES_P2_SUB2.dat
[root@xifenfei dul]# more CHF_SALES.ctl
load data
CHARACTERSET ZHS16GBK
infile 'CHF_SALES.dat'
insert
into table "CHF"."SALES"
fields terminated by whitespace
(
  "PRODUCT_ID"                       CHAR(5) enclosed by X'7C'       
 ,"SALES_DATE"                       DATE "DD-MON-YYYY AD HH24:MI:SS" enclosed by X'7C'
 ,"SALES_COST"                       CHAR(3) enclosed by X'7C'       
 ,"STATUS"                           CHAR(8) enclosed by X'7C'       
)
[root@xifenfei dul]# more CHF_SALES_P1.ctl
load data
CHARACTERSET ZHS16GBK
infile 'CHF_SALES_P1.dat'
insert
into table "CHF"."SALES"
fields terminated by whitespace
(
  "PRODUCT_ID"                       CHAR(5) enclosed by X'7C'       
 ,"SALES_DATE"                       DATE "DD-MON-YYYY AD HH24:MI:SS" enclosed by X'7C'
 ,"SALES_COST"                       CHAR(3) enclosed by X'7C'       
 ,"STATUS"                           CHAR(8) enclosed by X'7C'       
)
[root@xifenfei dul]# more CHF_SALES_P2_SUB2.ctl
load data
CHARACTERSET ZHS16GBK
infile 'CHF_SALES_P2_SUB2.dat'
insert
into table "CHF"."SALES"
fields terminated by whitespace
(
  "PRODUCT_ID"                       CHAR(5) enclosed by X'7C'       
 ,"SALES_DATE"                       DATE "DD-MON-YYYY AD HH24:MI:SS" enclosed by X'7C'
 ,"SALES_COST"                       CHAR(3) enclosed by X'7C'       
 ,"STATUS"                           CHAR(8) enclosed by X'7C'       
)

This shows that the table structure in every generated control file is that of the whole table rather than of the individual partition; in practice, partition exchange can be used to put the data back into the right partition, as sketched below.
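
A hedged sketch of that partition-exchange idea, reusing the SALES example above (SALES_STAGE is a hypothetical staging table; the exact steps depend on the real recovery scenario): load the per-(sub)partition .dat file into a non-partitioned staging table with sqlldr after pointing the .ctl at it, then swap the loaded segment into place.

CREATE TABLE CHF.SALES_STAGE AS SELECT * FROM CHF.SALES WHERE 1 = 0;
-- load CHF_SALES_P2_SUB2.dat into CHF.SALES_STAGE with sqlldr, then:
ALTER TABLE CHF.SALES EXCHANGE SUBPARTITION P2_SUB2 WITH TABLE CHF.SALES_STAGE
  WITHOUT VALIDATION;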

Completely solving the problem of garbled CLOB output from DUL

Anyone who has used DUL knows that it frequently produces garbled output when handling CLOB columns, and no good official workaround is provided. After continuous testing I finally solved, during the 2013 Spring Festival, the question that had puzzled me throughout 2012, and I no longer regard odu/aul as more advanced just because DUL could not handle CLOBs. Consider this the first ORACLE database recovery gift I gave myself in 2013.
Although DUL provides the LDR_OUTPUT_IN_UTF8 option to turn all CLOBs into UTF8, it did not work in my tests. The CLOB handling results are compared below:
Configure the init.dul parameters

osd_big_endian_flag=false
osd_dba_file_bits=10
osd_c_struct_alignment=32
osd_file_leader_size=1
osd_word_size = 32
feedback = 1000
dc_columns=2000000
dc_tables=10000
dc_objects=1000000
dc_users=400
dc_segments=100000
USE_LOB_FILES =TRUE
control_file = control.dul
LDR_ENCLOSE_CHAR=|
db_block_size=8192
export_mode=FALSE
compatible=11

Test the CLOB column under the default settings

[oracle@xifenfei dul]$ ./dul

Data UnLoader: 10.2.0.5.20 - Internal Only - on Sat Jan 19 00:19:05 2013
with 64-bit io functions

Copyright (c) 1994 2013 Bernard van Duijnen All rights reserved.

 Strictly Oracle Internal Use Only


DUL> unload table chf.EVT_T_COMMON_LOG;
. unloading (index organized) table     LOB024001aa
DUL: Warning: Recreating file "LOB024001aa.ctl"
       0 rows unloaded
. unloading (index organized) table     LOB024001da
DUL: Warning: Recreating file "LOB024001da.ctl"
       0 rows unloaded
. unloading (index organized) table     LOB0240020a
DUL: Warning: Recreating file "LOB0240020a.ctl"
       0 rows unloaded
. unloading (index organized) table     LOB0240023a
DUL: Warning: Recreating file "LOB0240023a.ctl"
       0 rows unloaded
. unloading (index organized) table     LOB0240026a
DUL: Warning: Recreating file "LOB0240026a.ctl"
       0 rows unloaded
Preparing lob metadata from lob index
Reading LOB024001aa.dat 0 entries loaded and sorted 0 entries
Preparing lob metadata from lob index
Reading LOB024001da.dat 0 entries loaded and sorted 0 entries
Preparing lob metadata from lob index
Reading LOB0240020a.dat 0 entries loaded and sorted 0 entries
Preparing lob metadata from lob index
Reading LOB0240023a.dat 0 entries loaded and sorted 0 entries
Preparing lob metadata from lob index
Reading LOB0240026a.dat 0 entries loaded and sorted 0 entries
. unloading table          EVT_T_COMMON_LOG
DUL: Warning: Recreating file "CHF_EVT_T_COMMON_LOG.ctl"
    1863 rows unloaded

--change the target table in CHF_EVT_T_COMMON_LOG.ctl to TEST.T_TEST

--create the test table
SQL> create table t_test as
  2   select * from chf.EVT_T_COMMON_LOG where 1=0;

Table created.

--load the data
[oracle@xifenfei dul]$ sqlldr test/test control=CHF_EVT_T_COMMON_LOG.ctl 

SQL*Loader: Release 11.2.0.3.0 - Production on Fri Jan 18 23:50:32 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Commit point reached - logical record count 64
Commit point reached - logical record count 128
…………
Commit point reached - logical record count 1856
Commit point reached - logical record count 1863

--check whether the data is garbled
SQL> desc chf.EVT_T_COMMON_LOG
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                        NOT NULL VARCHAR2(64)
 CASES_OF_STATISTICAL_SITATION                      CLOB
…………
 DEPARTMENT_ID                                      VARCHAR2(40)
 MOTIFY_MAN_ID                                      VARCHAR2(40)

SQL>  select CASES_OF_STATISTICAL_SITATION from t_test where rownum<3;

CASES_OF_STATISTICAL_SITATION
--------------------------------------------------------------------------------
b*kb

     _SfZz

The test shows that DUL very easily produces garbled output when handling CLOB columns, because many factors are involved (NLS_LANG, LANG, the LDR_OUTPUT_IN_UTF8 parameter, and so on). After many attempts, none succeeded, so the rough conclusion is: under normal settings DUL cannot cleanly handle non-English CLOBs.

Test the CLOB column after modifying the LOB file attributes

--change the target table in CHF_EVT_T_COMMON_LOG.ctl to TEST.EVT_T_COMMON_LOG
--load the data
[oracle@xifenfei dul]$ sqlldr test/test control=CHF_EVT_T_COMMON_LOG.ctl 

SQL*Loader: Release 11.2.0.3.0 - Production on Fri Jan 18 23:50:32 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Commit point reached - logical record count 64
Commit point reached - logical record count 128
…………
Commit point reached - logical record count 1856
Commit point reached - logical record count 1863

--check whether the data is garbled
SQL> select CASES_OF_STATISTICAL_SITATION from EVT_T_COMMON_LOG where rownum<3;

CASES_OF_STATISTICAL_SITATION
--------------------------------------------------------------------------------
1、案件统计情况截止至交班时间C时间 0 分),今日立难点问题C7,国庆北路桂门岭社区丽都花园路口,多次上

The test proves that, after the relevant attributes of the CLOB files were modified, DUL handles CLOB columns without any garbling.