incaseformat virus deleted file recovery

Contact: Phone/WeChat (+86 13429648788), QQ (107644445), QQ consultation with 惜分飞

Author: 惜分飞 © All rights reserved [Not to be reproduced in any form without the author's permission; the author reserves the right to pursue legal action.]

Overnight, a flood of reports came in: on the affected machines, the files on every drive except C: had been deleted, and a text file named "incaseformat" may have been created on those drives.


A scan with 360 antivirus confirmed that the machine was infected.

Once the cause was confirmed to be a virus, analysis showed that it deletes files and directories through the DeleteFileA and RemoveDirectory API calls. When launched, the virus copies itself to C:\WINDOWS\tsay.exe, creates a startup entry, and exits, waiting for a reboot; about 20 seconds after the next boot it starts deleting. If you find that files have disappeared but the disk space usage still looks normal, do not reboot; back up your databases first. If you have already rebooted and cannot recover the data yourself, do not perform any write operations on the affected partition; in theory the vast majority of the data can still be recovered. If recovery fails, or the recovered files contain large numbers of bad blocks and cannot be used, you can contact us for maximum-effort recovery.

-bash: /bin/rm: Argument list too long


When deleting a very large number of files on Linux, rm -rf * can fail with "-bash: /bin/rm: Argument list too long". The error occurs because the shell expands the wildcard into an argument list that exceeds the kernel's ARG_MAX limit; find + xargs avoids the problem by passing the file names to rm in batches.

[grid@xifenfei audit]$ rm -rf +ASM2_ora_1*_2017*.aud
-bash: /bin/rm: Argument list too long
[grid@xifenfei audit]$ ls|wc -l
111650450
[grid@xifenfei audit]$ find ./ -name "*.aud" |xargs rm -r
[grid@xifenfei audit]$ ls
[grid@xifenfei audit]$
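
If the file names may contain spaces or newlines, or you would rather let find do the deleting itself, the following variants work as well (a sketch, assuming GNU find and xargs; -delete avoids spawning rm entirely):

find ./ -name "*.aud" -print0 | xargs -0 rm -f
find ./ -name "*.aud" -delete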

ext3/ext4 superblock recovery

Create an ext4 file system

[root@localhost ~]# mkfs.ext4 /dev/sdb1
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1310720 inodes, 5242624 blocks
262131 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2153775104
160 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
        4096000

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done   

[root@localhost ~]# mkdir /sdb
[root@localhost ~]# mount /dev/sdb1 /sdb
[root@localhost ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/ol-root   36G  4.0G   32G  12% /
devtmpfs             1.8G     0  1.8G   0% /dev
tmpfs                1.8G     0  1.8G   0% /dev/shm
tmpfs                1.8G  8.9M  1.8G   1% /run
tmpfs                1.8G     0  1.8G   0% /sys/fs/cgroup
/dev/sda1            497M  195M  303M  40% /boot
tmpfs                369M     0  369M   0% /run/user/0
/dev/sdb1             20G   45M   19G   1% /sdb

Prepare test data

[root@localhost sdb]# cd /etc/sysctl.d/
[root@localhost sysctl.d]# ls
99-sysctl.conf
[root@localhost sysctl.d]# cp 99-sysctl.conf /sdb
[root@localhost sysctl.d]# more 99-sysctl.conf 
# System default settings live in /usr/lib/sysctl.d/00-system.conf.
# To override those settings, enter new settings here, or in an /etc/sysctl.d/<name>.conf file
#
# For more information, see sysctl.conf(5) and sysctl.d(5).

Corrupt the ext4 file system

[root@localhost ~]#  dd if=/dev/zero of=/dev/sdb1 bs=1024 count=5
5+0 records in
5+0 records out
5120 bytes (5.1 kB) copied, 0.00270838 s, 1.9 MB/s
[root@localhost ~]# mount /dev/sdb1 /sdb
mount: unknown filesystem type '(null)'

Errors in the system log

[ 8868.362628] sd 32:0:1:0: [sdb] Cache data unavailable
[ 8868.362632] sd 32:0:1:0: [sdb] Assuming drive cache: write through
[ 8868.363714]  sdb: sdb1
[ 8868.390297] sd 32:0:1:0: [sdb] Cache data unavailable
[ 8868.390301] sd 32:0:1:0: [sdb] Assuming drive cache: write through
[ 8868.391462]  sdb: sdb1
[ 8900.130143] EXT4-fs (sdb1): mounted filesystem with ordered data mode. Opts: (null)
[ 8900.130163] SELinux: initialized (dev sdb1, type ext4), uses xattr
[ 8902.803966] sdb1: WRITE SAME failed. Manually zeroing.
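
The mount fails because the dd above zeroed the start of the partition, where the ext4 primary superblock lives (byte offset 1024), so there is no longer a filesystem signature for mount to recognise. fsck can fall back to one of the backup superblocks; if the mkfs output listing their locations is no longer at hand, a dry run of mke2fs prints where the backups should be without writing anything to the device (a sketch, assuming the filesystem was created with default parameters):

mke2fs -n /dev/sdb1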
Repair with fsck
[root@localhost ~]# fsck -t ext4 /dev/sdb1
fsck from util-linux 2.23.2
e2fsck 1.42.9 (28-Dec-2013)
ext2fs_open2: Bad magic number in super-block
fsck.ext4: Superblock invalid, trying backup blocks...
/dev/sdb1 was not cleanly unmounted, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Free blocks count wrong for group #1 (31740, counted=31739).
Fix<y>? yes
Free blocks count wrong (5116302, counted=5116301).
Fix<y>? yes
Free inodes count wrong for group #0 (8181, counted=8180).
Fix<y>? yes
Free inodes count wrong (1310709, counted=1310708).
Fix<y>? yes

/dev/sdb1: ***** FILE SYSTEM WAS MODIFIED *****
/dev/sdb1: 12/1310720 files (0.0% non-contiguous), 126323/5242624 blocks

Verify the repair result

[root@localhost ~]# 
[root@localhost ~]# 
[root@localhost ~]# mount /dev/sdb1 /sdb
[root@localhost ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/ol-root   36G  4.0G   32G  12% /
devtmpfs             1.8G     0  1.8G   0% /dev
tmpfs                1.8G     0  1.8G   0% /dev/shm
tmpfs                1.8G  8.9M  1.8G   1% /run
tmpfs                1.8G     0  1.8G   0% /sys/fs/cgroup
/dev/sda1            497M  195M  303M  40% /boot
tmpfs                369M     0  369M   0% /run/user/0
/dev/sdb1             20G   45M   19G   1% /sdb
[root@localhost ~]# cd /sdb
[root@localhost sdb]# ls
99-sysctl.conf  lost+found
[root@localhost sdb]# more 99-sysctl.conf 
# System default settings live in /usr/lib/sysctl.d/00-system.conf.
# To override those settings, enter new settings here, or in an /etc/sysctl.d/<name>.conf file
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
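
In this test e2fsck located a backup superblock on its own; if it does not, you can point it at one explicitly (a sketch using the first backup location printed by mkfs above, with -B giving the filesystem block size):

e2fsck -b 32768 -B 4096 /dev/sdb1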

The same procedure repairs an ext3 file system

[root@localhost ~]# mkfs.ext3 /dev/sdb1
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1310720 inodes, 5242624 blocks
262131 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
160 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
        4096000

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done   

[root@localhost ~]# mount /dev/sdb1 /sdb
[root@localhost ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/ol-root   36G  4.0G   32G  12% /
devtmpfs             1.8G     0  1.8G   0% /dev
tmpfs                1.8G     0  1.8G   0% /dev/shm
tmpfs                1.8G  8.9M  1.8G   1% /run
tmpfs                1.8G     0  1.8G   0% /sys/fs/cgroup
/dev/sda1            497M  195M  303M  40% /boot
tmpfs                369M     0  369M   0% /run/user/0
/dev/sdb1             20G   45M   19G   1% /sdb
[root@localhost ~]# dd if=/dev/zero of=/dev/sdb1 bs=1024 count=5
5+0 records in
5+0 records out
5120 bytes (5.1 kB) copied, 0.0138915 s, 369 kB/s
[root@localhost ~]# fsck -t ext3 /dev/sdb1
fsck from util-linux 2.23.2
e2fsck 1.42.9 (28-Dec-2013)
ext2fs_open2: Bad magic number in super-block
fsck.ext3: Superblock invalid, trying backup blocks...
/dev/sdb1 was not cleanly unmounted, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information

/dev/sdb1: ***** FILE SYSTEM WAS MODIFIED *****
/dev/sdb1: 11/1310720 files (0.0% non-contiguous), 126322/5242624 blocks
[root@localhost ~]# mount /dev/sdb1 /sdb
[root@localhost ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/ol-root   36G  4.0G   32G  12% /
devtmpfs             1.8G     0  1.8G   0% /dev
tmpfs                1.8G     0  1.8G   0% /dev/shm
tmpfs                1.8G  8.9M  1.8G   1% /run
tmpfs                1.8G     0  1.8G   0% /sys/fs/cgroup
/dev/sda1            497M  195M  303M  40% /boot
tmpfs                369M     0  369M   0% /run/user/0
/dev/sdb1             20G   45M   19G   1% /sdb

Running fsck repairs is fairly risky and can cause partial or even complete loss of the data on the partition, so back up the partition first (for example with dd) before attempting a repair.
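
For example, a minimal imaging sketch before attempting the repair (the destination path is an assumption; it must sit on a different disk with enough free space):

dd if=/dev/sdb1 of=/backup/sdb1.img bs=1M conv=noerror,sync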

Solaris rm datafile recovery

This morning we received a recovery request from a customer: on a Solaris system he had accidentally run rm -rf oradata and deleted every datafile on one partition. Not knowing what to do, he asked us for support. On logging in we found we were fairly lucky: the database had not crashed, so the obvious approach was to recover the files directly through the still-open file handles.

root@CNISORCLSVR # uname -a
SunOS CNISORCLSVR 5.9 Generic_112233-08 sun4u sparc SUNW,Sun-Fire-880
root@CNISORCLSVR # ps -ef|grep lgwr   
  oracle   597     1  0   Mar 05 ?       17:14 ora_lgwr_xifenfei
    root 28069 28043  0 18:51:17 pts/2    0:00 grep lgwr
root@CNISORCLSVR # ls -ltr
total 189348454
-r--r--r--   1 oracle   dba       657920 Apr 26  2002 12
c---------   1 root     sys       13, 12 Mar 27  2004 8
c---------   1 root     sys       13, 12 Mar 27  2004 10
-rw-r-----   0 oracle   dba      34359730176 Nov 12  2013 291
-rw-r-----   0 oracle   dba      1073750016 Nov 13  2013 293
D---------   1 root     root           0 Mar  5 19:31 11
-rw-r-----   1 oracle   dba         1758 Mar  5 22:04 9
-rw-rw----   1 oracle   dba           24 Mar  5 22:04 13
s---------   0 root     root           0 Mar  8 00:45 14
-rw-r-----   1 oracle   dba      1887444992 Mar 12 03:27 289
-rw-r-----   1 oracle   dba      943726592 Mar 12 11:17 290
-rw-r-----   0 oracle   dba      4294975488 Mar 13 00:09 292
-rw-r-----   0 oracle   dba      268443648 Mar 13 01:33 288
-rw-r-----   0 oracle   dba      536879104 Mar 13 01:33 279
-rw-r-----   1 oracle   dba      134225920 Mar 13 01:33 278
-rw-r-----   0 oracle   dba      134225920 Mar 13 01:33 269
-rw-r-----   1 oracle   dba      268443648 Mar 13 01:33 267
-rw-r-----   1 oracle   dba      148119552 Mar 13 01:33 266
-rw-r-----   1 oracle   dba      10493952 Mar 13 01:33 265
-rw-r-----   1 oracle   dba      26222592 Mar 13 01:33 264
-rw-r-----   1 oracle   dba      62922752 Mar 13 01:33 263
-rw-r-----   1 oracle   dba      20979712 Mar 13 01:33 262
-rw-r-----   0 oracle   dba      134225920 Mar 13 01:33 287
-rw-r-----   1 oracle   dba      209723392 Mar 13 01:33 285
-rw-r-----   0 oracle   dba      536879104 Mar 13 01:33 283
-rw-r-----   1 oracle   dba      67117056 Mar 13 01:33 282
-rw-r-----   0 oracle   dba      536879104 Mar 13 01:33 281
-rw-r-----   0 oracle   dba      536879104 Mar 13 01:33 280
-rw-r-----   0 oracle   dba      536879104 Mar 13 01:33 276
-rw-r-----   0 oracle   dba      1073750016 Mar 13 01:33 275
-rw-r-----   0 oracle   dba      2214600704 Mar 13 01:33 274
-rw-r-----   0 oracle   dba      134225920 Mar 13 01:33 273
-rw-r-----   0 oracle   dba      536879104 Mar 13 01:33 272
c---------   1 root     sys       13,  2 Mar 13 02:00 5
c---------   1 root     sys       13,  2 Mar 13 02:00 4
c---------   1 root     sys       13,  2 Mar 13 02:00 3
c---------   1 root     sys       13,  2 Mar 13 02:00 2
c---------   1 root     sys       13,  2 Mar 13 02:00 1
c---------   1 root     sys       13,  2 Mar 13 02:00 0
--w-------   1 oracle   dba      4640842 Mar 13 04:43 7
--w-------   1 oracle   dba      4640842 Mar 13 04:43 6
-rw-r-----   0 oracle   dba      1207967744 Mar 13 18:21 271
-rw-r-----   0 oracle   dba      15929974784 Mar 13 18:39 284
-rw-r-----   0 oracle   dba      134225920 Mar 13 18:45 277
-rw-r-----   0 oracle   dba      2122326016 Mar 13 18:46 286
-rw-r-----   0 oracle   dba      9261031424 Mar 13 18:47 270
-rw-r-----   0 oracle   dba      18253619200 Mar 13 18:47 268
-rw-r-----   1 oracle   dba      134225920 Mar 13 18:51 261
-rw-r-----   1 oracle   dba      524296192 Mar 13 18:51 260
-rw-r-----   1 oracle   dba      104858112 Mar 13 18:52 259
-rw-r-----   1 oracle   dba      1941504 Mar 13 18:52 258
-rw-r-----   1 oracle   dba      1941504 Mar 13 18:52 257
-rw-r-----   1 oracle   dba      1941504 Mar 13 18:52 256

SQL> select file#,name from v$datafile where name like '/disk%';

     FILE# NAME
---------- --------------------------------------------------
         9 /disk/oradata/xifenfei/xifenfei.dbf
        10 /disk/oradata/xifenfei/CSSN.dbf
        11 /disk/oradata/xifenfei/NCSSN.dbf
        12 /disk/oradata/xifenfei/CSIC_RDS.dbf
        13 /disk/oradata/xifenfei/CSIC_CSSN.dbf
        14 /disk/oradata/xifenfei/CNIS_I.dbf
        15 /disk/oradata/xifenfei/CNIS.dbf
        16 /disk/oradata/xifenfei/TRSWCM6_CSSN.dbf
        17 /disk/oradata/xifenfei/TRSWCM6_CSSN_PLUGINS.dbf
        18 /disk/oradata/xifenfei/DIGIREF.dbf
        20 /disk/oradata/xifenfei/TRSWCM.dbf
        21 /disk/oradata/xifenfei/TRSWCM52_NSLC.dbf
        22 /disk/oradata/xifenfei/TRSWCM52_PLUGINS_NSLC.dbf
        24 /disk/oradata/xifenfei/TRSWCM_PLUGINS.dbf
        25 /disk/oradata/xifenfei/CNIS_ALL.dbf
        27 /disk/oradata/xifenfei/undotbs01.dbf
        28 /disk/oradata/xifenfei/TRS_IDS02.dbf
        29 /disk/oradata/xifenfei/xdb02.dbf

The annoying thing on Solaris is that even after changing into the process's fd directory there is no way to tell which file handles belong to deleted files and which to normal ones, so the only option was to install lsof and analyse further.

root@CNISORCLSVR # pkgadd -d lsof-4.80-sol9-sparc-local

The following packages are available:
  1  IBMlsof     lsof
                 (sparc) 4.80

Select package(s) you wish to process (or 'all' to process
all packages). (default: all) [?,??,q]: all

Processing package instance <IBMlsof> from </tmp/lsof-4.80-sol9-sparc-local>

lsof
(sparc) 4.80
Vic Abell
Using </usr/local> as the package base directory.
## Processing package information.
## Processing system information.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.

The following files are already installed on the system and are being
used by another package:
* /usr/local/bin <attribute change only>

* - conflict with a file which does not belong to any package.

Do you want to install these conflicting files [y,n,?,q] y
## Checking for setuid/setgid programs.

The following files are being installed with setuid and/or setgid
permissions:
 /usr/local/bin/lsof <setgid sys>
 /usr/local/bin/sparcv7/lsof <setgid sys>
 /usr/local/bin/sparcv9/lsof <setgid sys>

Do you want to install these as setuid/setgid files [y,n,?,q] y

Installing lsof as <IBMlsof>

## Installing part 1 of 1.
/usr/local/bin/lsof
/usr/local/bin/sparcv7/lsof
/usr/local/bin/sparcv9/lsof
/usr/local/doc/lsof/00.README.FIRST
/usr/local/doc/lsof/00CREDITS
/usr/local/doc/lsof/00DCACHE
/usr/local/doc/lsof/00DIALECTS
/usr/local/doc/lsof/00DIST
/usr/local/doc/lsof/00FAQ
/usr/local/doc/lsof/00LSOF-L
/usr/local/doc/lsof/00MANIFEST
/usr/local/doc/lsof/00PORTING
/usr/local/doc/lsof/00QUICKSTART
/usr/local/doc/lsof/00README
/usr/local/doc/lsof/00TEST
/usr/local/doc/lsof/00XCONFIG
/usr/local/doc/lsof/lsof.man
/usr/local/man/man8/lsof.8
[ verifying class <none> ]

Installation of <IBMlsof> was successful.

root@CNISORCLSVR # ./lsof -p 597
COMMAND PID   USER   FD   TYPE        DEVICE    SIZE/OFF   NODE NAME
oracle  597 oracle  cwd   VDIR          85,5        2048 106299 /export/home/oracle/app/product/9.2.0/dbs
oracle  597 oracle  txt   VREG          85,5    61272672   2332 /export/home/oracle/app/product/9.2.0/bin/oracle
…………
oracle  597 oracle  260u  VREG          85,5   524296192 106517 /export/home/oracle/oradata/xifenfei/system01.dbf
oracle  597 oracle  261u  VREG          85,5   134225920 106518 /export/home/oracle/oradata/xifenfei/undotbs01.dbf
…………
oracle  597 oracle  268u  VREG        118,70 18253619200  109 /disk (/dev/dsk/c2t600A0B800029CEFA0000036C491B270Bd0s6)
oracle  597 oracle  269u  VREG        118,70   134225920  110 /disk (/dev/dsk/c2t600A0B800029CEFA0000036C491B270Bd0s6)
oracle  597 oracle  270u  VREG        118,70  9261031424  111 /disk (/dev/dsk/c2t600A0B800029CEFA0000036C491B270Bd0s6)
…………
oracle  597 oracle  293u  VREG        118,70  1073750016   14 /disk (/dev/dsk/c2t600A0B800029CEFA0000036C491B270Bd0s6)

At this point it was clear that the files under /disk were the ones we needed to recover. That let us identify the corresponding file handles and, combined with the datafile information, copy the files back out with cp, then recover the datafiles and bring them online at the database level.

cd /proc/597/fd
cp 269 /disk/oradata/xifenfei/CSSN.dbf
chown oracle:dba /disk/oradata/xifenfei/CSSN.dbf
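
Before recovering and bringing a copied file online, it is worth a quick block-level check with Oracle's dbv (DBVERIFY) utility; the block size below is an assumption, use the tablespace's actual block size:

dbv file=/disk/oradata/xifenfei/CSSN.dbf blocksize=8192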

SQL> recover datafile 10;
ORA-00283: recovery session canceled due to errors
ORA-01124: cannot recover data file 10 - file is in use or recovery
ORA-01110: data file 10: '/disk/oradata/xifenfei/CSSN.dbf'


SQL> alter database datafile 10 offline;

Database altered.

SQL> recover datafile 10;
Media recovery complete.
SQL> alter database datafile 10 online;

Database altered.

That essentially completed the recovery. It was very lucky that the database had not crashed; when you run into this kind of problem, never shut the database down blindly. And remember that database backups matter more than anything.

rm -rf delete datafile recovery

An employee left the company on bad terms. After the system engineer resigned, his VPN access was not revoked in time and the accounts on the database server (Linux 6.5, Oracle 11.2.0.1) were not changed, so he logged in and ran rm on ORACLE_BASE. Unfortunately the ORADATA directory was inside it, and worse still, every datafile was inside that. In other words, the database was completely deleted and there was no backup of any kind. A friend asked me for support. In the end we were lucky: the files had not been overwritten and the inodes were still intact, so extundelete recovered all the datafiles, control files, and redo logs (see "extundelete恢复Linux被删除文件"), the database opened cleanly, and nothing was lost, a near-perfect recovery.

[root@DB1 tmp]# tar xvf extundelete-0.2.4.tar 
extundelete-0.2.4/
extundelete-0.2.4/acinclude.m4
extundelete-0.2.4/missing
extundelete-0.2.4/autogen.sh
extundelete-0.2.4/aclocal.m4
extundelete-0.2.4/configure
extundelete-0.2.4/LICENSE
extundelete-0.2.4/README
extundelete-0.2.4/install-sh
extundelete-0.2.4/config.h.in
extundelete-0.2.4/src/
extundelete-0.2.4/src/extundelete.cc
extundelete-0.2.4/src/block.h
extundelete-0.2.4/src/kernel-jbd.h
extundelete-0.2.4/src/insertionops.cc
extundelete-0.2.4/src/block.c
extundelete-0.2.4/src/cli.cc
extundelete-0.2.4/src/extundelete-priv.h
extundelete-0.2.4/src/extundelete.h
extundelete-0.2.4/src/jfs_compat.h
extundelete-0.2.4/src/Makefile.in
extundelete-0.2.4/src/Makefile.am
extundelete-0.2.4/configure.ac
extundelete-0.2.4/depcomp
extundelete-0.2.4/Makefile.in
extundelete-0.2.4/Makefile.am
[root@DB1 tmp]# cd extundelete-0.2.4
[root@DB1 extundelete-0.2.4]# ./configure 
Configuring extundelete 0.2.4
Writing generated files to disk
[root@DB1 extundelete-0.2.4]# make && make install
make -s all-recursive
Making all in src
Making install in src
  /usr/bin/install -c extundelete '/usr/local/bin'
[root@DB1 extundelete-0.2.4]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3       244G   11G  221G   5% /
tmpfs            16G   72K   16G   1% /dev/shm
/dev/sda1       190M   62M  119M  35% /boot
/dev/sdb1       2.0T   71M  1.9T   1% /home
[root@DB1 extundelete-0.2.4]# umount /dev/sdb1
umount: /home: device is busy.
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1))
[root@DB1 extundelete-0.2.4]# fuser -m -u /home
/home:                3914c(oracle)  8372c(oracle)
[root@DB1 extundelete-0.2.4]# kill -9 3914
[root@DB1 extundelete-0.2.4]# fuser -m -u /home
/home:                8372c(oracle)
[root@DB1 extundelete-0.2.4]# kill -9 8372
[root@DB1 extundelete-0.2.4]# fuser -m -u /home
[root@DB1 extundelete-0.2.4]# umount /dev/sdb1
[root@DB1 extundelete-0.2.4]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3       244G   11G  221G   5% /
tmpfs            16G   72K   16G   1% /dev/shm
/dev/sda1       190M   62M  119M  35% /boot
[root@DB1 extundelete-0.2.4]# extundelete /dev/sdb1 --restore-all
NOTICE: Extended attributes are not restored.
Loading filesystem metadata ... 16384 groups loaded.
Loading journal descriptors ... 26542 descriptors loaded.
Searching for recoverable inodes in directory / ... 
18896 recoverable inodes found.
Looking through the directory structure for deleted files ... 
2 recoverable inodes still lost.
Unable to restore inode 43778050 (file.43778050): Space has been reallocated.
[root@DB1 extundelete-0.2.4]# ls
acinclude.m4  autogen.sh  config.h.in  config.status  configure.ac  install-sh  Makefile     Makefile.in 
aclocal.m4    config.h    config.log   configure      depcomp       LICENSE     Makefile.am  missing    
[root@DB1 extundelete-0.2.4]# cd RECOVERED_FILES/
[root@DB1 RECOVERED_FILES]# ls
app  file.43778051  oracle  oraInventory
[root@DB1 RECOVERED_FILES]# cd app
[root@DB1 app]# ls
admin  cfgtoollogs  diag  oracle  oradata  orcl  ORCL
[root@DB1 app]# cd oradata
[root@DB1 oradata]# ls
orcl
[root@DB1 oradata]# cd orcl
[root@DB1 orcl]# ls
control01.ctl  redo01.log  redo02.log  redo03.log  sysaux01.dbf  system01.dbf  undotbs01.dbf  users01.dbf
[root@DB1 orcl]# ls -ltr
total 2908776
-rw-r--r--. 1 root root  734011392 Nov 18 02:06 system01.dbf
-rw-r--r--. 1 root root 1069555712 Nov 18 02:06 sysaux01.dbf
-rw-r--r--. 1 root root  120594432 Nov 18 02:06 undotbs01.dbf
-rw-r--r--. 1 root root  887365632 Nov 18 02:06 users01.dbf
-rw-r--r--. 1 root root    9748480 Nov 18 02:06 control01.ctl
-rw-r--r--. 1 root root   52429312 Nov 18 02:06 redo01.log
-rw-r--r--. 1 root root   52429312 Nov 18 02:06 redo02.log
-rw-r--r--. 1 root root   52429312 Nov 18 02:06 redo03.log
[root@DB1 orcl]# 
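
From here the recovered tree still has to be copied back and its ownership fixed before the instance can be started. A hedged sketch, assuming /dev/sdb1 is remounted on /home, the original ORACLE_BASE lived under /home/app (as the recovered directory layout suggests), and the oracle software owner's group is oinstall:

mount /dev/sdb1 /home
cp -a RECOVERED_FILES/app /home/
chown -R oracle:oinstall /home/app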

Once again, a reminder to everyone: database backups matter more than anything, and you need to guard against human malice as well as natural disasters. I hope we never hear of this kind of incident in the community again.