Shrinking an XFS filesystem on LVM and extending swap


Shrinking an XFS filesystem on an LVM volume (/home reduced from 100G to 80G)

[root@xifenfei ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root  449G  6.0G  443G   2% /
devtmpfs                63G     0   63G   0% /dev
tmpfs                   63G     0   63G   0% /dev/shm
tmpfs                   63G   20M   63G   1% /run
tmpfs                   63G     0   63G   0% /sys/fs/cgroup
/dev/mapper/rhel-home  100G   38M  100G   1% /home
/dev/sda2             1014M  165M  850M  17% /boot
/dev/sda1              200M  9.8M  191M   5% /boot/efi
tmpfs                   13G  4.0K   13G   1% /run/user/42
tmpfs                   13G   32K   13G   1% /run/user/0
/dev/sr0               4.2G  4.2G     0 100% /media

[root@xifenfei u01]# xfsdump -f /home.xfsdump /home
xfsdump: using file dump (drive_simple) strategy
xfsdump: version 3.1.7 (dump format 3.0) - type ^C for status and control

 ============================= dump label dialog ==============================

please enter label for this dump session (timeout in 300 sec)
 -> home
session label entered: "tar czvf /home.tar.gz /home
home"

 --------------------------------- end dialog ---------------------------------

xfsdump: level 0 dump of xifenfei:/home
xfsdump: dump date: Fri Jun 25 11:37:13 2021
xfsdump: session id: 4d75008e-9927-417d-9722-52d13bb89eb0
xfsdump: session label: 
xfsdump: ino map phase 1: constructing initial dump list
xfsdump: ino map phase 2: skipping (no pruning necessary)
xfsdump: ino map phase 3: skipping (only one dump stream)
xfsdump: ino map construction complete
xfsdump: estimated dump size: 4828224 bytes
xfsdump: /var/lib/xfsdump/inventory created

 ============================= media label dialog =============================

please enter label for media in drive 0 (timeout in 300 sec)
 -> home
media label entered: "home"

 --------------------------------- end dialog ---------------------------------

xfsdump: creating dump session media file 0 (media 0, file 0)
xfsdump: dumping ino map
xfsdump: dumping directories
xfsdump: dumping non-directory files
xfsdump: ending media file
xfsdump: media file size 4732672 bytes
xfsdump: dump size (non-dir files) : 4588480 bytes
xfsdump: dump complete: 4 seconds elapsed
xfsdump: Dump Summary:
xfsdump:   stream 0 /home.xfsdump OK (success)
xfsdump: Dump Status: SUCCESS
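
Before unmounting /home, the dump can be verified if desired; xfsrestore with -t lists the contents of a dump file without restoring anything (assuming the dump file path used above):

# List the files recorded in the dump without writing anything
xfsrestore -t -f /home.xfsdump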

[root@xifenfei u01]# umount /home
[root@xifenfei u01]# lvreduce -L 80G /dev/mapper/rhel-home
  WARNING: Reducing active logical volume to 80.00 GiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce rhel/home? [y/n]: y
  Size of logical volume rhel/home changed from 100.00 GiB (25600 extents) to 80.00 GiB (20480 extents).
  Logical volume rhel/home successfully resized.

[root@xifenfei u01]# mkfs.xfs -f /dev/mapper/rhel-home
meta-data=/dev/mapper/rhel-home  isize=512    agcount=16, agsize=1310720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=20971520, imaxpct=25
         =                       sunit=64     swidth=64 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=10240, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@xifenfei u01]# mount /home
[root@xifenfei u01]# xfsrestore -f /home.xfsdump /home
xfsrestore: using file dump (drive_simple) strategy
xfsrestore: version 3.1.7 (dump format 3.0) - type ^C for status and control
xfsrestore: searching media for dump
xfsrestore: examining media file 0
xfsrestore: dump description: 
xfsrestore: hostname: xifenfei
xfsrestore: mount point: /home
xfsrestore: volume: /dev/mapper/rhel-home
xfsrestore: session time: Fri Jun 25 11:37:13 2021
xfsrestore: level: 0
xfsrestore: session label: "tar czvf /home.tar.gz /home
home"
xfsrestore: media label: "home"
xfsrestore: file system id: b996cff9-332b-4c07-96e1-8335a1f23627
xfsrestore: session id: 4d75008e-9927-417d-9722-52d13bb89eb0
xfsrestore: media id: 6094b9b5-a45f-4638-a0e2-c1b982ead67b
xfsrestore: using online session inventory
xfsrestore: searching media for directory dump
xfsrestore: reading directories
xfsrestore: 119 directories and 188 entries processed
xfsrestore: directory post-processing
xfsrestore: restoring non-directory files
xfsrestore: restore complete: 0 seconds elapsed
xfsrestore: Restore Summary:
xfsrestore:   stream 0 /home.xfsdump OK (success)
xfsrestore: Restore Status: SUCCESS
[root@xifenfei u01]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root  449G   14G  435G   4% /
devtmpfs                63G     0   63G   0% /dev
tmpfs                   63G   20M   63G   1% /run
tmpfs                   63G     0   63G   0% /sys/fs/cgroup
/dev/sda2             1014M  165M  850M  17% /boot
/dev/sda1              200M  9.8M  191M   5% /boot/efi
tmpfs                   13G  4.0K   13G   1% /run/user/42
tmpfs                   13G   28K   13G   1% /run/user/0
/dev/sr0               4.2G  4.2G     0 100% /media
tmpfs                   63G     0   63G   0% /dev/shm
/dev/mapper/rhel-home   80G   38M   80G   1% /home

An LVM volume carrying an XFS filesystem cannot be shrunk directly: the only option is to dump the contents of /home with xfsdump, shrink the LV and recreate the XFS filesystem, and then restore the data with xfsrestore.
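
Condensed into one sequence, the workflow above looks roughly like this (a sketch using the same LV, mount point and sizes as in this example; adjust paths and sizes for your own environment):

xfsdump -f /home.xfsdump /home          # level-0 dump of the mounted filesystem
umount /home
lvreduce -L 80G /dev/mapper/rhel-home   # shrink the LV (the old filesystem is lost)
mkfs.xfs -f /dev/mapper/rhel-home       # recreate XFS on the smaller LV
mount /home
xfsrestore -f /home.xfsdump /home       # restore the dumped contents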

Extending LVM swap space (swap grown from 8G to 16G)

[root@xifenfei home]# free -m
              total        used        free      shared  buff/cache   available
Mem:         128355       86907       26110         274       15338       37632
Swap:         8192           0        8192
[root@xifenfei home]# lvextend -L 16GB /dev/rhel/swap
  Size of logical volume rhel/swap changed from 8.00 GiB (2048 extents) to 16.00 GiB (4096 extents).
  Logical volume rhel/swap successfully resized.
[root@xifenfei home]# sync;sync
[root@xifenfei home]# swapoff /dev/rhel/swap
[root@xifenfei home]# mkswap /dev/rhel/swap 
mkswap: /dev/rhel/swap: warning: wiping old swap signature.
Setting up swapspace version 1, size = 16777212 KiB
no label, UUID=8d79ccf4-1796-49c9-968d-23abb67bc6eb
[root@xifenfei home]# swapon /dev/rhel/swap 
[root@xifenfei home]# free -m
              total        used        free      shared  buff/cache   available
Mem:         128355       86907       26110         274       15338       37632
Swap:         16383           0       16383
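
Note that mkswap writes a new swap signature with a new UUID (shown in the output above). If /etc/fstab activates this swap area by LV path (/dev/rhel/swap or /dev/mapper/rhel-swap) nothing needs to change, but if it references the swap by UUID the entry has to be updated. A quick check:

# Compare the fstab entry with the UUID reported by mkswap/blkid
grep -i swap /etc/fstab
blkid /dev/rhel/swap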

Online extension of an ext4 filesystem on LVM


Scanning for new disks on Linux

[root@xifenfei ~]#  ls /sys/class/scsi_host/
host0  host1  host2
[root@xifenfei ~]# echo '- - -'  > /sys/class/scsi_host/host0/scan
[root@xifenfei ~]# echo '- - -'  > /sys/class/scsi_host/host1/scan
[root@xifenfei ~]# echo '- - -'  > /sys/class/scsi_host/host2/scan
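
With many HBAs the same rescan can be issued to every SCSI host in one loop (equivalent to the three commands above):

for h in /sys/class/scsi_host/host*; do
    echo '- - -' > "$h/scan"    # ask this SCSI host to rescan for new devices
done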

Extending the VG

[root@xifenfei ~]# pvcreate /dev/sdc1
  Physical volume "/dev/sdc1" successfully created
[root@xifenfei ~]# vgs
  VG            #PV #LV #SN Attr   VSize   VFree  
  vg_xifenfei   1   4   0 wz--n- 499.51g 584.00m
[root@xifenfei ~]# vgextend vg_xifenfei /dev/sdc1
  Volume group "vg_xifenfei" successfully extended
[root@xifenfei ~]# vgs
  VG            #PV #LV #SN Attr   VSize   VFree  
  vg_xifenfei   2   4   0 wz--n- 999.50g 500.56g

Extending the LV

[root@xifenfei ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_xifenfei-lv_root
                       50G  6.4G   41G  14% /
tmpfs                  63G     0   63G   0% /dev/shm
/dev/sda1             477M   84M  364M  19% /boot
/dev/mapper/vg_xifenfei-lv_home
                      1.9G   29M  1.8G   2% /home
/dev/mapper/vg_xifenfei-lvu01
                      436G  335G   80G  81% /u01
/dev/sdb1             985G  462G  473G  50% /oracle_data
[root@xifenfei ~]# lvresize -L +500G /dev/mapper/vg_xifenfei-lvu01
  Size of logical volume vg_xifenfei/lvu01 changed from 443.00 GiB (113408 extents) to 943.00 GiB (241408 extents).
  Logical volume lvu01 successfully resized.
[root@xifenfei ~]# resize2fs /dev/mapper/vg_xifenfei-lvu01
resize2fs 1.43-WIP (20-Jun-2013)
Filesystem at /dev/mapper/vg_xifenfei-lvu01 is mounted on /u01; on-line resizing required
old_desc_blocks = 28, new_desc_blocks = 59
The filesystem on /dev/mapper/vg_xifenfei-lvu01 is now 247201792 blocks long.

[root@xifenfei ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_xifenfei-lv_root
                       50G  6.4G   41G  14% /
tmpfs                  63G     0   63G   0% /dev/shm
/dev/sda1             477M   84M  364M  19% /boot
/dev/mapper/vg_xifenfei-lv_home
                      1.9G   29M  1.8G   2% /home
/dev/mapper/vg_xifenfei-lvu01
                      929G  335G  552G  38% /u01
/dev/sdb1             985G  462G  473G  50% /oracle_data
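
On reasonably recent LVM versions the last two steps can be combined: lvresize/lvextend with the -r (--resizefs) option grows the filesystem (through fsadm, which calls resize2fs for ext4) in the same operation, for example:

# Grow the LV by 500G and resize the ext4 filesystem in one step
lvextend -r -L +500G /dev/mapper/vg_xifenfei-lvu01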

Using permission.pl to repair ORACLE directory permissions changed by an accidental operation


In past cases we have repeatedly seen Oracle RAC clusters fail to start because the permissions of RAC-related directories were changed by an accidental operation. The solution officially recommended in the past was, in most situations, to delete the affected node and add it back. While going through recent MOS notes I came across Script to capture and restore file permission in a directory (for eg. ORACLE_HOME) (Doc ID 1515018.1): permission.pl records the permissions on a healthy node, and the generated commands are then executed on the faulty node (note that the hostname has to be replaced). A quick test of the script gives a rough idea of how it works:
1. Upload the script and execute it

[root@localhost tmp]# chmod +x permission.pl 
[root@localhost tmp]# ./permission.pl /u01

2. The corresponding files are generated

[root@localhost tmp]# ls -l *perm*
-rwxr-xr-x. 1 root root     2451 4月  26 14:23 permission.pl
-rw-r--r--. 1 root root  6918174 4月  27 10:56 permission-二-4月-27-10-55-19-2021
-rw-r--r--. 1 root root 13442294 4月  27 10:56 restore-perm-二-4月-27-10-55-19-2021.cmd

3. Check the contents of the generated files
permission-


restore-perm-
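
permission.pl itself is Oracle's script from Doc ID 1515018.1 and is not reproduced here. Purely as an illustration of the idea it implements (walking a directory tree, recording the owner, group and mode of every entry, and replaying them as chown/chmod commands on the faulty node), a minimal hypothetical shell sketch could look like this:

#!/bin/bash
# capture_perms.sh <directory> - illustrative sketch only, not Oracle's permission.pl
# Records owner, group and octal mode of every entry under the given directory
# and emits a restore script made of chown/chmod commands.
DIR=$1
find "$DIR" | while read -r f; do
    printf 'chown %s:%s "%s"\nchmod %s "%s"\n' \
        "$(stat -c %U "$f")" "$(stat -c %G "$f")" "$f" \
        "$(stat -c %a "$f")" "$f"
done > restore-perm.cmd

Running the generated restore-perm.cmd on the damaged node then reapplies the recorded ownerships and modes, which is essentially what the restore-perm-*.cmd file produced by permission.pl appears to be for.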

Recovering files deleted by the incaseformat virus


Overnight, a large number of people reported the same thing: on their PCs every file on every drive other than C: had been deleted, and a text file named "incaseformat" may have been created on the affected drives.


Scanning the machine with 360 antivirus identifies the malware.

The cause was confirmed to be a virus on the machine: the malware deletes files and directories through the DeleteFileA and RemoveDirectory API calls. When first executed it copies itself to C:\WINDOWS\tsay.exe, registers a startup entry and exits, waiting for a reboot; about 20 seconds after the next boot it begins deleting files. If you find that files have disappeared but disk usage still looks normal, do not reboot; back up the database first. If the machine has already been rebooted and you cannot recover the data yourself, do not perform any write operations on that partition; in theory the vast majority of the data is recoverable. If it cannot be recovered, or the recovered files contain large numbers of corrupt blocks and cannot be used, you can contact us for the most complete recovery possible.

-bash: /bin/rm: Argument list too long


When deleting a very large number of files on Linux, rm -rf * can fail with -bash: /bin/rm: Argument list too long; the job can be done with find + xargs instead:

[grid@xifenfei audit]$ rm -rf +ASM2_ora_1*_2017*.aud
-bash: /bin/rm: Argument list too long
[grid@xifenfei audit]$ ls|wc -l
111650450
[grid@xifenfei audit]$ find ./ -name "*.aud" |xargs rm -r
[grid@xifenfei audit]$ ls
[grid@xifenfei audit]$