How to Back Up and Restore an XFS Filesystem in RHEL 8

Introduction

In this guide, we will cover two scenarios: how to back up and restore an XFS filesystem, and how to reclaim space from an XFS filesystem that was grown beyond the requirement.

During the initial server setup, the /u01 filesystem was created on a logical volume 100 GB in size. As the filesystem filled up, the requirement was to extend it by an additional 100 GB, for a total of 200 GB. But by mistake an additional 200 GB disk was added, so the filesystem has now grown to a total of 300 GB.

There is no need for 300 GB on the /u01 filesystem, which holds only the Oracle binary installation files. So it’s time to reclaim the extra 100 GB and bring the filesystem down to 200 GB as per the business requirement. Since an XFS filesystem does not support shrinking, we need to plan for a backup and restore. A backup team may be able to restore the content in a short span of time, but this can also be done with the default xfsdump utility.

Let’s install the required package and start with our demonstration.

Installing the XFS Backup Package

Let’s install the xfsdump package and start with the backup and restore.

# yum -y install xfsdump

Use yum or dnf to install the package.
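
On RHEL 8, dnf is the default package manager and works the same way here; a quick sketch to install and verify:

$ dnf -y install xfsdump
$ rpm -q xfsdump      # confirm the package is installed and show its version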

Creating a Temporary Backup Location

To start with the backup and restore, we don’t have enough free space in any existing mount point. The /u01 filesystem holds around 76 GB of data, so let’s add a new 100 GB disk for the backup mount point.

The newly added 100 GB disk appears as /dev/sdr.
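
Before touching the disk, it’s worth confirming the kernel actually sees it; a quick check, assuming the disk came in as /dev/sdr:

$ lsblk /dev/sdr      # list the new disk and any partitions on it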

Now let’s create a physical volume, a volume group, and a logical volume for the backup mount point.

$ pvcreate /dev/sdr
$ vgcreate vg_backup /dev/sdr
$ lvcreate -l 100%FREE -n lv_backup vg_backup

Output for reference

root:prod-srv-1 ~ $ pvcreate /dev/sdr
   Physical volume "/dev/sdr" successfully created.
root:prod-srv-1 ~ $ vgcreate vg_backup /dev/sdr
   Volume group "vg_backup" successfully created
root:prod-srv-1 ~ $ lvcreate -l 100%FREE -n lv_backup vg_backup
   Logical volume "lv_backup" created.
root:prod-srv-1 ~ $

Create the filesystem on the newly created logical volume. Then create a directory and mount the filesystem.

$ mkfs.xfs /dev/mapper/vg_backup-lv_backup
$ mkdir /backup
$ mount /dev/mapper/vg_backup-lv_backup /backup/

Output for reference

root:prod-srv-1 $ mkfs.xfs /dev/mapper/vg_backup-lv_backup
 meta-data=/dev/mapper/vg_backup-lv_backup isize=512    agcount=4, agsize=6553344 blks
          =                       sectsz=512   attr=2, projid32bit=1
          =                       crc=1        finobt=0, sparse=0
 data     =                       bsize=4096   blocks=26213376, imaxpct=25
          =                       sunit=0      swidth=0 blks
 naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
 log      =internal log           bsize=4096   blocks=12799, version=2
          =                       sectsz=512   sunit=0 blks, lazy-count=1
 realtime =none                   extsz=4096   blocks=0, rtextents=0

root:prod-srv-1 $ mkdir /backup
root:prod-srv-1 $ mount /dev/mapper/vg_backup-lv_backup /backup/

List the mounted filesystem to confirm.

root:prod-srv-1 $ df -hP
 Filesystem                      Size  Used Avail Use% Mounted on
 /dev/mapper/vg01-root           6.0G  3.1G  3.0G  51% /
 /dev/sda1                      1014M  356M  659M  36% /boot
 /dev/mapper/vg01-home           6.0G   36M  6.0G   1% /home
 /dev/mapper/vg01-tmp            6.0G   83M  6.0G   2% /tmp
 tmpfs                           786M     0  786M   0% /run/user/0
 /dev/mapper/vg_u01-lv_u01       300G   76G  225G  26% /u01
 /dev/mapper/vg_backup-lv_backup 100G   33M  100G   1% /backup
root:prod-srv-1 $

It’s time to start the backup.

Starting the XFS Filesystem Backup

Take the backup of the /u01 filesystem into the newly created /backup filesystem, as a file named u01_backup.img. The 76 GB dump completed in just under ten minutes.

$ xfsdump -f /backup/u01_backup.img /u01

To proceed with the dump, press Enter when prompted for the session label and again for the media label. Refer to the output below.

root:prod-srv-1 $ xfsdump -f /backup/u01_backup.img /u01
 xfsdump: using file dump (drive_simple) strategy
 xfsdump: version 3.1.7 (dump format 3.0) - type ^C for status and control
 ============================= dump label dialog ==============================
 please enter label for this dump session (timeout in 300 sec)
 ->
 session label entered: ""
 --------------------------------- end dialog ---------------------------------
 xfsdump: WARNING: no session label specified
 xfsdump: WARNING: most recent level 0 dump was interrupted, but not resuming that dump since resume (-R) option not specified
 xfsdump: level 0 dump of prod-srv-1:/u01
 xfsdump: dump date: Sun Jan 26 09:46:25 2020
 xfsdump: session id: ce19b9e3-611b-471d-a226-c7955069326c
 xfsdump: session label: ""
 xfsdump: ino map phase 1: constructing initial dump list
 xfsdump: ino map phase 2: skipping (no pruning necessary)
 xfsdump: ino map phase 3: skipping (only one dump stream)
 xfsdump: ino map construction complete
 xfsdump: estimated dump size: 80434353856 bytes
 ============================= media label dialog =============================
 please enter label for media in drive 0 (timeout in 300 sec)
 ->
 media label entered: ""
 --------------------------------- end dialog ---------------------------------
 xfsdump: WARNING: no media label specified
 xfsdump: creating dump session media file 0 (media 0, file 0)
 xfsdump: dumping ino map
 xfsdump: dumping directories
 xfsdump: dumping non-directory files
 xfsdump: ending media file
 xfsdump: media file size 80466740032 bytes
 xfsdump: dump size (non-dir files) : 80242784808 bytes
 xfsdump: dump complete: 578 seconds elapsed
 xfsdump: Dump Summary:
 xfsdump:   stream 0 /backup/u01_backup.img OK (success)
 xfsdump: Dump Status: SUCCESS
root:prod-srv-1 $
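
If you prefer to skip the interactive prompts, xfsdump can take the session and media labels on the command line; a quick sketch with hypothetical label names:

$ xfsdump -L u01_level0 -M u01_media -f /backup/u01_backup.img /u01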

Once the dump completes successfully, navigate to the backup location to confirm the size of the backup file with the .img extension.

root:prod-srv-1  backup $ ls -lthr
 total 75G
 -rw-r-----. 1 root root 75G Jan 26 09:56 u01_backup.img
root:prod-srv-1  backup $
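
You can also confirm the dump is readable by listing its contents without restoring anything:

$ xfsrestore -t -f /backup/u01_backup.img      # table of contents only, nothing is written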

Once the backup is confirmed, unmount the filesystem that is going to be restored.

root:prod-srv-1 ~ $ umount /u01/
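
If the unmount fails with a "target is busy" error, list the processes holding the mount and stop them first; fuser can do both:

$ fuser -vm /u01      # list processes using the mount
$ fuser -km /u01      # forcefully kill them (use with care)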

Now we move on to our second scenario. If you only need to back up and restore, skip the logical volume reduction and continue with Restoring from the Backup.

Reducing the Logical Volume

As discussed in the introduction, let’s start reclaiming the space from the XFS filesystem that was grown beyond the requirement.

Reduce the logical volume from 300 GB to 200 GB. Then recreate the XFS filesystem on the reduced logical volume using the force option. This wipes the volume, which is exactly why we took the backup first.

$ lvreduce -L -100G /dev/vg_u01/lv_u01
$ mkfs.xfs -f /dev/mapper/vg_u01-lv_u01

Output for reference

root:prod-srv-1 ~ $ lvreduce -L -100G /dev/vg_u01/lv_u01
   WARNING: Reducing active logical volume to 199.99 GiB.
   THIS MAY DESTROY YOUR DATA (filesystem etc.)
   Do you really want to reduce vg_u01/lv_u01? [y/n]: y
   Size of logical volume vg_u01/lv_u01 changed from 299.99 GiB (76798 extents) to 199.99 GiB (51198 extents).
   Logical volume vg_u01/lv_u01 successfully resized.

root:prod-srv-1  ~ $ mkfs.xfs -f /dev/mapper/vg_u01-lv_u01
 meta-data=/dev/mapper/vg_u01-lv_u01 isize=512    agcount=4, agsize=13106688 blks
          =                       sectsz=512   attr=2, projid32bit=1
          =                       crc=1        finobt=0, sparse=0
 data     =                       bsize=4096   blocks=52426752, imaxpct=25
          =                       sunit=0      swidth=0 blks
 naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
 log      =internal log           bsize=4096   blocks=25599, version=2
          =                       sectsz=512   sunit=0 blks, lazy-count=1
 realtime =none                   extsz=4096   blocks=0, rtextents=0
root:prod-srv-1  ~ $

Mount the filesystem and restore the SELinux context on the newly created mount point. If specific ownership is required, make sure to set it before starting the restore.

$ mount /dev/mapper/vg_u01-lv_u01 /u01/
$ restorecon -Rv /u01/
$ chown oracle:oinstall /u01/
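
A quick sanity check that the context and ownership landed as expected (the oracle:oinstall owner is specific to this setup):

$ ls -dZ /u01/        # show the SELinux context of the mount point
$ ls -ld /u01/        # show ownership and permissions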

Restoring from the Backup

Now restore from the dump file /backup/u01_backup.img.

$ xfsrestore -f /backup/u01_backup.img /u01/

Output for reference

root:prod-srv-1  ~ $ xfsrestore -f /backup/u01_backup.img /u01/
 xfsrestore: using file dump (drive_simple) strategy
 xfsrestore: version 3.1.7 (dump format 3.0) - type ^C for status and control
 xfsrestore: searching media for dump
 xfsrestore: examining media file 0
 xfsrestore: dump description:
 xfsrestore: hostname: prod-srv-1
 xfsrestore: mount point: /u01
 xfsrestore: volume: /dev/mapper/vg_u01-lv_u01
 xfsrestore: session time: Sun Jan 26 09:46:25 2020
 xfsrestore: level: 0
 xfsrestore: session label: ""
 xfsrestore: media label: ""
 xfsrestore: file system id: 41c17873-663a-4aa3-b2ea-c0b1c14e2900
 xfsrestore: session id: ce19b9e3-611b-471d-a226-c7955069326c
 xfsrestore: media id: 0a68d132-7bc6-4f66-995e-9834c20e3314
 xfsrestore: using online session inventory
 xfsrestore: searching media for directory dump
 xfsrestore: reading directories
 xfsrestore: 11088 directories and 330080 entries processed
 xfsrestore: directory post-processing
 xfsrestore: restoring non-directory files
 xfsrestore: restore complete: 201 seconds elapsed
 xfsrestore: Restore Summary:
 xfsrestore:   stream 0 /backup/u01_backup.img OK (success)
 xfsrestore: Restore Status: SUCCESS
root:prod-srv-1  ~ $

Verify the restored mount point.

root:prod-srv-1  ~ $ df -hP /u01/
 Filesystem                 Size  Used Avail Use% Mounted on
 /dev/mapper/vg_u01-lv_u01  200G   76G  125G  38% /u01
root:prod-srv-1  ~ $
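
Going forward, xfsdump also supports incremental dumps via levels 0-9, so a full dump is not needed every time; a sketch with a hypothetical level 1 dump taken on top of the level 0 above:

$ xfsdump -l 1 -f /backup/u01_incr1.img /u01      # dumps only changes since the level 0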

We have successfully taken an XFS dump and restored it. However, we are not done yet: the old disk used for /u01 and the temporary backup disk still need to be reclaimed.

Reclaiming the Unused Disks

Finding the Currently Used Disk

Let us first confirm which disk is currently in use by the /dev/vg_u01/lv_u01 logical volume.

$ lvs -o +devices /dev/vg_u01/lv_u01

Output for reference

root:prod-srv-1  ~ $ lvs -o +devices /dev/vg_u01/lv_u01
 LV     VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
 lv_u01 vg_u01 -wi-ao---- 199.99g                             /dev/sdc1(0)
root:prod-srv-1  ~ $

From the above output, it’s clear that /dev/sdc1 is used for the new XFS filesystem. So let us verify and then remove the old disk, /dev/sdb1.

Verify the Unused Disk

While reclaiming a disk, we need to be absolutely sure which disk we are about to remove. So let’s list all the physical volumes, volume groups, and logical volumes, and confirm before starting the reclaim process.

$ pvs
$ vgs
$ lvs

root:prod-srv-1  ~ $ pvs
   PV         VG        Fmt  Attr PSize    PFree
   /dev/sda2  vg01      lvm2 a--    61.00g    4.00m
   /dev/sda3  vg01      lvm2 a--   <37.99g  <37.99g
   /dev/sdb1  vg_u01    lvm2 a--  <100.00g <100.00g
   /dev/sdc1  vg_u01    lvm2 a--  <200.00g    4.00m
   /dev/sdr   vg_backup lvm2 a--  <100.00g       0

root:prod-srv-1  ~ $ vgs
   VG        #PV #LV #SN Attr   VSize    VFree
   vg01        2  11   0 wz--n-   98.99g  37.99g
   vg_backup   1   1   0 wz--n- <100.00g      0
   vg_u01      2   1   0 wz--n-  299.99g 100.00g

root:prod-srv-1  ~ $ lvs
   LV            VG        Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
   home          vg01      -wi-ao----    6.00g
   root          vg01      -wi-ao----    6.00g
   swap          vg01      -wi-ao----   16.00g
   tmp           vg01      -wi-ao----    6.00g
   var           vg01      -wi-ao----    6.00g
    lv_backup     vg_backup -wi-ao---- <100.00g
   lv_u01        vg_u01    -wi-ao----  199.99g
root:prod-srv-1  ~ $

The output of pvs shows that /dev/sdb1 is entirely free (PFree equals PSize), confirming it is not in use.
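
For extra safety, inspect the physical volume directly; an unused PV reports zero allocated extents:

$ pvdisplay /dev/sdb1      # Allocated PE should read 0 for an unused disk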

Reducing the Volume Group

Once confirmed, we can reduce the VG by removing the old disk /dev/sdb1 from the vg_u01 volume group. Then run pvs, vgs, and lvs once again to confirm.

$ vgreduce vg_u01 /dev/sdb1
$ pvs
$ lvs

Output for reference

root:prod-srv-1  ~ $ vgreduce vg_u01 /dev/sdb1
   Removed "/dev/sdb1" from volume group "vg_u01"

root:prod-srv-1  ~ $ pvs
   PV         VG        Fmt  Attr PSize    PFree
   /dev/sda2  vg01      lvm2 a--    61.00g    4.00m
   /dev/sda3  vg01      lvm2 a--   <37.99g  <37.99g
   /dev/sdb1            lvm2 ---  <100.00g <100.00g
   /dev/sdc1  vg_u01    lvm2 a--  <200.00g    4.00m
   /dev/sdr   vg_backup lvm2 a--  <100.00g       0

root:prod-srv-1  ~ $ vgs
   VG        #PV #LV #SN Attr   VSize    VFree
   vg01        2  11   0 wz--n-   98.99g 37.99g
   vg_backup   1   1   0 wz--n- <100.00g     0
   vg_u01      1   1   0 wz--n- <200.00g  4.00m

root:prod-srv-1  ~ $ lvs
   LV            VG        Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
   home          vg01      -wi-ao----    6.00g
   root          vg01      -wi-ao----    6.00g
   swap          vg01      -wi-ao----   16.00g
   tmp           vg01      -wi-ao----    6.00g
   var           vg01      -wi-ao----    6.00g
    lv_backup     vg_backup -wi-ao---- <100.00g
   lv_u01        vg_u01    -wi-ao----  199.99g
root:prod-srv-1  ~ $

Then wipe the LVM label from /dev/sdb1 with pvremove, and delete the sdb1 partition using fdisk. Make sure to delete all the partitions so that only the bare /dev/sdb disk remains.

$ pvremove /dev/sdb1

root:prod-srv-1  ~ $ pvremove /dev/sdb1
   Labels on physical volume "/dev/sdb1" successfully wiped.
root:prod-srv-1  ~ $
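
A minimal fdisk session for this step, assuming /dev/sdb holds only the single sdb1 partition:

$ fdisk /dev/sdb
Command (m for help): d    # delete partition 1 (auto-selected when only one exists)
Command (m for help): w    # write the empty partition table and exit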

Remove the Temporary Backup Mount

Next, unmount the temporary backup filesystem and remove the logical volume, volume group, and physical volume created for it.

$ umount /backup
$ lvremove /dev/vg_backup/lv_backup
$ vgremove vg_backup
$ pvremove /dev/sdr

Output for reference

root:prod-srv-1  ~ $ lvremove /dev/vg_backup/lv_backup
 Do you really want to remove active logical volume vg_backup/lv_backup? [y/n]: y
   Logical volume "lv_backup" successfully removed

root:prod-srv-1  ~ $ vgremove vg_backup
   Volume group "vg_backup" successfully removed

root:prod-srv-1  ~ $ pvremove /dev/sdr
   Labels on physical volume "/dev/sdr" successfully wiped.
root:prod-srv-1  ~ $

Delete the device entries from the kernel using the echo command so that the disks are completely detached from the server and do not generate I/O errors. Alternatively, if you have downtime available, shut down the server, remove the disks, and power the server back on.
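
A sketch of the online removal, assuming the reclaimed disks are still named /dev/sdb and /dev/sdr:

$ echo 1 > /sys/block/sdb/device/delete    # detach the old /u01 disk
$ echo 1 > /sys/block/sdr/device/delete    # detach the temporary backup disk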

After the disk cleanup, or after a clean reboot, the LVM layout should look as shown below. Note that with the old disks gone, the disk that was previously /dev/sdc1 now shows up as /dev/sdb1.

root:prod-srv-1  ~ $ pvs
   PV         VG     Fmt  Attr PSize    PFree
   /dev/sda2  vg01   lvm2 a--    61.00g   4.00m
   /dev/sda3  vg01   lvm2 a--   <37.99g <37.99g
   /dev/sdb1  vg_u01 lvm2 a--  <200.00g   4.00m

root:prod-srv-1  ~ $ vgs
   VG     #PV #LV #SN Attr   VSize    VFree
   vg01     2  11   0 wz--n-   98.99g 37.99g
   vg_u01   1   1   0 wz--n- <200.00g  4.00m

root:prod-srv-1  ~ $ lvs
   LV            VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
   home          vg01   -wi-ao----   6.00g
   root          vg01   -wi-ao----   6.00g
   swap          vg01   -wi-ao----  16.00g
   tmp           vg01   -wi-ao----   6.00g
   var           vg01   -wi-ao----   6.00g
   lv_u01        vg_u01 -wi-ao---- 199.99g
root:prod-srv-1  ~ $

Troubleshooting Errors

If you try to reduce a logical volume that holds an XFS filesystem without recreating the filesystem, you may come across the error below in the kernel log. To resolve it, revert the change by growing the logical volume back to its previous size.

Jan 26 09:23:39 prod-srv-1 kernel: attempt to access beyond end of device
Jan 26 09:23:39 prod-srv-1 kernel: dm-11: rw=32, want=629129216, limit=209715200
Jan 26 09:23:39 prod-srv-1 kernel: XFS (dm-11): last sector read failed
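
To recover, grow the logical volume back so the device is at least as large as the filesystem expects, then optionally run a read-only check; a sketch assuming the 100 GB reduction shown earlier:

$ lvextend -L +100G /dev/vg_u01/lv_u01
$ xfs_repair -n /dev/mapper/vg_u01-lv_u01    # -n checks only, modifies nothing (run unmounted)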

That’s it. We have successfully completed both scenarios for resizing an XFS filesystem downward, with examples. To read more about xfsdump, the manual page covers more useful options and arguments.

Conclusion

Since an XFS filesystem cannot be shrunk in place, backing it up with xfsdump and restoring it with xfsrestore is the supported way to move it to a smaller volume. Subscribe to the newsletter to keep updated with more filesystem-related articles.