How to reclaim free space from a mounted drive on Red Hat 7

In our VM infrastructure, we have clustered hosts going to a SAN.

What I am trying to figure out is how much "white space" is left over after deleting files on our Red Hat servers. On our Windows servers we use sdelete and that clears up the problem, but with Linux I am struggling to find a solution.

I am defining "white space" as the sectors (I think that is the right term) left over that are not zeroed out, which SSD drives have to zero out before they can write to them.

One thing I will point out: when it comes to Linux I know enough to be dangerous, but I am not a super user.

Looking over the drives and partitions:

[root@rhserver1-DATA10 /]# fdisk -l

Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0005d52e

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1026047      512000   83  Linux
/dev/sda2         1026048   104857599    51915776   8e  Linux LVM

Disk /dev/sdb: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/rhel_rhserver1--data10-root: 51.0 GB, 50964987904 bytes, 99540992 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/rhel_rhserver1--data10-swap: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Now looking at the disk usage:

[root@rhserver1-DATA10 /]# df -h
Filesystem                              Size  Used Avail Use% Mounted on
/dev/mapper/rhel_rhserver1--data10-root   48G  6.1G   42G  13% /
devtmpfs                                906M     0  906M   0% /dev
tmpfs                                   921M  340K  920M   1% /dev/shm
tmpfs                                   921M   90M  831M  10% /run
tmpfs                                   921M     0  921M   0% /sys/fs/cgroup
/dev/sdb                                 50G  3.5G   44G   8% /ACMS01Backup
/dev/sda1                               497M  210M  288M  43% /boot
tmpfs                                   185M   20K  185M   1% /run/user/1000
tmpfs                                   185M     0  185M   0% /run/user/1002

After many hours of googling I found this; I think it is showing me how much "white space" is available to be cleared up.

[root@rhserver1-DATA10 /]#  parted /dev/sda unit MB print free | grep 'Free Space' | tail -n1 | awk '{print $3}'
1.02MB
[root@rhserver1-DATA10 /]#  parted /dev/sda unit '%' print free | grep 'Free Space' | tail -n1 | awk '{print $3}'
0.00%

I think that is a reasonable output for a 497M partition.

So now I want to do the same thing, only on my mounted drive (I think it is mounted).

 parted /dev/mapper/rhel_rhserver1--data10-root unit MB print free | grep 'Free Space' | tail -n1 | awk '{print $3}'
 parted /dev/mapper/rhel_rhserver1--data10-root unit '%' print free | grep 'Free Space' | tail -n1 | awk '{print $3}'

Which gives me nothing.
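I am guessing this is because the mapper device is an LVM logical volume with the filesystem written straight onto it, so there is no partition table for parted to read. Would looking at the LVM layer instead show anything useful? For example (just a guess on my part, and if I am reading the device-mapper name right the volume group is rhel_rhserver1-data10; the VFree column shows unallocated extents, which may not be the same thing as the "white space" I described above):

 vgs rhel_rhserver1-data10    # VFree column = space in the volume group not allocated to any LV
 lvs rhel_rhserver1-data10    # logical volumes carved out of that volume group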

My /etc/fstab:

[root@rhserver1-DATA10 /]# cat /etc/fstab
/dev/mapper/rhel_rhserver1--data10-root /                       xfs     defaults        0 0
UUID=2f97a17c-a6d5-4904-ad5c-7c16b4510201 /boot                   xfs     defaults        0 0
/dev/mapper/rhel_rhserver1--data10-swap swap                    swap    defaults        0 0
/dev/disk/by-uuid/be4c45cf-5d72-4b97-b647-2e585947041f /ACMS01Backup auto nosuid,nodev,nofail,x-gvfs-show 0 0

So my question is: am I on the right path?

Did I explain what I am looking for well?

Is there a term for "white space" that might help my googling?

I have found that I can run "fstrim -v /" on the root, but I would really like to know how much space is there.
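For reference, this is the command I mean; as I understand it, the -v flag makes it report how much it discarded, something like "/: NNN bytes trimmed" (paraphrased from the documentation, not output from my system):

 fstrim -v /    # trim unused blocks on / and report how many bytes were discarded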

Also, since these are production systems, I am trying to figure out: is fstrim I/O intensive, and should it be run during off-peak hours?
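If off-peak is the way to go, something along these lines is what I had in mind (a hypothetical cron entry, not something I have deployed):

 # /etc/cron.d/fstrim -- trim / at 03:00 every Sunday, outside business hours
 0 3 * * 0  root  /usr/sbin/fstrim -v / >> /var/log/fstrim.log 2>&1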

Is there any chance of data loss from running "fstrim -v /"?


Being able to run fstrim on the / partition would be the best solution; however, with the way your ESXi is configured, it would not be possible.

You need to be able to enable discards on both the VM and the storage device.
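As a quick sanity check (this is generic util-linux tooling, not specific to your setup), you can see whether the guest's block devices are advertising discard support at all; if these columns are all zero, fstrim has nothing to pass down to the back end:

 lsblk --discard    # non-zero DISC-GRAN / DISC-MAX means the device accepts TRIM/UNMAP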

Reducing the size of a partition or logical volume with the XFS filesystem cannot be done; this is a known limitation tracked in the Fedora/Red Hat bug tracker. If you are interested in this functionality, please contact Red Hat support, reference Red Hat Bugzilla 1062667, and provide your use case for needing XFS reduction/shrinking.

As a possible workaround in some environments, thin provisioned LVM volumes can be considered as an additional layer below the XFS filesystem.
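A minimal sketch of that layering, with placeholder volume group, pool, and size values (not taken from your system):

 # carve a thin pool out of an existing volume group, put a thin LV on top, format it with XFS
 lvcreate --type thin-pool -L 40G -n thinpool myvg
 lvcreate --type thin -V 100G --thinpool thinpool -n data myvg
 mkfs.xfs /dev/myvg/data
 # blocks freed by fstrim on that filesystem are returned to the thin pool;
 # usage can be watched with: lvs -o lv_name,data_percent myvg

The idea is that space freed inside XFS goes back to the pool for other thin volumes to use, even though the XFS filesystem itself cannot be shrunk.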

If the VMs are eager thick provisioned VMDKs, there is nothing to reclaim when you attempt to trim (technically speaking, SCSI UNMAP) your volumes.

If the back-end storage is running thin provisioning, then you also need to use lazy-zeroed VMDK files in order to reduce the storage used and make it possible for the back end to cache/dedupe the warm data.

Two possible options:

  1. When storage is provided by a remote server across a SAN, you can only discard blocks if the storage is thin provisioned.

    1. vMotion all the VMs to a different datastore and use the built-in VMware tools
    2. Connect to the ESXi Host with SSH
    3. Navigate to the Virtual Machine Folder
    4. Verify disk usage with du
    5. Run vmkfstools -K [disk] (see the sketch after this list)
    6. Verify disk usage with du
  2. Zero out the free space and then delete the file: dd if=/dev/zero of=BIGFILE bs=1024000, followed by rm -f BIGFILE (see the second sketch below)
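A rough outline of the ESXi-side steps from option 1 (the sketch referenced above); the datastore and VMDK names are placeholders, and vmkfstools -K (punchzero) needs the disk to not be in use, so power the VM off first:

 # on the ESXi host, over SSH
 cd /vmfs/volumes/datastore1/rhserver1-DATA10
 du -h rhserver1-DATA10-flat.vmdk       # usage before
 vmkfstools -K rhserver1-DATA10.vmdk    # punch out the zeroed blocks
 du -h rhserver1-DATA10-flat.vmdk       # usage after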

From what I can tell, the dd approach does the same thing as sdelete; however, it can cause a spike in disk I/O as well as take a while to run.
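A minimal sketch of that zero-fill approach, assuming there is enough headroom that temporarily filling the filesystem will not disrupt anything running on the VM:

 # fill the free space with zeros so the hypervisor/SAN can reclaim it, then delete the file
 dd if=/dev/zero of=/BIGFILE bs=1024000    # dd exits with an error once the filesystem is full
 sync                                      # make sure the zeros are actually written out
 rm -f /BIGFILE
 sync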

It is something to try overnight.

Neither option is the best, but reformatting every VM to get ext3 or ext4 does not sound feasible.

What you might be able to do is set up an affinity rule for all Linux VMs and use option 1 from above.


I tried to do the same thing a couple of weeks ago and could not find out how. I am sharing the official statement from the Red Hat support portal.

It is currently not possible to reduce the size of a partition or logical volume with the xfs filesystem. If you are interested in this functionality please contact Red Hat support and reference Red Hat bugzilla 1062667, and provide your use-case for needing XFS reduction / shrinking. As a possible workaround in some environments, thin provisioned LVM volumes can be considered as an additional layer below the XFS filesystem.

Good luck!!