Missing disk space on SSD

dmpm

Member
Dec 29, 2023
The sda3 partition on my boot SSD is 238.98GB, but df -h shows:

/dev/mapper/pve-vm--202--disk--0 7.8G 2.4G 5.1G 32% /
/dev/mapper/pve-backup 113G 80G 28G 75% /mnt/backup

So they only add up to 120.8GB. Where's the other 118.18GB hiding?

I tried looking in /dev/mapper but that folder doesn't exist, which is confusing as df -h clearly thinks it does.
 
Let's find out. Please share
Bash:
lsblk -o+FSTYPE
pvs
lvs
It seems like this output is from inside a guest (CT?), so run those on both it and the node. Please share the guest's config too.
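For the guest's config, something like this on the node should do it (the CT ID looks like 202, judging by the df output above):
Bash:
pct config 202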
 
Oh how embarrassing! Yeah, I forgot that PBS was running under PVE. I thought it was running bare metal!

On the guest lsblk -o+FSTYPE shows:

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS FSTYPE
sda 8:0 0 223.6G 0 disk
|-sda1 8:1 0 1007K 0 part
|-sda2 8:2 0 1G 0 part
`-sda3 8:3 0 222.6G 0 part
sdb 8:16 0 465.8G 0 disk
`-sdb1 8:17 0 465.8G 0 part /mnt/sdb

sdb1 is a USB SSD.

pvs and lvs don't return anything.

On the host lsblk -o+FSTYPE shows:

NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS FSTYPE
sda                            8:0    0 223.6G  0 disk
├─sda1                         8:1    0  1007K  0 part
├─sda2                         8:2    0     1G  0 part /boot/efi   vfat
└─sda3                         8:3    0 222.6G  0 part             LVM2_member
  ├─pve-swap                 252:0    0     8G  0 lvm  [SWAP]      swap
  ├─pve-root                 252:1    0    40G  0 lvm  /           ext4
  ├─pve-data_tmeta           252:2    0     1G  0 lvm
  │ └─pve-data-tpool         252:4    0    58G  0 lvm
  │   ├─pve-data             252:5    0    58G  1 lvm
  │   ├─pve-vm--202--disk--0 252:7    0     8G  0 lvm              ext4
  │   ├─pve-vm--205--disk--0 252:8    0    15G  0 lvm              ext4
  │   └─pve-vm--204--disk--0 252:9    0    32G  0 lvm              ext4
  ├─pve-data_tdata           252:3    0    58G  0 lvm
  │ └─pve-data-tpool         252:4    0    58G  0 lvm
  │   ├─pve-data             252:5    0    58G  1 lvm
  │   ├─pve-vm--202--disk--0 252:7    0     8G  0 lvm              ext4
  │   ├─pve-vm--205--disk--0 252:8    0    15G  0 lvm              ext4
  │   └─pve-vm--204--disk--0 252:9    0    32G  0 lvm              ext4
  └─pve-backup               252:6    0 114.6G  0 lvm  /mnt/backup ext4
sdb                            8:16   0 465.8G  0 disk
└─sdb1                         8:17   0 465.8G  0 part /mnt/sdb    ext4

lvs shows

LV            VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
backup        pve -wi-ao---- 114.57g
data          pve twi-aotz--  <58.00g             31.57  2.20
root          pve -wi-ao----  40.00g
swap          pve -wi-ao----   8.00g
vm-202-disk-0 pve Vwi-aotz--   8.00g data         63.10
vm-204-disk-0 pve Vwi-a-tz--  32.00g data          9.37
vm-205-disk-0 pve Vwi-aotz--  15.00g data         68.41

and pvs shows:

PV VG Fmt Attr PSize PFree
/dev/sda3 pve lvm2 a-- <222.57g 0

So my PBS CT (202) is only using 8GB for the root with 114.6GB allocated to pve-backup for the images, but my other two CTs are using 47GB, pve-root is using 40GB, and pve-swap is using 8GB, so that's an extra 95GB.
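As a cross-check, the LVs above do add up to the whole PV once you include roughly 1GiB for the hidden thin-pool metadata spare that plain lvs doesn't list; a quick way to see the hidden pieces too (a sketch, assuming the default pve VG):
Bash:
# 8 (swap) + 40 (root) + 58 (data pool) + 1 (data_tmeta) + 114.57 (backup) ≈ 221.6GiB,
# and the remaining ~1GiB should be the hidden pool metadata spare, giving the <222.57g PV
lvs -a --units g pve   # -a also lists the hidden metadata/spare LVs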

I'm not sure why the PVE and PBS GUIs say that sda3 is 238.98GB when the shell on both says it's only 222.6GB, but that's another 16GB, and added to the 95GB it totals 117GB, which is roughly what I thought was missing when looking at lsblk in the guest.

I was looking at this because I need 10GB free on the PBS root to update it to v4 and I only have 5GB free, so I'll have to see if I can shrink the other two CTs to free up 5GB. I guess I could move the backup store to the USB SSD but I'm not sure if that would cause any issues.
 
I need the CT config too (pct config 202) to fully make sense of it. That said I upgraded my PBS CT with just 3GB free and it worked fine.
Can't you just go to node > 202 > Resources, select its root disk and then resize it via Volume Action > Resize?
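If you prefer the shell over the GUI, the same resize should also work with pct resize on the node; a minimal sketch (grow the root disk by 5G, adjust the amount to what you actually need):
Bash:
pct resize 202 rootfs +5G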
I would have recommended formatting sdb1 with something other than ext4, ZFS for example. You probably can't do snapshots of the CT because of that, right?

I'm not sure why the PVE and PBS GUIs say that sda3 is 238.98GB, when the shell for both says its only 222.6GB
GB vs GiB. Different units.
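A quick sanity check of the conversion (238.98 decimal GB expressed in binary GiB):
Bash:
echo '238.98 * 10^9 / 2^30' | bc -l   # ≈ 222.57, matching what pvs reports for sda3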

I just wanted to mention that your sda3 PV has no unallocated space. This is pretty dangerous and not the default. If your thin pool ever gets to 100% you can't easily recover. Unfortunately I can't think of a good way to fix this without re-creating it, as you can't shrink a thin pool.
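Until then it's worth keeping an eye on how full the pool and its metadata are; a minimal check from the node:
Bash:
lvs -o lv_name,data_percent,metadata_percent pve/data   # keep Data% and Meta% well below 100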
 
pct config 202 shows:

arch: amd64
cores: 2
features: mount=nfs;cifs,nesting=1
hostname: pbs
memory: 4096
mp0: /mnt/backup,mp=/mnt/backup
mp1: /mnt/sdb,mp=/mnt/sdb
net0: name=eth0,bridge=vmbr0,firewall=1,gw=10.10.18.1,hwaddr=BC:24:11:3A:F4:53,ip=10.10.18.202/24,type=veth
onboot: 1
ostype: debian
rootfs: local-lvm:vm-202-disk-0,size=8G
swap: 512

I don't think I can resize the root disk for this CT when there's no free space on the drive. Even if the GUI were able to automatically shrink pve-backup to free up some space, I can't really afford to shrink that, as it's 75% full and I need to leave some headroom for the backups. If the upgrade worked for you with 3GB free, though, it should work with my 5GB free, so I'll try it.

I currently have copies of the datastores on the USB drive (named the same as the main datastores but with USB added to the end), which I created with a Pull Sync Job, so PBS doesn't seem to mind that it's formatted as ext4, although I guess it's possible that it won't be able to create snapshots directly on the USB drive while it's ext4. I think I formatted it as ext4 because I needed to move it between two machines and that was easier than with ZFS.
 
I see, those are bind mounts. They prevent snapshots anyway (workaround) and can't be resized via the GUI like that.
The idea was that you resize the rootfs: local-lvm:vm-202-disk-0,size=8G part. This is what's mounted at / (also called root directory) inside the CT.
local-lvm is backed by the data volume/thin pool, which in your case is only 31.57% used. I'm very confident this will work just fine.
To see what I mean by "backed by", check cat /etc/pve/storage.cfg.
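On a stock PVE install that file typically looks something like this (shown for illustration, yours may differ):

dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images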
Please note that names like sdb are temporary. Do not rely on them to stay the same across boots.
 
Ah, yeah I guess I can't do snapshots but I don't think I've ever felt the need to do that, and the PBS backups work so I'm content with that for now.

Yeah, the 58GB data volume is only using 31.57% at the moment, which is mainly because the 32GB vm-204 disk is only using 9.38% of that disk. I created that VM to run a Radix test node, which I'm not running at the moment, but hopefully I will in future and I think it will need all of that 32GB, so I don't want to shrink that disk. Could I even expand the vm-202 disk without first shrinking the vm-204 disk? If I did, what would happen when the vm-204 machine wants to use all of the 32GB disk space assigned to it?

Anyway, the PBS upgrade worked fine with 5GB free space. I think it said it needed about 800MB when I did it, but I now have 3.2GB free, so it used about 1.8GB.
 
Storage over-provisioning has some risks. If the guests try to use more than you really have, it can cause corruption, for example.
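A quick way to see how far over-provisioned the pool would get is to compare the virtual sizes handed out to the thin volumes with the pool itself (a sketch, assuming the pve/data names from the lvs output above):
Bash:
lvs --units g -o lv_name,lv_size -S 'pool_lv=data' pve   # 8 + 15 + 32 = 55G handed out so far
lvs --units g -o lv_name,lv_size pve/data                # versus the <58G pool backing them

Growing vm-202-disk-0 by 5G would push the total past the pool size, which is exactly the situation where those risks apply.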
 