[SOLVED] proxmox data thinpool full - Can't find contents?

Link

Let me start by saying, I realise this is probably a dead simple issue for someone better versed in Linux storage systems than I am, so I'm hoping someone can identify what piece of the knowledge puzzle I'm missing.

# pveversion --verbose
proxmox-ve: 6.0-2 (running kernel: 5.0.18-1-pve)
pve-manager: 6.0-5 (running version: 6.0-5/f8a710d7)
pve-kernel-5.0: 6.0-6
pve-kernel-helper: 6.0-6
pve-kernel-4.15: 5.2-7
pve-kernel-5.0.18-1-pve: 5.0.18-1
pve-kernel-5.0.15-1-pve: 5.0.15-1
pve-kernel-4.15.18-4-pve: 4.15.18-23
pve-kernel-4.15.17-1-pve: 4.15.17-9
ceph-fuse: 12.2.11+dfsg1-2.1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.10-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-3
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-6
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-61
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-5
pve-cluster: 6.0-4
pve-container: 3.0-5
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-6
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-qemu-kvm: 4.0.0-3
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-6
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve1

This morning an automated backup failed due to lack of disk space:
Code:
117: 2019-11-12 07:41:35 INFO: CT Name: web
117: 2019-11-12 07:41:35 INFO: starting first sync /proc/9004/root// to /var/lib/vz/tmp/vzdumptmp4735
117: 2019-11-12 07:50:33 INFO: rsync: write failed on "/var/lib/vz/tmp/vzdumptmp4735/mnt/webbackup.orig/19/08/16/cream.******.org.uk.tar.gz": No space left on device (28)
117: 2019-11-12 07:50:33 INFO: rsync error: error in file IO (code 11) at receiver.c(374) [receiver=3.1.3]

Checking the device, I see a pool named local-lvm which shows 32.58 of 32.58 gigs used; however, the Content tab shows no contents.

The node -> Disks -> LVM-thin page shows a pool named 'data' which is 32.58 of 32.58, so I ASSUME this is the same thing.

So the problem I appear to be trying to solve is: what is local-lvm/data, and what is on it? Googling points me at lvs:
Code:
root@proxmox1:/var/lib/vz# lvs
  LV            VG       Attr       LSize   Pool    Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve      twi-aotzD- <32.58g                100.00 3.14                           
  root          pve      -wi-ao----  17.00g                                                      
  swap          pve      -wi-ao----   8.00g                                                      
  vz            pve      Vwi-a-tz-- 136.00g data           23.95


OK, I suspect that pve/data is the LV I'm looking for. The wiki tells me: "Starting from version 4.2, the logical volume “data” is a LVM-thin pool, used to store block based guest images, and /var/lib/vz is simply a directory on the root file system.". /var/lib/vz tracks with the location of the failed file write in the backup log.
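
Googling also suggests that lvs can show which LVs actually live inside a given thin pool and how much each one has allocated, with something along these lines:
Code:
# list the LVs in VG 'pve' along with the thin pool each one belongs to
# and how much of its virtual size has actually been written
lvs -o lv_name,pool_lv,lv_size,data_percent pve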

However, /var/lib/vz does not appear to contain 32 gigs of data:
Code:
root@proxmox1:/var/lib/vz# du -h
4.0K    ./images
4.0K    ./template/iso
411M    ./template/cache
4.0K    ./template/qemu
411M    ./template
4.0K    ./tmp
4.0K    ./dump
411M    .

I have checked all containers; none are configured to use local-lvm.

So, in summary: I have an lvmthin storage 'local-lvm' pointing at a thin pool called 'data', which is mounted at '/var/lib/vz' (which is using 1.5-ish gigs), while 'data' apparently has 32-ish gigs used. Clearly there is something about this that I'm missing or don't understand, but I seem to be in an "I don't know what it is I don't know" type of situation.
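
(For anyone hitting the same confusion: lsblk and findmnt should show what is actually mounted where, for example:)
Code:
# block device / LV tree with sizes and mount points
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT

# which filesystem a given path really lives on
findmnt --target /var/lib/vz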
 
Can you include the contents of /etc/pve/storage.cfg, plus the output of mount, df -h, vgs, and fdisk -l, please? Something looks suspicious about your LVM configuration.
 
Thanks for looking at this.
Part 1 is below; Part 2 is the fdisk -l output. It's posted, but appears to require moderator approval, presumably due to either its length or the repeated posting.

Code:
root@proxmox1:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

lvmthin: vmstore1
        thinpool vmstore
        vgname vmdata
        content images,rootdir

nfs: vmbackup
        export /mnt/Delta
        path /mnt/pve/vmbackup
        server 10.0.30.10
        content rootdir,iso,vztmpl,backup,images
        maxfiles 5
        options vers=3

lvm: vmstore2
        vgname vmstore2
        content images,rootdir
        nodes proxmox1
        shared 0



Code:
root@proxmox1:~# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=12306700k,nr_inodes=3076675,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=2466528k,mode=755)
/dev/mapper/pve-root on / type ext4 (rw,relatime,errors=remount-ro)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=42,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=21663)
mqueue on /dev/mqueue type mqueue (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
configfs on /sys/kernel/config type configfs (rw,relatime)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
10.0.30.10:/mnt/Delta on /mnt/pve/vmbackup type nfs (rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.0.30.10,mountvers=3,mountport=756,mountproto=udp,local_lock=none,addr=10.0.30.10)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=2466524k,mode=700)
tracefs on /sys/kernel/debug/tracing type tracefs (rw,relatime)



Code:
root@proxmox1:~#  df -h
Filesystem             Size  Used Avail Use% Mounted on
udev                    12G     0   12G   0% /dev
tmpfs                  2.4G  258M  2.2G  11% /run
/dev/mapper/pve-root    17G  4.6G   12G  29% /
tmpfs                   12G   43M   12G   1% /dev/shm
tmpfs                  5.0M     0  5.0M   0% /run/lock
tmpfs                   12G     0   12G   0% /sys/fs/cgroup
/dev/fuse               30M   24K   30M   1% /etc/pve
10.0.30.10:/mnt/Delta  2.7T  612G  2.1T  23% /mnt/pve/vmbackup
tmpfs                  2.4G     0  2.4G   0% /run/user/0



Code:
root@proxmox1:~# vgs
  VG       #PV #LV #SN Attr   VSize    VFree  
  pve        1   4   0 wz--n-  <68.08g    8.50g
  vmdata     1   2   0 wz--n- <136.70g  504.00m
  vmstore2   1  21   0 wz--n- <465.73g <151.73g
 
And part 2. Thanks for taking the time to look at this.




Code:
root@proxmox1:~# fdisk -l > /tmp/text.txt
GPT PMBR size mismatch (106954751 != 109051903) will be corrected by write.
The backup GPT table is not on the end of the device. This problem will be corrected by write.

Disk /dev/loop0: 40 GiB, 42949672960 bytes, 83886080 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop1: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes




Disk /dev/sda: 68.3 GiB, 73372631040 bytes, 143305920 sectors
Disk model: LOGICAL VOLUME  
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: C057E920-EABC-4EF3-AC4A-336FF258B2CC

Device      Start       End   Sectors  Size Type
/dev/sda1    2048      4095      2048    1M BIOS boot
/dev/sda2    4096    528383    524288  256M EFI System
/dev/sda3  528384 143305886 142777503 68.1G Linux LVM


Disk /dev/sdb: 136.7 GiB, 146778685440 bytes, 286677120 sectors
Disk model: LOGICAL VOLUME  
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 32285069-C228-4C1C-849D-00A6421C1CC9

Device     Start       End   Sectors   Size Type
/dev/sdb1   2048 286677086 286675039 136.7G Linux filesystem


Disk /dev/sdc: 465.7 GiB, 500074307584 bytes, 976707632 sectors
Disk model: LOGICAL VOLUME  
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vmstore2-vm--108--disk--0: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vmstore2-vm--112--disk--0: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vmstore2-vm--123--disk--0: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vmstore2-vm--102--disk--0: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vmstore2-vm--118--disk--0: 16 GiB, 17179869184 bytes, 33554432 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vmstore2-vm--101--disk--0: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x90909090

Device                                      Boot Start      End  Sectors Size Id Type
/dev/mapper/vmstore2-vm--101--disk--0-part1 *       64 10485758 10485695   5G a5 FreeBSD


Disk /dev/mapper/vmstore2-vm--111--disk--0: 4 GiB, 4294967296 bytes, 8388608 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vmstore2-vm--120--disk--0: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vmstore2-vm--124--disk--0: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vmstore2-vm--122--disk--1: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vmstore2-vm--200--disk--0: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vmstore2-vm--125--disk--0: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vmdata-lvol0: 72 MiB, 75497472 bytes, 147456 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/pve-swap: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/pve-root: 17 GiB, 18253611008 bytes, 35651584 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/pve-vz: 136 GiB, 146028888064 bytes, 285212672 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/vmstore2-vm--117--disk--0: 62 GiB, 66571993088 bytes, 130023424 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vmstore2-vm--103--disk--0: 52 GiB, 55834574848 bytes, 109051904 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 1712DF30-2769-47B8-985C-BFCDF97B2793

Device                                      Start       End   Sectors Size Type
/dev/mapper/vmstore2-vm--103--disk--0-part1  2048      4095      2048   1M BIOS boot
/dev/mapper/vmstore2-vm--103--disk--0-part2  4096 106954718 106950623  51G Linux filesystem


Disk /dev/mapper/vmstore2-vm--109--disk--0: 16 GiB, 17179869184 bytes, 33554432 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vmstore2-vm--104--disk--0: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vmstore2-vm--105--disk--0: 4 GiB, 4294967296 bytes, 8388608 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vmstore2-vm--121--disk--0: 40 GiB, 42949672960 bytes, 83886080 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x5a12f140

Device                                      Boot   Start      End  Sectors  Size Id Type
/dev/mapper/vmstore2-vm--121--disk--0-part1 *       2048  1187839  1185792  579M  7 HPFS/NTFS/exFAT
/dev/mapper/vmstore2-vm--121--disk--0-part2      1187840 83881983 82694144 39.4G  7 HPFS/NTFS/exFAT


Disk /dev/mapper/vmstore2-vm--107--disk--0: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vmstore2-vm--110--disk--0: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vmstore2-vm--119--disk--0: 7 GiB, 7516192768 bytes, 14680064 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
 
After looking over your outputs carefully, pve/data is unrelated to your problem.

1. /var/lib/vz is a local directory residing on the LV /dev/mapper/pve-root, of which only 12G of disk space is available. I suspect vzdump might have cleaned up after itself when it ran out of space. This largely depends on how large your web container is.

2. You seem to have set aside an NFS storage with the backup flag; perhaps the web container's backup configuration is misconfigured?

3. You appear to have a rogue, over-provisioned 136G logical volume 'vz' that resides on your thin pool pve/data, which is merely 32GB. LV 'vz' is using 23.95 percent of its 136G, which is roughly 32GB, so that thin pool is full.

4. Your lvs output is incomplete; you have more logical volumes than what's shown for VG pve. (I actually see the root LV now, never mind.)
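
To put a number on point 3 (the percentage comes straight from the lvs output you posted), a rough check:
Code:
# 23.95% of the 136 GiB virtual size of LV 'vz' is about the whole thin pool
echo "136 * 0.2395" | bc    # ~32.57, i.e. essentially all of the 32.58 GiB pool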
 
After looking over your outputs carefully, pve/data is unrelated to your problem.

1. /var/lib/vz is a local directory residing on the LV /dev/mapper/pve-root, of which only 12G of disk space is available. I suspect vzdump might have cleaned up after itself when it ran out of space. This largely depends on how large your web container is.
This makes sense. Can I ask how you determined /var/lib/vz is on LV /dev/mapper/pve-root? Is there something I've posted that shows that correlation, or is this just the standard proxmox layout?

2. You seem to have set aside an NFS storage with the backup flag; perhaps the web container's backup configuration is misconfigured?
The NFS is indeed the destination for all backups. The backup is configured at the datacentre level as 'All -> vmbackup -> 6:30 daily'. Looking at the vmbackup NFS, I can see the 5 previous backups it took over the last week. The backups it takes have always been larger (40+ gig) than /var/lib/vz, so I'm not sure why that's suddenly keeled over; I will do some digging.

3. You appear to have a rogue, over-provisioned 136G logical volume 'vz' that resides on your thin pool pve/data, which is merely 32GB. LV 'vz' is using 23.95 percent of its 136G, which is roughly 32GB, so that thin pool is full.
My understanding of the Linux VG/LV layout isn't stellar, which is probably how I got myself into this situation in the first place. Trying to track down what this vz LV is doing has led me to the conclusion that it's probably not actually in use anywhere.

My chain of logic is as follows: "The LV path is /dev/pve/vz, which according to fstab should be mounted at /var/lib/vz, but /var/lib/vz is in fact NOT mounted on /dev/pve/vz; thus pve/vz is not in use." The commands that got me there are as follows:

Code:
root@proxmox1:/var/lib/vz# lvdisplay pve/vz
  --- Logical volume ---
  LV Path                /dev/pve/vz
  LV Name                vz
  VG Name                pve
  LV UUID                VK6033-KLqK-BRmS-TIJa-fRgA-ldt4-SEzy8i
  LV Write Access        read/write
  LV Creation host, time proxmox1, 2018-09-25 00:08:14 +0100
  LV Pool name           data
  LV Status              available
  # open                 0
  LV Size                136.00 GiB
  Mapped size            23.95%
  Current LE             34816
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:33

root@proxmox1:/# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
/dev/pve/vz /var/lib/vz ext4 defaults 0 2

root@proxmox1:/var/lib/vz# findmnt -n -o SOURCE --target /var/lib/vz
/dev/mapper/pve-root

This then raised the question: am I safe to lvremove pve/vz? Is there any other way of validating whether it is in use or actually contains anything? I'm not currently missing any data I'm expecting to see, so if I can confirm it's not actively mounted anywhere then I'm happy to delete it, but I don't want to do so and then find out there's some obscure Proxmox requirement for it that I wasn't aware of. Any suggestions on how to proceed on this one would be very much appreciated.

4. Your lvs output is incomplete; you have more logical volumes than what's shown for VG pve. (I actually see the root LV now, never mind.)
Indeed, I only included what related to (what I thought was) the original problem. Full output:
Code:
 LV            VG       Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve      twi-aotzD- <32.58g             100.00 3.14                            
  root          pve      -wi-ao----  17.00g                                                    
  swap          pve      -wi-ao----   8.00g                                                    
  vz            pve      Vwi-a-tz-- 136.00g data        23.95                                  
  lvol0         vmdata   -wi-a-----  72.00m                                                    
  vmstore       vmdata   twi-aotz-- 136.00g             0.00   10.44                           
  vm-101-disk-0 vmstore2 -wi-ao----   5.00g                                                    
  vm-102-disk-0 vmstore2 -wi-ao----  20.00g                                                    
  vm-103-disk-0 vmstore2 -wi-ao----  52.00g                                                    
  vm-104-disk-0 vmstore2 -wi-ao----   8.00g                                                    
  vm-105-disk-0 vmstore2 -wi-ao----   4.00g                                                    
  vm-107-disk-0 vmstore2 -wi-ao----   8.00g                                                    
  vm-108-disk-0 vmstore2 -wi-a-----   8.00g                                                    
  vm-109-disk-0 vmstore2 -wi-ao----  16.00g                                                    
  vm-110-disk-0 vmstore2 -wi-ao----   8.00g                                                    
  vm-111-disk-0 vmstore2 -wi-ao----   4.00g                                                    
  vm-112-disk-0 vmstore2 -wi-a-----   8.00g                                                    
  vm-117-disk-0 vmstore2 -wi-ao----  62.00g                                                    
  vm-118-disk-0 vmstore2 -wi-ao----  16.00g                                                    
  vm-119-disk-0 vmstore2 -wi-ao----   7.00g                                                    
  vm-120-disk-0 vmstore2 -wi-ao----   8.00g                                                    
  vm-121-disk-0 vmstore2 -wi-ao----  40.00g                                                    
  vm-122-disk-1 vmstore2 -wi-a-----   8.00g                                                    
  vm-123-disk-0 vmstore2 -wi-ao----   8.00g                                                    
  vm-124-disk-0 vmstore2 -wi-ao----   8.00g                                                    
  vm-125-disk-0 vmstore2 -wi-a-----   8.00g                                                    
  vm-200-disk-0 vmstore2 -wi-a-----   8.00g
 
This makes sense. Can I ask how you determined /var/lib/vz is on LV /dev/mapper/pve-root? Is there something I've posted that shows that correlation, or is this just the standard proxmox layout?

By looking at /etc/pve/storage.cfg and the mount output. /var/lib/vz doesn't appear to be mounted separately, so it falls back to the root filesystem /, which is mounted from /dev/mapper/pve-root.
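
If you want to double-check that, df resolves a path to the filesystem it actually lives on, so with nothing mounted at /var/lib/vz it should report pve-root:
Code:
# reports /dev/mapper/pve-root while /var/lib/vz has no mount of its own
df -h /var/lib/vz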

The NFS is indeed the destination for all backups. The backup is configured at the datacentre level as 'All -> vmbackup -> 6:30 daily'. Looking at the vmbackup NFS, I can see the 5 previous backups it took over the last week. The backups it takes have always been larger (40+ gig) than /var/lib/vz, so I'm not sure why that's suddenly keeled over; I will do some digging.

Technically, it appears to have been using /var/lib/vz as a temporary location before archiving.
My understanding of the Linux VG/LV layout isn't stellar, which is probably how I got myself into this situation in the first place. Trying to track down what this vz LV is doing has led me to the conclusion that it's probably not actually in use anywhere.

You learn as you go.

My chain of logic is as follows: "The LV path is /dev/pve/vz, which according to fstab should be mounted at /var/lib/vz, but /var/lib/vz is in fact NOT mounted on /dev/pve/vz; thus pve/vz is not in use." The commands that got me there are as follows:

Correct, according to your output.

This then raised the question: am I safe to lvremove pve/vz? Is there any other way of validating whether it is in use or actually contains anything? I'm not currently missing any data I'm expecting to see, so if I can confirm it's not actively mounted anywhere then I'm happy to delete it, but I don't want to do so and then find out there's some obscure Proxmox requirement for it that I wasn't aware of. Any suggestions on how to proceed on this one would be very much appreciated.

I suspect that at some point in the past /dev/pve/vz was mounted and in use, and it somehow got unmounted, which explains why it has data in there. You can continue to use /dev/pve/vz, since it gives you more space for template/ISO storage than you have on your root filesystem. However, I would resize LV vz to under the 32GB size of your thin pool, or stay on top of how much space you are using in /var/lib/vz and make sure to keep it under 32GB, or you risk data loss. Failing that, temporarily mount it somewhere, e.g. /mnt, and inspect the data in it to determine whether there's anything to salvage or space to free up. That also explains why the backup suddenly keeled over: again, the LV somehow got unmounted, and it has/had more free space than the root filesystem. You can also remove the backup flag from the 'local' storage (/var/lib/vz) in /etc/pve/storage.cfg; that should hopefully stop vzdump from attempting to back up to /var/lib/vz.
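
A rough sketch of that inspect step (mounted read-only; the /mnt/vz-old mountpoint name is just an example):
Code:
# mount the old vz LV read-only somewhere temporary and look at what's inside
mkdir -p /mnt/vz-old
mount -o ro /dev/pve/vz /mnt/vz-old
du -sh /mnt/vz-old/*    # see what is actually occupying the ~32 GiB
umount /mnt/vz-old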
 
@"random googler with the same problem" - There is a vzdump configuration file: /etc/vzdump.conf. The tmpdir option sets the directory for backups, in my case it was set to /var/lib/vz/tmp which, as lhorace observed, is no longer mounted where it was, causing the failed backup. I recall having seen this that the reason for setting it is that backing up to an NFS without a local tmpdir gives warnings, so just something to be aware of

@lhorace - I've removed the vz LV with no ill effects (so far :) ). I'd like to thank you for taking the time not only to help me solve the problem, but also to help me actually understand it.
 
