Let me start by saying I realise this is probably a dead simple issue for someone better versed in Linux storage than me, so I'm hoping someone can point out which piece of the knowledge puzzle I'm missing.
Code:
# pveversion --verbose
proxmox-ve: 6.0-2 (running kernel: 5.0.18-1-pve)
pve-manager: 6.0-5 (running version: 6.0-5/f8a710d7)
pve-kernel-5.0: 6.0-6
pve-kernel-helper: 6.0-6
pve-kernel-4.15: 5.2-7
pve-kernel-5.0.18-1-pve: 5.0.18-1
pve-kernel-5.0.15-1-pve: 5.0.15-1
pve-kernel-4.15.18-4-pve: 4.15.18-23
pve-kernel-4.15.17-1-pve: 4.15.17-9
ceph-fuse: 12.2.11+dfsg1-2.1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.10-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-3
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-6
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-61
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-5
pve-cluster: 6.0-4
pve-container: 3.0-5
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-6
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-qemu-kvm: 4.0.0-3
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-6
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve1
This morning an automated backup failed due to lack of disk space:
Code:
117: 2019-11-12 07:41:35 INFO: CT Name: web
117: 2019-11-12 07:41:35 INFO: starting first sync /proc/9004/root// to /var/lib/vz/tmp/vzdumptmp4735
117: 2019-11-12 07:50:33 INFO: rsync: write failed on "/var/lib/vz/tmp/vzdumptmp4735/mnt/webbackup.orig/19/08/16/cream.******.org.uk.tar.gz": No space left on device (28)
117: 2019-11-12 07:50:33 INFO: rsync error: error in file IO (code 11) at receiver.c(374) [receiver=3.1.3]
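In case it matters, I assume the simplest way to see how much room the backup actually had is to ask df about that path directly (plain coreutils, nothing Proxmox-specific):
Code:
# how much space is free on the filesystem holding the vzdump temp dir
df -h /var/lib/vz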
Checking the device, I see a pool named local-lvm which shows 32.58 of 32.58 GB used; however, its Content tab shows nothing.
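I assume the CLI equivalent of that empty Content tab is pvesm, so I intend to cross-check with something like this (my guess at the right subcommands, based on the man pages):
Code:
# overall view of the storages Proxmox knows about and how full they are
pvesm status
# list whatever volumes Proxmox thinks are on the local-lvm storage
pvesm list local-lvm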
The node -> Disks -> LVM-thin page shows a pool named 'data' which is also at 32.58 of 32.58 GB, so I ASSUME this is the same thing.
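To sanity-check that assumption, my understanding is that the mapping between the storage name and the actual VG/thin pool lives in /etc/pve/storage.cfg, so the lvmthin entry there should say which pool 'local-lvm' points at (correct me if that's not the right file to be reading):
Code:
# on a default install the lvmthin entry should reference vgname pve / thinpool data
cat /etc/pve/storage.cfg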
So the problem I appear to be trying to solve is: what is local-lvm/data, and what is on it? Googling points me at lvs:
Code:
root@proxmox1:/var/lib/vz# lvs
  LV   VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data pve twi-aotzD- <32.58g             100.00 3.14
  root pve -wi-ao----  17.00g
  swap pve -wi-ao----   8.00g
  vz   pve Vwi-a-tz-- 136.00g data         23.95
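In case it helps narrow things down, I believe lvs can also be told to list only the volumes that live inside that pool; I haven't used --select much, so treat this as a best guess at the syntax:
Code:
# show only the logical volumes whose backing thin pool is 'data'
lvs -o lv_name,lv_size,data_percent,pool_lv --select 'pool_lv=data' pve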
OK, I suspect pve/data is the LV I'm looking for. The wiki tells me: "Starting from version 4.2, the logical volume “data” is a LVM-thin pool, used to store block based guest images, and /var/lib/vz is simply a directory on the root file system." /var/lib/vz tracks with the location of the failed file write in the backup log.
However, that folder does not appear to contain anywhere near 32 GB of data:
Code:
root@proxmox1:/var/lib/vz# du -h
4.0K    ./images
4.0K    ./template/iso
411M    ./template/cache
4.0K    ./template/qemu
411M    ./template
4.0K    ./tmp
4.0K    ./dump
411M    .
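Given that du only finds ~411M there, the bit I'm unsure about is whether /var/lib/vz really is just a directory on the root filesystem (as the wiki says) or whether that 136G 'vz' thin volume from the lvs output is mounted on top of it. I assume the mount table settles that one way or the other:
Code:
# which device / logical volume actually backs /var/lib/vz
findmnt --target /var/lib/vz
# cross-check against the block device tree
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT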
I have checked all containers, and none are configured to use local-lvm.
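For reference, this is roughly how I checked, on the assumption that the container and VM configs all live under /etc/pve:
Code:
# any guest config referencing the local-lvm storage should show up here
grep -r 'local-lvm' /etc/pve/lxc/ /etc/pve/qemu-server/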
So, in summary: I have an lvmthin storage 'local-lvm' pointing at a thin pool called 'data', which is mounted at '/var/lib/vz' (which is only using 1.5-ish GB), while 'data' apparently has 32-ish GB used. Clearly there is something about this that I'm missing or don't understand, but I seem to be in an "I don't know what it is I don't know" type of situation.
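One last observation, in case my arithmetic means anything: 23.95% of the 136G 'vz' volume comes out at almost exactly the 32.58G that 'data' reports as used, so I suspect the pool is entirely consumed by blocks allocated to 'vz'. If so, I wonder whether those are blocks left behind by files that were deleted without ever being discarded, and whether something like fstrim on that mount would hand them back to the pool, but I'd like to understand what's actually going on before I start running commands.
Code:
# back-of-the-envelope check of the lvs figures above
echo "136 * 0.2395" | bc    # ~32.57, essentially the pool's 32.58g 'used' figure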