[SOLVED] vzdump removing too many backups (prune bug)

Klaus Schmidt

New Member
Apr 4, 2021
Hello,

A few days ago I discovered that most of my backups on some Proxmox hosts are gone.
It seems to be a bug on some of my Proxmox hosts, but not all of them: when one specific VMID gets backed up, vzdump removes backups belonging to many other guests as well.

Here is the backup log of one host where ID 161 is bugged:
Code:
vzdump --exclude 12350 --storage backupstore --compress zstd --mailto <removed> --all 1 --quiet 1 --mailnotification always --mode snapshot

...

160: 2021-04-02 11:21:40 INFO: Starting Backup of VM 160 (lxc)
160: 2021-04-02 11:21:40 INFO: status = running
160: 2021-04-02 11:21:40 INFO: CT Name: <removed>
160: 2021-04-02 11:21:40 INFO: including mount point rootfs ('/') in backup
160: 2021-04-02 11:21:40 INFO: mode failure - some volumes do not support snapshots
160: 2021-04-02 11:21:40 INFO: trying 'suspend' mode instead
160: 2021-04-02 11:21:40 INFO: backup mode: suspend
160: 2021-04-02 11:21:40 INFO: ionice priority: 7
160: 2021-04-02 11:21:40 INFO: CT Name: <removed>
160: 2021-04-02 11:21:40 INFO: including mount point rootfs ('/') in backup
160: 2021-04-02 11:21:40 INFO: starting first sync /proc/30876/root/ to /var/lib/vz/tmp_backup/vzdumptmp27722_160/
160: 2021-04-02 11:21:45 INFO: first sync finished - transferred 1.08G bytes in 5s
160: 2021-04-02 11:21:45 INFO: suspending guest
160: 2021-04-02 11:21:45 INFO: starting final sync /proc/30876/root/ to /var/lib/vz/tmp_backup/vzdumptmp27722_160/
160: 2021-04-02 11:21:46 INFO: final sync finished - transferred 0 bytes in 1s
160: 2021-04-02 11:21:46 INFO: resuming guest
160: 2021-04-02 11:21:46 INFO: guest is online again after 1 seconds
160: 2021-04-02 11:21:46 INFO: creating vzdump archive '/mnt/pve/backupstore/dump/vzdump-lxc-160-2021_04_02-11_21_40.tar.zst'
160: 2021-04-02 11:21:50 INFO: Total bytes written: 1123921920 (1.1GiB, 225MiB/s)
160: 2021-04-02 11:21:55 INFO: archive file size: 354MB
160: 2021-04-02 11:21:55 INFO: removing backup 'backupstore:backup/vzdump-lxc-160-2021_03_28-11_21_17.tar.zst'
160: 2021-04-02 11:21:55 INFO: Finished Backup of VM 160 (00:00:15)

161: 2021-04-02 11:21:55 INFO: Starting Backup of VM 161 (lxc)
161: 2021-04-02 11:21:55 INFO: status = running
161: 2021-04-02 11:21:55 INFO: CT Name: <removed>
161: 2021-04-02 11:21:55 INFO: including mount point rootfs ('/') in backup
161: 2021-04-02 11:21:55 INFO: mode failure - some volumes do not support snapshots
161: 2021-04-02 11:21:55 INFO: trying 'suspend' mode instead
161: 2021-04-02 11:21:55 INFO: backup mode: suspend
161: 2021-04-02 11:21:55 INFO: ionice priority: 7
161: 2021-04-02 11:21:55 INFO: CT Name: <removed>
161: 2021-04-02 11:21:55 INFO: including mount point rootfs ('/') in backup
161: 2021-04-02 11:21:55 INFO: starting first sync /proc/13231/root/ to /var/lib/vz/tmp_backup/vzdumptmp27722_161/
161: 2021-04-02 11:22:04 INFO: first sync finished - transferred 2.03G bytes in 9s
161: 2021-04-02 11:22:04 INFO: suspending guest
161: 2021-04-02 11:22:04 INFO: starting final sync /proc/13231/root/ to /var/lib/vz/tmp_backup/vzdumptmp27722_161/
161: 2021-04-02 11:22:05 INFO: final sync finished - transferred 0 bytes in 1s
161: 2021-04-02 11:22:05 INFO: resuming guest
161: 2021-04-02 11:22:05 INFO: guest is online again after 1 seconds
161: 2021-04-02 11:22:05 INFO: creating vzdump archive '/mnt/pve/backupstore/dump/vzdump-lxc-161-2021_04_02-11_21_55.tar.zst'
161: 2021-04-02 11:22:13 INFO: Total bytes written: 1798000640 (1.7GiB, 220MiB/s)
161: 2021-04-02 11:22:16 INFO: archive file size: 497MB
161: 2021-04-02 11:22:16 INFO: removing backup 'backupstore:backup/vzdump-lxc-100-2021_03_29-11_00_02.tar.zst'
161: 2021-04-02 11:22:16 INFO: removing backup 'backupstore:backup/vzdump-lxc-100-2021_03_30-11_00_01.tar.zst'
161: 2021-04-02 11:22:17 INFO: removing backup 'backupstore:backup/vzdump-lxc-100-2021_03_31-11_00_02.tar.zst'
161: 2021-04-02 11:22:17 INFO: removing backup 'backupstore:backup/vzdump-lxc-100-2021_04_01-11_00_01.tar.zst'
161: 2021-04-02 11:22:17 INFO: removing backup 'backupstore:backup/vzdump-lxc-100-2021_04_02-11_00_02.tar.zst'
161: 2021-04-02 11:22:17 INFO: removing backup 'backupstore:backup/vzdump-lxc-101-2021_03_29-11_00_22.tar.zst'
161: 2021-04-02 11:22:17 INFO: removing backup 'backupstore:backup/vzdump-lxc-101-2021_03_30-11_00_21.tar.zst'
161: 2021-04-02 11:22:17 INFO: removing backup 'backupstore:backup/vzdump-lxc-101-2021_03_31-11_00_22.tar.zst'
161: 2021-04-02 11:22:17 INFO: removing backup 'backupstore:backup/vzdump-lxc-101-2021_04_01-11_00_19.tar.zst'
161: 2021-04-02 11:22:17 INFO: removing backup 'backupstore:backup/vzdump-lxc-101-2021_04_02-11_00_18.tar.zst'
161: 2021-04-02 11:22:52 INFO: removing backup 'backupstore:backup/vzdump-lxc-158-2021_03_29-11_21_09.tar.zst'
161: 2021-04-02 11:22:52 INFO: removing backup 'backupstore:backup/vzdump-lxc-158-2021_03_30-11_22_36.tar.zst'
161: 2021-04-02 11:22:52 INFO: removing backup 'backupstore:backup/vzdump-lxc-158-2021_03_31-11_21_21.tar.zst'
161: 2021-04-02 11:22:53 INFO: removing backup 'backupstore:backup/vzdump-lxc-158-2021_04_01-11_21_32.tar.zst'
161: 2021-04-02 11:22:53 INFO: removing backup 'backupstore:backup/vzdump-lxc-159-2021_03_29-11_21_27.tar.zst'
161: 2021-04-02 11:22:53 INFO: removing backup 'backupstore:backup/vzdump-lxc-159-2021_03_30-11_22_54.tar.zst'
161: 2021-04-02 11:22:53 INFO: removing backup 'backupstore:backup/vzdump-lxc-159-2021_03_31-11_21_39.tar.zst'
161: 2021-04-02 11:22:53 INFO: removing backup 'backupstore:backup/vzdump-lxc-159-2021_04_01-11_21_51.tar.zst'
161: 2021-04-02 11:22:53 INFO: removing backup 'backupstore:backup/vzdump-lxc-160-2021_03_29-11_21_41.tar.zst'
161: 2021-04-02 11:22:53 INFO: removing backup 'backupstore:backup/vzdump-lxc-160-2021_03_30-11_23_09.tar.zst'
161: 2021-04-02 11:22:53 INFO: removing backup 'backupstore:backup/vzdump-lxc-160-2021_03_31-11_21_55.tar.zst'
161: 2021-04-02 11:22:54 INFO: removing backup 'backupstore:backup/vzdump-lxc-160-2021_04_01-11_22_09.tar.zst'
161: 2021-04-02 11:22:54 INFO: removing backup 'backupstore:backup/vzdump-lxc-161-2021_03_28-11_21_27.tar.zst'
161: 2021-04-02 11:22:54 INFO: removing backup 'backupstore:backup/vzdump-lxc-161-2021_03_29-11_21_52.tar.zst'
161: 2021-04-02 11:22:54 INFO: removing backup 'backupstore:backup/vzdump-lxc-161-2021_03_30-11_23_20.tar.zst'
161: 2021-04-02 11:22:54 INFO: removing backup 'backupstore:backup/vzdump-lxc-161-2021_03_31-11_22_07.tar.zst'
161: 2021-04-02 11:22:54 INFO: removing backup 'backupstore:backup/vzdump-lxc-161-2021_04_01-11_22_21.tar.zst'
161: 2021-04-02 11:22:54 INFO: removing backup 'backupstore:backup/vzdump-lxc-162-2021_03_28-11_22_00.tar.zst'
161: 2021-04-02 11:22:54 INFO: removing backup 'backupstore:backup/vzdump-lxc-162-2021_03_29-11_22_14.tar.zst'
161: 2021-04-02 11:22:54 INFO: removing backup 'backupstore:backup/vzdump-lxc-162-2021_03_30-11_23_42.tar.zst'
161: 2021-04-02 11:22:55 INFO: removing backup 'backupstore:backup/vzdump-lxc-162-2021_03_31-11_22_28.tar.zst'
161: 2021-04-02 11:22:55 INFO: removing backup 'backupstore:backup/vzdump-lxc-162-2021_04_01-11_22_42.tar.zst'
161: 2021-04-02 11:22:55 INFO: removing backup 'backupstore:backup/vzdump-lxc-163-2021_03_29-11_22_32.tar.zst'
161: 2021-04-02 11:22:55 INFO: removing backup 'backupstore:backup/vzdump-lxc-163-2021_03_30-11_24_01.tar.zst'
161: 2021-04-02 11:22:55 INFO: removing backup 'backupstore:backup/vzdump-lxc-163-2021_03_31-11_22_47.tar.zst'
161: 2021-04-02 11:22:55 INFO: removing backup 'backupstore:backup/vzdump-lxc-163-2021_04_01-11_23_01.tar.zst'
161: 2021-04-02 11:22:55 INFO: removing backup 'backupstore:backup/vzdump-lxc-164-2021_03_28-11_22_18.tar.zst'
161: 2021-04-02 11:22:55 INFO: removing backup 'backupstore:backup/vzdump-lxc-164-2021_03_29-11_22_45.tar.zst'
161: 2021-04-02 11:22:56 INFO: removing backup 'backupstore:backup/vzdump-lxc-164-2021_03_30-11_24_13.tar.zst'
161: 2021-04-02 11:22:56 INFO: removing backup 'backupstore:backup/vzdump-lxc-164-2021_03_31-11_23_03.tar.zst'
161: 2021-04-02 11:22:56 INFO: removing backup 'backupstore:backup/vzdump-lxc-164-2021_04_01-11_23_15.tar.zst'
161: 2021-04-02 11:22:56 INFO: removing backup 'backupstore:backup/vzdump-lxc-165-2021_03_28-11_22_35.tar.zst'
161: 2021-04-02 11:22:56 INFO: removing backup 'backupstore:backup/vzdump-lxc-165-2021_03_29-11_23_02.tar.zst'
161: 2021-04-02 11:22:56 INFO: removing backup 'backupstore:backup/vzdump-lxc-165-2021_03_30-11_24_31.tar.zst'
161: 2021-04-02 11:22:56 INFO: removing backup 'backupstore:backup/vzdump-lxc-165-2021_03_31-11_23_20.tar.zst'
161: 2021-04-02 11:22:56 INFO: removing backup 'backupstore:backup/vzdump-lxc-165-2021_04_01-11_23_33.tar.zst'
161: 2021-04-02 11:22:57 INFO: removing backup 'backupstore:backup/vzdump-lxc-166-2021_03_28-11_22_48.tar.zst'
161: 2021-04-02 11:22:57 INFO: removing backup 'backupstore:backup/vzdump-lxc-166-2021_03_29-11_23_15.tar.zst'
161: 2021-04-02 11:22:57 INFO: removing backup 'backupstore:backup/vzdump-lxc-166-2021_03_30-11_24_44.tar.zst'
161: 2021-04-02 11:22:57 INFO: removing backup 'backupstore:backup/vzdump-lxc-166-2021_03_31-11_23_35.tar.zst'
161: 2021-04-02 11:22:57 INFO: removing backup 'backupstore:backup/vzdump-lxc-166-2021_04_01-11_23_46.tar.zst'
161: 2021-04-02 11:22:57 INFO: removing backup 'backupstore:backup/vzdump-lxc-167-2021_03_28-11_23_03.tar.zst'
161: 2021-04-02 11:22:57 INFO: removing backup 'backupstore:backup/vzdump-lxc-167-2021_03_29-11_23_26.tar.zst'
161: 2021-04-02 11:22:57 INFO: removing backup 'backupstore:backup/vzdump-lxc-167-2021_03_30-11_24_56.tar.zst'
161: 2021-04-02 11:22:57 INFO: removing backup 'backupstore:backup/vzdump-lxc-167-2021_03_31-11_23_47.tar.zst'
161: 2021-04-02 11:22:57 INFO: removing backup 'backupstore:backup/vzdump-lxc-167-2021_04_01-11_23_59.tar.zst'
161: 2021-04-02 11:23:20 INFO: Finished Backup of VM 161 (00:01:25)

162: 2021-04-02 11:23:20 INFO: Starting Backup of VM 162 (lxc)
162: 2021-04-02 11:23:20 INFO: status = running
162: 2021-04-02 11:23:20 INFO: CT Name: <removed>
162: 2021-04-02 11:23:20 INFO: including mount point rootfs ('/') in backup
162: 2021-04-02 11:23:20 INFO: mode failure - some volumes do not support snapshots
162: 2021-04-02 11:23:20 INFO: trying 'suspend' mode instead
162: 2021-04-02 11:23:20 INFO: backup mode: suspend
162: 2021-04-02 11:23:20 INFO: ionice priority: 7
162: 2021-04-02 11:23:20 INFO: CT Name: <removed>
162: 2021-04-02 11:23:20 INFO: including mount point rootfs ('/') in backup
162: 2021-04-02 11:23:20 INFO: starting first sync /proc/32058/root/ to /var/lib/vz/tmp_backup/vzdumptmp27722_162/
162: 2021-04-02 11:23:27 INFO: first sync finished - transferred 1.67G bytes in 7s
162: 2021-04-02 11:23:27 INFO: suspending guest
162: 2021-04-02 11:23:27 INFO: starting final sync /proc/32058/root/ to /var/lib/vz/tmp_backup/vzdumptmp27722_162/
162: 2021-04-02 11:23:27 INFO: final sync finished - transferred 67.57M bytes in 0s
162: 2021-04-02 11:23:27 INFO: resuming guest
162: 2021-04-02 11:23:27 INFO: guest is online again after <1 seconds
162: 2021-04-02 11:23:27 INFO: creating vzdump archive '/mnt/pve/backupstore/dump/vzdump-lxc-162-2021_04_02-11_23_20.tar.zst'
162: 2021-04-02 11:23:34 INFO: Total bytes written: 1594777600 (1.5GiB, 229MiB/s)
162: 2021-04-02 11:23:37 INFO: archive file size: 338MB
162: 2021-04-02 11:23:37 INFO: Finished Backup of VM 162 (00:00:17)

Storage configuration:
Code:
nfs: backupstore
    export /export/ftpbackup/backupstore
    path /mnt/pve/backupstore
    server ftpback-...net
    content backup
    prune-backups keep-last=5
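
For reference, keep-last pruning is supposed to be applied per guest: archives are grouped by VMID and only the newest 5 of each guest are kept. The following is a minimal sketch of that expected behavior, not the actual prune code; the filenames are illustrative ones taken from the log above.

```shell
# Sketch of keep-last=5 semantics: group archives by VMID (3rd '-'-separated
# field of the filename), then keep only the newest 5 per guest.
backups='vzdump-lxc-160-2021_03_28-11_21_17.tar.zst
vzdump-lxc-160-2021_03_29-11_21_41.tar.zst
vzdump-lxc-160-2021_03_30-11_23_09.tar.zst
vzdump-lxc-160-2021_03_31-11_21_55.tar.zst
vzdump-lxc-160-2021_04_01-11_22_09.tar.zst
vzdump-lxc-160-2021_04_02-11_21_40.tar.zst
vzdump-lxc-161-2021_04_02-11_21_55.tar.zst'

# Sort newest first (the timestamp is embedded in the name), then mark
# everything beyond the 5th archive of each VMID for removal.
result="$(printf '%s\n' "$backups" | sort -r |
  awk -F- '{ print (++seen[$3] > 5 ? "remove" : "keep"), $0 }')"
echo "$result"
```

This matches the correct behavior visible for CT 160 in the log, where only the single oldest archive beyond the last 5 is removed; the prune step after CT 161's backup, by contrast, removes archives of entirely different VMIDs.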

Using the most recent Proxmox version:
Code:
proxmox-ve: 6.3-1 (running kernel: 5.4.101-1-pve)
pve-manager: 6.3-6 (running version: 6.3-6/2184247e)
pve-kernel-5.4: 6.3-8
pve-kernel-helper: 6.3-8
pve-kernel-5.3: 6.1-6
pve-kernel-5.0: 6.0-11
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.101-1-pve: 5.4.101-1
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.0.21-5-pve: 5.0.21-10
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.0-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.8
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-5
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.12-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-9
pve-cluster: 6.2-1
pve-container: 3.3-4
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-5
pve-xtermjs: 4.7.0-3
pve-zsync: 2.0-4
qemu-server: 6.3-10
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1

The bug is reproducible even when VMID 161 is backed up manually.
Setting the storage to keep all backups works around the problem, but then no backups are ever removed.
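
After such an incident, a quick way to see what survived is to count the remaining archives per VMID in the dump directory. A minimal sketch, where the sample listing stands in for a real `ls /mnt/pve/backupstore/dump`:

```shell
# Count remaining backup archives per VMID. The sample listing below
# stands in for `ls /mnt/pve/backupstore/dump` on a real host.
listing='vzdump-lxc-160-2021_04_01-11_22_09.tar.zst
vzdump-lxc-160-2021_04_02-11_21_40.tar.zst
vzdump-lxc-161-2021_04_02-11_21_55.tar.zst'

# VMID is the 3rd '-'-separated field of the archive name.
counts="$(printf '%s\n' "$listing" |
  awk -F- '{ n[$3]++ } END { for (id in n) print id, n[id] }' | sort)"
echo "$counts"
```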

Can someone please explain to me why it does this?
 

dylanw

Active Member
Staff member
Jul 6, 2020
Do you happen to have any hook-scripts on the system that would be causing weird behavior? (particularly a backup-end hook script?)

Could you post the config file for CT 161 (cat /etc/pve/lxc/161.conf)?
 

Fabian_E

Proxmox Staff Member
Staff member
Aug 1, 2019
Hi,
please check that the installed versions are up to date on all nodes. In addition to what @dylanw requested, could you also share the host-specific vzdump configuration /etc/vzdump.conf, especially for the problematic hosts?

Does it remove all backups, or is the last one kept? Could you share the full CLI output for a manual backup of 161?

The output of the following might also be helpful (this will do dry-runs):
Code:
pvesh get /nodes/<nodename>/storage/backupstore/prunebackups
pvesh get /nodes/<nodename>/storage/backupstore/prunebackups  --vmid 161
pvesh get /nodes/<nodename>/storage/backupstore/prunebackups  --vmid 160
pvesh get /nodes/<nodename>/storage/backupstore/prunebackups  --type lxc
once with <nodename> being the name of a problematic host, and once the name of a normal host.
 

Klaus Schmidt

New Member
Apr 4, 2021
Thank you so much for a reply already.

dylanw said:
Do you happen to have any hook-scripts on the system that would be causing weird behavior? (particularly a backup-end hook script?)

Could you post the config file for CT 161 (cat /etc/pve/lxc/161.conf)?

I don't use any hook-scripts. Here is the vzdump config, it is identical for all hosts:
Code:
# vzdump default settings

tmpdir: /var/lib/vz/tmp_backup/
#dumpdir: DIR
#storage: STORAGE_ID
#mode: snapshot|suspend|stop
#bwlimit: KBPS
#ionice: PRI
#lockwait: MINUTES
#stopwait: MINUTES
#size: MB
#stdexcludes: BOOLEAN
#mailto: ADDRESSLIST
#maxfiles: N
#script: FILENAME
#exclude-path: PATHLIST
#pigz: N

And here is the config file for CT 161:
Code:
arch: amd64
cores: 1
hostname: <removed>
memory: 1024
nameserver: <removed>
net0: name=eth0,bridge=vmbr0,gw=<removed>,gw6=<removed>,hwaddr=<removed>,ip=<removed>,ip6=<removed>,type=veth
ostype: ubuntu
rootfs: local:161/vm-161-disk-0.raw,size=5G
searchdomain: <removed>
swap: 0
unprivileged: 1
lxc.cgroup.pids.max: 500
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file


Fabian_E said:
Hi,
please check that the installed versions are up to date on all nodes. In addition to what @dylanw requested, could you also share the host-specific vzdump configuration /etc/vzdump.conf, especially for the problematic hosts?

Does it remove all backups, or is the last one kept? Could you share the full CLI output for a manual backup of 161?

The output of the following might also be helpful (this will do dry-runs):
Code:
pvesh get /nodes/<nodename>/storage/backupstore/prunebackups
pvesh get /nodes/<nodename>/storage/backupstore/prunebackups  --vmid 161
pvesh get /nodes/<nodename>/storage/backupstore/prunebackups  --vmid 160
pvesh get /nodes/<nodename>/storage/backupstore/prunebackups  --type lxc
once with <nodename> being the name of a problematic host, and once the name of a normal host.

I updated the two new packages on the affected host; here is the pveversion output now:
Code:
proxmox-ve: 6.3-1 (running kernel: 5.4.101-1-pve)
pve-manager: 6.3-6 (running version: 6.3-6/2184247e)
pve-kernel-5.4: 6.3-8
pve-kernel-helper: 6.3-8
pve-kernel-5.3: 6.1-6
pve-kernel-5.0: 6.0-11
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.101-1-pve: 5.4.101-1
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.0.21-5-pve: 5.0.21-10
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.0-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.8
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-5
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.13-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-9
pve-cluster: 6.2-1
pve-container: 3.3-4
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-5
pve-xtermjs: 4.7.0-3
pve-zsync: 2.0-4
qemu-server: 6.3-10
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1

One backup each is kept for the 5 guests backed up before CT 161, so the latest backup of CT 157 is kept; everything older gets deleted every night. I guess this is because I configured keep-last=5 in the storage configuration. Please see the attached files for the output of those commands; I think they show the bug perfectly: the dry-run for VMID 161 wants to delete all backups, but the dry-runs for other IDs do not.

Here is the CLI output of a manual backup of CT 160, which is not affected by the bug:
Code:
vzdump 160 --compress zstd --node <buggedhost> --storage backupstore --mode snapshot --remove 1
INFO: starting new backup job: vzdump 160 --node <buggedhost> --remove 1 --mode snapshot --storage backupstore --compress zstd
INFO: Starting Backup of VM 160 (lxc)
INFO: Backup started at 2021-04-07 16:04:31
INFO: status = running
INFO: CT Name: <removed>
INFO: including mount point rootfs ('/') in backup
INFO: mode failure - some volumes do not support snapshots
INFO: trying 'suspend' mode instead
INFO: backup mode: suspend
INFO: ionice priority: 7
INFO: CT Name: <removed>
INFO: including mount point rootfs ('/') in backup
INFO: starting first sync /proc/30876/root/ to /var/lib/vz/tmp_backup/vzdumptmp24824_160/
INFO: first sync finished - transferred 1.22G bytes in 7s
INFO: suspending guest
INFO: starting final sync /proc/30876/root/ to /var/lib/vz/tmp_backup/vzdumptmp24824_160/
INFO: final sync finished - transferred 153.56M bytes in 1s
INFO: resuming guest
INFO: guest is online again after 1 seconds
INFO: creating vzdump archive '/mnt/pve/backupstore/dump/vzdump-lxc-160-2021_04_07-16_04_31.tar.zst'
INFO: Total bytes written: 1260093440 (1.2GiB, 178MiB/s)
INFO: archive file size: 358MB
INFO: Finished Backup of VM 160 (00:00:19)
INFO: Backup finished at 2021-04-07 16:04:50
INFO: Backup job finished successfully

And here is the same for the affected CT 161:
Code:
vzdump 161 --compress zstd --node <buggedhost> --storage backupstore --mode snapshot --remove 1
INFO: starting new backup job: vzdump 161 --mode snapshot --node <buggedhost> --storage backupstore --remove 1 --compress zstd
INFO: Starting Backup of VM 161 (lxc)
INFO: Backup started at 2021-04-07 16:04:56
INFO: status = running
INFO: CT Name: <removed>
INFO: including mount point rootfs ('/') in backup
INFO: mode failure - some volumes do not support snapshots
INFO: trying 'suspend' mode instead
INFO: backup mode: suspend
INFO: ionice priority: 7
INFO: CT Name: <removed>
INFO: including mount point rootfs ('/') in backup
INFO: starting first sync /proc/7858/root/ to /var/lib/vz/tmp_backup/vzdumptmp15971_161/
INFO: first sync finished - transferred 2.04G bytes in 11s
INFO: suspending guest
INFO: starting final sync /proc/7858/root/ to /var/lib/vz/tmp_backup/vzdumptmp15971_161/
INFO: final sync finished - transferred 0 bytes in 0s
INFO: resuming guest
INFO: guest is online again after <1 seconds
INFO: creating vzdump archive '/mnt/pve/backupstore/dump/vzdump-lxc-161-2021_04_07-16_04_56.tar.zst'
INFO: Total bytes written: 1805178880 (1.7GiB, 182MiB/s)
INFO: archive file size: 497MB
INFO: removing backup 'backupstore:backup/vzdump-lxc-157-2021_04_07-11_23_05.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-158-2021_04_07-11_23_21.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-159-2021_04_07-11_23_49.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-160-2021_04_07-11_24_09.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-161-2021_04_07-11_24_26.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-163-2021_04_07-11_25_02.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-164-2021_04_07-11_25_20.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-165-2021_04_07-11_25_36.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-166-2021_04_07-11_25_55.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-167-2021_04_07-11_26_08.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-168-2021_04_07-11_26_19.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-169-2021_04_07-11_26_30.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-170-2021_04_07-11_28_12.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-171-2021_04_07-11_28_17.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-172-2021_04_07-11_28_32.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-173-2021_04_07-11_28_49.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-174-2021_04_07-11_29_11.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-175-2021_04_07-11_29_26.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-176-2021_04_07-11_29_48.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-177-2021_04_07-11_30_02.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-178-2021_04_07-11_30_19.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-179-2021_04_07-11_30_39.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-180-2021_04_07-11_30_54.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-182-2021_04_07-11_31_19.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-183-2021_04_07-11_31_37.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-184-2021_04_07-11_32_12.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-185-2021_04_07-11_32_36.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-186-2021_04_07-11_32_46.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-187-2021_04_07-11_33_06.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-188-2021_04_07-11_33_16.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-189-2021_04_07-11_33_42.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-190-2021_04_07-11_33_54.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-191-2021_04_07-11_34_13.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-192-2021_04_07-11_34_28.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-193-2021_04_07-11_34_47.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-194-2021_04_07-11_35_06.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-195-2021_04_07-11_35_43.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-196-2021_04_07-11_36_08.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-197-2021_04_07-11_36_22.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-198-2021_04_07-11_36_37.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-199-2021_04_07-11_37_10.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-200-2021_04_07-11_37_26.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-201-2021_04_07-11_37_39.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-202-2021_04_07-11_37_54.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-203-2021_04_07-11_38_08.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-204-2021_04_07-11_38_31.tar.zst'
INFO: Finished Backup of VM 161 (00:00:32)
INFO: Backup finished at 2021-04-07 16:05:28
INFO: Backup job finished successfully

Also, it is not always CT 161 that is affected: on another host, CT 178 causes exactly the same behavior. All hosts are standalone (no cluster), each backing up to its own external backup storage, and while not all hosts are on the most recent version, all are at least 6.3.2. I fear that the normal node I tested will show the exact same behavior once updated. Should we try this?
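
If we do try it, one low-risk way would be to capture the prune dry-run for CT 161 on the normal host before and after the update and compare the two, reusing the dry-run command from above (this is only a sketch; <nodename> as before, and the redirect filenames are arbitrary):

```
pvesh get /nodes/<nodename>/storage/backupstore/prunebackups --vmid 161 > prune-before.txt
# upgrade the packages, then:
pvesh get /nodes/<nodename>/storage/backupstore/prunebackups --vmid 161 > prune-after.txt
diff -u prune-before.txt prune-after.txt
```

Since prunebackups only does a dry-run, this would show whether the update changes the prune selection without deleting anything.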
 

Attachments

  • prunebackups-buggedhost.txt (22.6 KB)
  • prunebackups-buggedhost-160.txt (1.3 KB)
  • prunebackups-buggedhost-161.txt (23 KB)
  • prunebackups-buggedhost-lxc.txt (22.6 KB)
  • prunebackups-normalhost.txt (183 KB)
  • prunebackups-normalhost-160.txt (2.2 KB)
  • prunebackups-normalhost-161.txt (2.2 KB)
  • prunebackups-normalhost-lxc.txt (181.7 KB)

Fabian_E

Proxmox Staff Member
Staff member
Aug 1, 2019
INFO: removing backup 'backupstore:backup/vzdump-lxc-196-2021_04_07-11_36_08.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-197-2021_04_07-11_36_22.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-198-2021_04_07-11_36_37.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-199-2021_04_07-11_37_10.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-200-2021_04_07-11_37_26.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-201-2021_04_07-11_37_39.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-202-2021_04_07-11_37_54.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-203-2021_04_07-11_38_08.tar.zst'
INFO: removing backup 'backupstore:backup/vzdump-lxc-204-2021_04_07-11_38_31.tar.zst'
INFO: Finished Backup of VM 161 (00:00:32)
INFO: Backup finished at 2021-04-07 16:05:28
INFO: Backup job finished successfully

Also, it is not always CT 161 that is affected; on another host, CT 178 causes the exact same behavior. All hosts are standalone (no cluster) and back up to their own external backup storage. Not all hosts are on the most recent version, but all run at least 6.3.2. I fear that the so-far unaffected node I tested will show the exact same behavior once updated. Should we try this?
Thank you very much for providing all this information. It seems like the content listing does not correctly filter for certain VM IDs, as the prunebackups-buggedhost-161.txt output clearly shows.

Please run debsums libpve-storage-perl to check the integrity of the package's files (you might need to install debsums first).

The following is a short Perl script that basically does the same filtering by VM ID. It lists the files in the backup directory and then matches each (non-directory) entry against two patterns.
Code:
cat backup_list.pl
Code:
#!/usr/bin/perl

use strict;
use warnings;

use File::stat qw();
use POSIX;

use PVE::Storage;
use PVE::Storage::Plugin;

use constant COMPRESSOR_RE => 'gz|lzo|zst';

my $storeid = $ARGV[0];
my $vmid = $ARGV[1];
die "no store id provided\n" if !defined($storeid);
die "no vm id provided\n" if !defined($vmid);

my $cfg = PVE::Storage::config();
my $scfg = PVE::Storage::storage_config($cfg, $storeid);

my $path = PVE::Storage::Plugin->get_subdir($scfg, 'backup');
print "directory path is '$path'\n";

foreach my $fn (<$path/*>) {
    my $st = File::stat::stat($fn);

    next if (!$st || S_ISDIR($st->mode));

    print "filename: '$fn'\n";

    next if defined($vmid) && $fn !~  m/\S+-$vmid-\S+/;

    print "matched first pattern\n";

    next if $fn !~ m!/([^/]+\.(tgz|(?:(?:tar|vma)(?:\.(${\COMPRESSOR_RE}))?)))$!;

    print "and second pattern\n";
}

Please run it with perl backup_list.pl backupstore 161 and perl backup_list.pl backupstore 160 on the problematic host.
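As a side note on what the second pattern is meant to do: it only accepts real archive extensions. A quick check with made-up file names (same COMPRESSOR_RE as in the script above) shows which entries it accepts and which it skips:

```perl
#!/usr/bin/perl
# Quick check of which names the extension pattern accepts.
# The file names below are made up for illustration only.
use strict;
use warnings;

use constant COMPRESSOR_RE => 'gz|lzo|zst';

my $re = qr!/([^/]+\.(tgz|(?:(?:tar|vma)(?:\.(${\COMPRESSOR_RE}))?)))$!;

for my $fn (qw(
    /dump/a.tar.zst
    /dump/b.vma.gz
    /dump/c.tgz
    /dump/d.log
    /dump/e.tar.dat
)) {
    printf "%-16s %s\n", $fn, ($fn =~ $re) ? 'archive' : 'skipped';
}
```

This is why .log files and the stray .tar.dat file are listed but never reach "and second pattern" in the output below.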
 

Klaus Schmidt

Here is the requested output:
Code:
debsums libpve-storage-perl
/usr/sbin/pvesm                                                               OK
/usr/share/bash-completion/completions/pvesm                                  OK
/usr/share/doc/libpve-storage-perl/SOURCE                                     OK
/usr/share/doc/libpve-storage-perl/changelog.Debian.gz                        OK
/usr/share/doc/libpve-storage-perl/copyright                                  OK
/usr/share/man/man1/pvesm.1.gz                                                OK
/usr/share/perl5/PVE/API2/Disks.pm                                            OK
/usr/share/perl5/PVE/API2/Disks/Directory.pm                                  OK
/usr/share/perl5/PVE/API2/Disks/LVM.pm                                        OK
/usr/share/perl5/PVE/API2/Disks/LVMThin.pm                                    OK
/usr/share/perl5/PVE/API2/Disks/ZFS.pm                                        OK
/usr/share/perl5/PVE/API2/Storage/Config.pm                                   OK
/usr/share/perl5/PVE/API2/Storage/Content.pm                                  OK
/usr/share/perl5/PVE/API2/Storage/PruneBackups.pm                             OK
/usr/share/perl5/PVE/API2/Storage/Scan.pm                                     OK
/usr/share/perl5/PVE/API2/Storage/Status.pm                                   OK
/usr/share/perl5/PVE/CLI/pvesm.pm                                             OK
/usr/share/perl5/PVE/CephConfig.pm                                            OK
/usr/share/perl5/PVE/Diskmanage.pm                                            OK
/usr/share/perl5/PVE/Storage.pm                                               OK
/usr/share/perl5/PVE/Storage/CIFSPlugin.pm                                    OK
/usr/share/perl5/PVE/Storage/CephFSPlugin.pm                                  OK
/usr/share/perl5/PVE/Storage/DRBDPlugin.pm                                    OK
/usr/share/perl5/PVE/Storage/DirPlugin.pm                                     OK
/usr/share/perl5/PVE/Storage/GlusterfsPlugin.pm                               OK
/usr/share/perl5/PVE/Storage/ISCSIDirectPlugin.pm                             OK
/usr/share/perl5/PVE/Storage/ISCSIPlugin.pm                                   OK
/usr/share/perl5/PVE/Storage/LVMPlugin.pm                                     OK
/usr/share/perl5/PVE/Storage/LunCmd/Comstar.pm                                OK
/usr/share/perl5/PVE/Storage/LunCmd/Iet.pm                                    OK
/usr/share/perl5/PVE/Storage/LunCmd/Istgt.pm                                  OK
/usr/share/perl5/PVE/Storage/LunCmd/LIO.pm                                    OK
/usr/share/perl5/PVE/Storage/LvmThinPlugin.pm                                 OK
/usr/share/perl5/PVE/Storage/NFSPlugin.pm                                     OK
/usr/share/perl5/PVE/Storage/PBSPlugin.pm                                     OK
/usr/share/perl5/PVE/Storage/Plugin.pm                                        OK
/usr/share/perl5/PVE/Storage/RBDPlugin.pm                                     OK
/usr/share/perl5/PVE/Storage/ZFSPlugin.pm                                     OK
/usr/share/perl5/PVE/Storage/ZFSPoolPlugin.pm                                 OK
/usr/share/zsh/vendor-completions/_pvesm                                      OK

Code:
perl backup_list.pl backupstore 160
directory path is '/mnt/pve/backupstore/dump'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-132-2021_03_14-11_12_36.log'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-134-2021_04_03-11_13_13.log'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-136-2021_04_03-11_13_45.log'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-142-2021_03_14-11_16_25.log'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-142-2021_03_15-11_16_45.log'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-142-2021_03_16-11_14_50.log'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-142-2021_03_17-11_15_11.log'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-142-2021_03_18-11_14_53.log'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-142-2021_03_19-11_13_18.log'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-146-2021_03_14-11_18_12.log'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-146-2021_03_15-11_18_42.log'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-146-2021_03_16-11_16_34.log'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-146-2021_03_17-11_17_01.log'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-159-2021_03_28-01_14_15.log'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-160-2021_04_07-16_04_31.log'
matched first pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-160-2021_04_07-16_04_31.tar.zst'
matched first pattern
and second pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-161-2021_04_07-16_04_56.log'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-161-2021_04_07-16_04_56.tar.zst'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-169-2021_04_02-12_50_37.tar.dat'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-172-2021_03_14-11_27_31.log'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-172-2021_03_15-11_27_32.log'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-172-2021_03_16-11_25_13.log'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-172-2021_03_17-11_25_11.log'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-172-2021_03_18-11_25_32.log'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-172-2021_03_19-11_22_48.log'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-193-2021_03_28-03_21_08.log'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-194-2021_03_28-11_30_34.log'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-194-2021_03_29-11_30_00.log'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-194-2021_03_30-11_31_53.log'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-194-2021_03_31-11_31_24.log'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-206-2021_04_07-11_38_39.log'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-206-2021_04_07-11_38_39.tar.zst'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-208-2021_04_07-11_38_55.log'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-208-2021_04_07-11_38_55.tar.zst'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-209-2021_04_07-11_39_05.log'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-209-2021_04_07-11_39_05.tar.zst'

Code:
perl backup_list.pl backupstore 161
directory path is '/mnt/pve/backupstore/dump'
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-132-2021_03_14-11_12_36.log'
matched first pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-134-2021_04_03-11_13_13.log'
matched first pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-136-2021_04_03-11_13_45.log'
matched first pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-142-2021_03_14-11_16_25.log'
matched first pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-142-2021_03_15-11_16_45.log'
matched first pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-142-2021_03_16-11_14_50.log'
matched first pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-142-2021_03_17-11_15_11.log'
matched first pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-142-2021_03_18-11_14_53.log'
matched first pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-142-2021_03_19-11_13_18.log'
matched first pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-146-2021_03_14-11_18_12.log'
matched first pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-146-2021_03_15-11_18_42.log'
matched first pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-146-2021_03_16-11_16_34.log'
matched first pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-146-2021_03_17-11_17_01.log'
matched first pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-159-2021_03_28-01_14_15.log'
matched first pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-160-2021_04_07-16_04_31.log'
matched first pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-160-2021_04_07-16_04_31.tar.zst'
matched first pattern
and second pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-161-2021_04_07-16_04_56.log'
matched first pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-161-2021_04_07-16_04_56.tar.zst'
matched first pattern
and second pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-169-2021_04_02-12_50_37.tar.dat'
matched first pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-172-2021_03_14-11_27_31.log'
matched first pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-172-2021_03_15-11_27_32.log'
matched first pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-172-2021_03_16-11_25_13.log'
matched first pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-172-2021_03_17-11_25_11.log'
matched first pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-172-2021_03_18-11_25_32.log'
matched first pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-172-2021_03_19-11_22_48.log'
matched first pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-193-2021_03_28-03_21_08.log'
matched first pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-194-2021_03_28-11_30_34.log'
matched first pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-194-2021_03_29-11_30_00.log'
matched first pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-194-2021_03_30-11_31_53.log'
matched first pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-194-2021_03_31-11_31_24.log'
matched first pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-206-2021_04_07-11_38_39.log'
matched first pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-206-2021_04_07-11_38_39.tar.zst'
matched first pattern
and second pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-208-2021_04_07-11_38_55.log'
matched first pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-208-2021_04_07-11_38_55.tar.zst'
matched first pattern
and second pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-209-2021_04_07-11_39_05.log'
matched first pattern
filename: '/mnt/pve/backupstore/dump/vzdump-lxc-209-2021_04_07-11_39_05.tar.zst'
matched first pattern
and second pattern
 

Fabian_E

Proxmox Staff Member
Very strange; this may hint at an issue or incompatibility with Perl itself. Could you run perl --version and debsums -s perl (with -s it only prints modified or corrupt files)?

Please also run the following
Code:
cat reproducer.pl
Code:
#!/usr/bin/perl

use strict;
use warnings;

my $vmid = 161;
my $fn = "/mnt/pve/backupstore/dump/vzdump-lxc-160-2021_04_07-16_04_31.tar.zst";

if (defined($vmid) && $fn !~  m/\S+-$vmid-\S+/) {
    print "no match\n";
} else {
    print "match\n";
}

$fn = "/mnt/pve/backupstore/dump/vzdump-lxc-161-2021_04_07-16_04_56.tar.zst";

if (defined($vmid) && $fn !~  m/\S+-$vmid-\S+/) {
    print "no match\n";
} else {
    print "match\n";
}
with
Code:
perl reproducer.pl
 

Klaus Schmidt

The perl version is identical on all hosts, with or without the bug:
Code:
perl --version

This is perl 5, version 28, subversion 1 (v5.28.1) built for x86_64-linux-gnu-thread-multi
(with 65 registered patches, see perl -V for more detail)

Copyright 1987-2018, Larry Wall

Perl may be copied only under the terms of either the Artistic License or the
GNU General Public License, which may be found in the Perl 5 source kit.

Complete documentation for Perl, including FAQ lists, should be found on
this system using "man perl" or "perldoc perl".  If you have access to the
Internet, point your browser at http://www.perl.org/, the Perl Home Page.
There is also no corruption:
Code:
debsums -s perl
(no output)

I ran this modified version of your script:
Code:
#!/usr/bin/perl

use strict;
use warnings;

my $vmid = 161;
my $fn = "/mnt/pve/backupstore/dump/vzdump-lxc-160-2021_04_07-16_04_31.tar.zst";

if (defined($vmid) && $fn !~  m/\S+-$vmid-\S+/) {
    print "no match\n";
} else {
    print "match\n";
}

$fn = "/mnt/pve/backupstore/dump/vzdump-lxc-161-2021_04_07-16_04_56.tar.zst";

if (defined($vmid) && $fn !~  m/\S+-$vmid-\S+/) {
    print "no match\n";
} else {
    print "match\n";
}

$fn = "/mnt/pve/myactualbackupstore-161-1/dump/vzdump-lxc-160-2021_04_07-16_04_31.tar.zst";

if (defined($vmid) && $fn !~  m/\S+-$vmid-\S+/) {
    print "no match\n";
} else {
    print "match\n";
}

$fn = "/mnt/pve/myactualbackupstore-161-1/dump/vzdump-lxc-161-2021_04_07-16_04_56.tar.zst";

if (defined($vmid) && $fn !~  m/\S+-$vmid-\S+/) {
    print "no match\n";
} else {
    print "match\n";
}

This was the output:
Code:
perl reproducer.pl
no match
match
match
match

Looking at my backup storage, I noticed that its name includes "-161-". On the other problematic hosts, the backup storage name likewise includes a number such as "-178-", matching the buggy VMID there. Other servers are not affected because their number is not yet in use as a VMID.

So basically, you should not use storage names like "...-number-...", or you can run into this bug once that number is used as a VMID that gets backed up.
Renaming the storage to something like just "backup" solves this for me. We will adjust our naming scheme for backup storages to fix this on all hosts.

Maybe future versions of Proxmox could warn about this, or just forbid "-" in storage names altogether. Or match the file name only, not the whole path.
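To illustrate that last suggestion (a hypothetical sketch, not the actual Proxmox patch): matching the VMID against the basename only makes the storage directory name irrelevant, so a path containing "-161-" can no longer cause a false positive.

```perl
#!/usr/bin/perl
# Hypothetical sketch, not the actual Proxmox patch: match the VMID against
# the basename only, so a storage path containing "-161-" cannot interfere.
use strict;
use warnings;
use File::Basename qw(basename);

my $vmid = 161;
# Archive for CT 160, stored under a directory whose name contains "-161-".
my $fn = '/mnt/pve/backupstore-161-1/dump/vzdump-lxc-160-2021_04_07-16_04_31.tar.zst';

my $full_path_match = ($fn =~ m/\S+-$vmid-\S+/) ? 1 : 0;            # buggy check
my $basename_match  = (basename($fn) =~ m/\S+-$vmid-\S+/) ? 1 : 0;  # fixed check

print "full path matches: $full_path_match\n";  # 1 - false positive
print "basename matches:  $basename_match\n";   # 0 - correct
```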

Thank you very much for your time and helping me understand this bug!
 

Fabian_E

The perl version is identical on all hosts, with or without the bug. [...] There is also no corruption. [...]
Luckily not an issue with perl after all.

Maybe future versions of Proxmox could warn about this, or just forbid "-" in storage names altogether.
It's not about the storage name but about the actual path (which might be different). And we cannot really forbid "-" because of backwards compatibility.

Or match the file name only, not the whole path.
I already sent a patch doing just that after my first post. It seems we could have saved a bit of time if you had mentioned that the actual storage name/path is different, or if I had mentioned the patch. But I'm glad the mystery is solved now.
