[SOLVED] Backup config problem

LooneyTunes

Active Member
Hi,

I just reconfigured my backup job (Datacenter/Backup), excluding one VM and adding another. I hit 'OK' to save the new settings, then selected the job and clicked the "Run now" button. But...

It acts really strangely. It _does_ try to back up the excluded machine (101), and entirely neglects the added one (103). Checking the log, Proxmox generates errors for the excluded one, even though it is just turned off and excluded. I also note that the first log line does not reflect the changes I made before starting the job...

Any idea if this is correct behaviour, or have I wrongly assumed the "Run now" button would use my current configuration? Thanks

Code:
Aug 16 13:20:15 pve pvedaemon[17200]: INFO: starting new backup job: vzdump 101 100 102 --quiet 1 --all 0 --mailnotification failure --node pve --mode snapshot --compress zstd --storage local
Aug 16 13:20:15 pve pvedaemon[17200]: INFO: Starting Backup of VM 100 (qemu)
Aug 16 13:21:01 pve cron[896]: (*system*vzdump) RELOAD (/etc/cron.d/vzdump)
Aug 16 13:22:23 pve pvedaemon[17200]: INFO: Finished Backup of VM 100 (00:02:08)
Aug 16 13:22:23 pve pvedaemon[17200]: INFO: Starting Backup of VM 101 (qemu)
Aug 16 13:22:24 pve systemd[1]: Started 101.scope.
Aug 16 13:22:24 pve systemd-udevd[17701]: Using default interface naming scheme 'v247'.
Aug 16 13:22:24 pve systemd-udevd[17701]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Aug 16 13:22:24 pve kernel: device tap101i0 entered promiscuous mode
Aug 16 13:22:24 pve systemd-udevd[17700]: Using default interface naming scheme 'v247'.
Aug 16 13:22:24 pve systemd-udevd[17700]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Aug 16 13:22:24 pve systemd-udevd[17700]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Aug 16 13:22:24 pve systemd-udevd[17701]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Aug 16 13:22:24 pve kernel: fwbr101i0: port 1(fwln101i0) entered blocking state
Aug 16 13:22:24 pve kernel: fwbr101i0: port 1(fwln101i0) entered disabled state
Aug 16 13:22:24 pve kernel: device fwln101i0 entered promiscuous mode
Aug 16 13:22:24 pve kernel: fwbr101i0: port 1(fwln101i0) entered blocking state
Aug 16 13:22:24 pve kernel: fwbr101i0: port 1(fwln101i0) entered forwarding state
Aug 16 13:22:24 pve kernel: vmbr0: port 4(fwpr101p0) entered blocking state
Aug 16 13:22:24 pve kernel: vmbr0: port 4(fwpr101p0) entered disabled state
Aug 16 13:22:24 pve kernel: device fwpr101p0 entered promiscuous mode
Aug 16 13:22:24 pve kernel: vmbr0: port 4(fwpr101p0) entered blocking state
Aug 16 13:22:24 pve kernel: vmbr0: port 4(fwpr101p0) entered forwarding state
Aug 16 13:22:24 pve kernel: fwbr101i0: port 2(tap101i0) entered blocking state
Aug 16 13:22:24 pve kernel: fwbr101i0: port 2(tap101i0) entered disabled state
Aug 16 13:22:24 pve kernel: fwbr101i0: port 2(tap101i0) entered blocking state
Aug 16 13:22:24 pve kernel: fwbr101i0: port 2(tap101i0) entered forwarding state
Aug 16 13:23:40 pve kernel: fwbr101i0: port 2(tap101i0) entered disabled state
Aug 16 13:23:40 pve kernel: fwbr101i0: port 1(fwln101i0) entered disabled state
Aug 16 13:23:40 pve kernel: vmbr0: port 4(fwpr101p0) entered disabled state
Aug 16 13:23:40 pve kernel: device fwln101i0 left promiscuous mode
Aug 16 13:23:40 pve kernel: fwbr101i0: port 1(fwln101i0) entered disabled state
Aug 16 13:23:40 pve kernel: device fwpr101p0 left promiscuous mode
Aug 16 13:23:40 pve kernel: vmbr0: port 4(fwpr101p0) entered disabled state
Aug 16 13:23:40 pve qmeventd[595]: read: Connection reset by peer
Aug 16 13:23:41 pve systemd[1]: 101.scope: Succeeded.
Aug 16 13:23:41 pve systemd[1]: 101.scope: Consumed 13.361s CPU time.
Aug 16 13:23:41 pve qmeventd[18093]: Starting cleanup for 101
Aug 16 13:23:41 pve qmeventd[18093]: Finished cleanup for 101
Aug 16 13:23:41 pve pvedaemon[17200]: INFO: Finished Backup of VM 101 (00:01:18)
Aug 16 13:23:41 pve pvedaemon[17200]: INFO: Starting Backup of VM 102 (qemu)
Aug 16 13:24:19 pve pvedaemon[17200]: INFO: Finished Backup of VM 102 (00:00:38)
Aug 16 13:24:19 pve pvedaemon[17200]: INFO: Backup job finished successfully
Aug 16 13:24:19 pve pvedaemon[956]: <root@pam> end task UPID:pve:00004330:00058F9E:611A49EF:vzdump::root@pam: OK
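The first log line is the telling detail: the job ran `vzdump 101 100 102`, i.e. the pre-edit VM list, not the reconfigured one. A quick sanity check one could script (hypothetical helper, not part of Proxmox) to compare the logged VM list against what was expected:

```python
# Hypothetical sanity check: compare the VM list vzdump actually logged
# against the list expected after reconfiguring the job.
import re

log_line = ("Aug 16 13:20:15 pve pvedaemon[17200]: INFO: starting new backup job: "
            "vzdump 101 100 102 --quiet 1 --all 0 --mailnotification failure "
            "--node pve --mode snapshot --compress zstd --storage local")

# VM IDs appear between the "vzdump" token and the first "--" option.
m = re.search(r"vzdump ((?:\d+ ?)+)--", log_line)
ran = set(m.group(1).split())

expected = {"100", "102", "103"}  # 101 was unchecked, 103 was added

print("ran but should be excluded:", ran - expected)  # {'101'}
print("expected but not run:", expected - ran)        # {'103'}
```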
 
hi,

can you show us /etc/vzdump.cron?

also what is your pveversion -v
 
Hi, there is no file called '/etc/vzdump.cron', which is perhaps a clue. I reinstalled the system the other day from the 7.01 ISO, so it should be good, I hope.
Code:
root@pve:~# cat /etc/vzdump.cron
cat: /etc/vzdump.cron: No such file or directory

root@pve:~# pveversion -v
proxmox-ve: 7.0-2 (running kernel: 5.11.22-3-pve)
pve-manager: 7.0-11 (running version: 7.0-11/63d82f4e)
pve-kernel-5.11: 7.0-6
pve-kernel-helper: 7.0-6
pve-kernel-5.11.22-3-pve: 5.11.22-6
pve-kernel-5.11.22-1-pve: 5.11.22-2
ceph-fuse: 15.2.13-pve1
corosync: 3.1.2-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.21-pve1
libproxmox-acme-perl: 1.2.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-5
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-10
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.8-1
proxmox-backup-file-restore: 2.0.8-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.3-6
pve-cluster: 7.0-3
pve-container: 4.0-9
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-2
pve-firmware: 3.2-4
pve-ha-manager: 3.3-1
pve-i18n: 2.4-1
pve-qemu-kvm: 6.0.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-13
smartmontools: 7.2-1
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.5-pve1
root@pve:~#
 
ah sorry, i meant /etc/pve/vzdump.cron :)


yes should be fine
That looks like what I would have expected. But this is not what Proxmox did, even though it showed the correct settings (below) before launching the backup.
Code:
root@pve:~# cat /etc/pve/vzdump.cron
# cluster wide vzdump cron schedule
# Automatically generated file - do not edit

PATH="/usr/sbin:/usr/bin:/sbin:/bin"

0 0 * * 3,6         root vzdump 103 100 102 --storage local --compress zstd --mode snapshot --mailnotification failure --quiet 1
root@pve:~#
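For reference, that cron line decodes as "at 00:00 on weekdays 3 and 6 (Wednesday and Saturday, in standard cron numbering), back up VMs 103, 100 and 102". An illustrative parser for the line shown above:

```python
# Illustrative: decode the vzdump.cron schedule line shown above.
line = ("0 0 * * 3,6         root vzdump 103 100 102 --storage local "
        "--compress zstd --mode snapshot --mailnotification failure --quiet 1")

fields = line.split()
minute, hour, dom, month, dow = fields[:5]

# Standard cron day-of-week numbering: 0 = Sunday.
weekdays = {"0": "Sun", "1": "Mon", "2": "Tue", "3": "Wed",
            "4": "Thu", "5": "Fri", "6": "Sat"}
days = [weekdays[d] for d in dow.split(",")]

# VM IDs follow the "vzdump" token, up to the first "--" option.
i = fields.index("vzdump")
vmids = []
for tok in fields[i + 1:]:
    if tok.startswith("--"):
        break
    vmids.append(tok)

print(f"runs at {hour:0>2}:{minute:0>2} on {', '.join(days)}")  # 00:00 on Wed, Sat
print("backs up VMs:", vmids)                                   # ['103', '100', '102']
```

So the on-disk config does contain the new selection (103 in, 101 out), which is why the second run behaved correctly.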
 
excluding one, adding another VM.
when you say exclude, do you mean you unchecked an already included VM (100?). just asking since there's also the option to specifically exclude selected VMs and backup the rest.

how does this look on the GUI? does it act the same way if you hit "Run now" again?
 
Yes, I unchecked it (101); I didn't know there was an exclusion function as well. (Really like Proxmox!) I will run it again now.
 
This time it worked as expected: it backed up 100, 102 & 103 (skipping 101). I have an SSD under no particular load, so that alone should not cause any delay; otherwise I would have guessed some cache not being updated in time before I triggered the first backup after reconfiguration. Seems fine now :)
 
This time it worked as expected, it backed 100, 102 & 103, (skipping 101).
great, maybe it was a one-time fluke then...
otherwise I would have guessed some cache not being updated in time for me to trigger the first backup after reconfiguration
could still be the case, the /etc/pve/ filesystem is synchronized between the cluster nodes.

but if it's not reproducible then it should be okay (i couldn't reproduce it either)

thanks for reporting anyway!
 