[SOLVED] backing up template did not work

RobFantini

FYI
on a node that had 10 successful backups to PBS, the template backup failed

Code:
command '/usr/bin/proxmox-backup-client backup --repository pbs-user@pbs@10.1.10.80:backups --backup-type vm --backup-id 118 --backup-time 1594492875 qemu-server.conf:/var/tmp/vzdumptmp747002/qemu-server.conf 'drive-scsi0.img:rbd:nvme-4tb/base-118-disk-0:conf=/etc/pve/ceph.conf'' failed: exit code 255
 
Code:
INFO: starting template backup
INFO: /usr/bin/proxmox-backup-client backup --repository pbs-user@pbs@10.1.10.80:backups --backup-type vm --backup-id 118 --backup-time 1594492875 qemu-server.conf:/var/tmp/vzdumptmp747002/qemu-server.conf drive-scsi0.img:rbd:nvme-4tb/base-118-disk-0:conf=/etc/pve/ceph.conf
INFO: Error: unable to access 'rbd:nvme-4tb/base-118-disk-0:conf=/etc/pve/ceph.conf' - No such file or directory (os error 2)
ERROR: Backup of VM 118 failed - command '/usr/bin/proxmox-backup-client backup --repository pbs-user@pbs@10.1.10.80:backups --backup-type vm --backup-id 118 --backup-time 1594492875 qemu-server.conf:/var/tmp/vzdumptmp747002/qemu-server.conf 'drive-scsi0.img:rbd:nvme-4tb/base-118-disk-0:conf=/etc/pve/ceph.conf'' failed: exit code 255
INFO: Failed at 2020-07-11 14:41:15
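For context, the log shows proxmox-backup-client being handed the raw 'rbd:...' string as if it were a file path, which would explain the "No such file or directory". A quick sketch for ruling out a genuinely missing image on the Ceph side (assuming the rbd CLI is installed, with pool and image names taken from the failing command above):

```shell
# Confirm the template's base image still exists in the Ceph pool
# (pool and image names copied from the failing command above)
rbd -p nvme-4tb ls | grep base-118-disk-0

# proxmox-backup-client expects a regular file or block device per archive;
# the 'rbd:...' string in the log is a QEMU pseudo-path, so passing it
# through verbatim yields "No such file or directory".
```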
 
Thanks for the report. We'll look into it; backing up templates uses its own code path, so something could be off there.

drive-scsi0.img:rbd:nvme-4tb/base-118-disk-0:conf=/etc/pve/ceph.conf INFO: Error: unable to access 'rbd:nvme-4tb/base-118-disk-0:conf=/etc/pve/ceph.conf' - No such file or directory (os error 2)

That line looks a bit strange to me; could you please also post the template's config?
 
Code:
bootdisk: scsi0
cores: 1
ide2: none,media=cdrom
memory: 2048
name: template-buster-kvm
net0: virtio=BE:4F:53:7F:EF:94,bridge=vmbr0,tag=3
numa: 0
ostype: l26
protection: 1
scsi0: nvme-4tb:base-118-disk-0,cache=writeback,discard=on,size=8G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=2e2be902-3254-4c20-a7d6-757baa80397e
sockets: 2
template: 1
vmgenid: 8e023732-3ba3-4438-b86f-b6804b976e85
 
The fix has already been available up to pve-enterprise since last week (in qemu-server 6.2-14). If you still have problems backing up VM templates, please post the full config, pveversion -v output and logs!
 

Here the log:
INFO: starting new backup job: vzdump 144 --mode snapshot --node hal9030 --storage pvebackup --remove 0
INFO: Starting Backup of VM 144 (qemu)
INFO: Backup started at 2020-09-07 09:26:06
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: Win7x64
INFO: include disk 'ide0' 'vmdata1zfs:base-144-disk-0' 48G
INFO: creating Proxmox Backup Server archive 'vm/144/2020-09-07T07:26:06Z'
INFO: starting kvm to execute backup task
kvm: -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=100: Block node is read-only
ERROR: Backup of VM 144 failed - start failed: QEMU exited with code 1
INFO: Failed at 2020-09-07 09:26:07
INFO: Backup job finished with errors
TASK ERROR: job errors

pveversion:

proxmox-ve: 6.2-1 (running kernel: 5.4.44-2-pve)
pve-manager: 6.2-11 (running version: 6.2-11/22fb4983)
pve-kernel-5.4: 6.2-6
pve-kernel-helper: 6.2-6
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.60-1-pve: 5.4.60-1
pve-kernel-5.4.55-1-pve: 5.4.55-1
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.4.41-1-pve: 5.4.41-1
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-4.13.13-5-pve: 4.13.13-38
pve-kernel-4.13.13-4-pve: 4.13.13-35
pve-kernel-4.13.13-3-pve: 4.13.13-34
pve-kernel-4.13.13-2-pve: 4.13.13-33
pve-kernel-4.13.13-1-pve: 4.13.13-31
pve-kernel-4.13.8-3-pve: 4.13.8-30
pve-kernel-4.13.8-2-pve: 4.13.8-28
pve-kernel-4.13.8-1-pve: 4.13.8-27
pve-kernel-4.13.4-1-pve: 4.13.4-26
pve-kernel-4.10.17-4-pve: 4.10.17-24
pve-kernel-4.10.17-2-pve: 4.10.17-20
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve2
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-2
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.2-1
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-6
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-10
pve-cluster: 6.1-8
pve-container: 3.1-13
pve-docs: 6.2-5
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-13
pve-xtermjs: 4.7.0-2
pve-zsync: 2.0-3
qemu-server: 6.2-14
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.4-pve1
 
Can I just install pve-qemu-kvm 5.1 from pvetest? How do I do this (can I specify the pvetest repository in an apt update command)? Or do I have to switch to pvetest completely?
 
You can add the pvetest repository to the sources.list: https://pve.proxmox.com/wiki/Package_Repositories#sysadmin_test_repo

Then do the following steps:
Bash:
apt update
apt install pve-qemu-kvm

After the above finishes, you can drop the entry for the pvetest repository and run apt update again. This way, only the pve-qemu-kvm package is upgraded.

Naturally, you could also download the respective .deb file and install it with apt install ./path/to/package.deb, but that is more error-prone (e.g., if the package needs updated dependencies).
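The steps above can be sketched as follows; this is a minimal sketch assuming a PVE 6.x system on Debian Buster, with the repository line as documented on the linked wiki page:

```shell
# Temporarily enable the pvetest repository (PVE 6.x on Debian Buster)
echo "deb http://download.proxmox.com/debian/pve buster pvetest" \
    > /etc/apt/sources.list.d/pvetest.list

apt update
apt install pve-qemu-kvm   # pulls the newer build from pvetest

# Drop the repository again so future upgrades stay on the stable repo
rm /etc/apt/sources.list.d/pvetest.list
apt update
```

Only pve-qemu-kvm (plus whatever dependencies apt resolves for it) moves to the pvetest version; everything else keeps following the configured stable repositories.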
 
Still not working:

root@xxxxx:~# pveversion -v | grep qemu
pve-qemu-kvm: 5.1.0-2
qemu-server: 6.2-14

INFO: starting new backup job: vzdump 144 --node hal9030 --remove 0 --storage pvebackup --mode snapshot
INFO: Starting Backup of VM 144 (qemu)
INFO: Backup started at 2020-09-18 20:40:48
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: Win7x64
INFO: include disk 'ide0' 'vmdata1zfs:base-144-disk-0' 48G
INFO: creating Proxmox Backup Server archive 'vm/144/2020-09-18T18:40:48Z'
INFO: starting kvm to execute backup task
kvm: -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=100: Block node is read-only
ERROR: Backup of VM 144 failed - start failed: QEMU exited with code 1
INFO: Failed at 2020-09-18 20:40:49
INFO: Backup job finished with errors
TASK ERROR: job errors
 
Does not work here with

pve-qemu-kvm: 5.1.0-3
qemu-server: 6.2-17

Log:
INFO: starting new backup job: vzdump 144 --node hal9030 --storage pvebackup --remove 0 --mode snapshot
INFO: Starting Backup of VM 144 (qemu)
INFO: Backup started at 2020-10-30 05:14:14
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: Win7x64
INFO: include disk 'ide0' 'vmdata1zfs:base-144-disk-0' 48G
INFO: creating Proxmox Backup Server archive 'vm/144/2020-10-30T04:14:14Z'
INFO: starting kvm to execute backup task
kvm: -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=100: Block node is read-only
ERROR: Backup of VM 144 failed - start failed: QEMU exited with code 1
INFO: Failed at 2020-10-30 05:14:15
INFO: Backup job finished with errors
TASK ERROR: job errors
 
Same issue here: I have only one template, and its backup fails.

The backup of a VM that is a linked clone of the template works fine.

Code:
pve-qemu-kvm: 5.1.0-6
qemu-server: 6.2-19

On first try yesterday:
Code:
164: 2020-11-15 18:46:51 INFO: Starting Backup of VM 164 (qemu)
164: 2020-11-15 18:46:51 INFO: status = stopped
164: 2020-11-15 18:46:51 INFO: backup mode: stop
164: 2020-11-15 18:46:51 INFO: ionice priority: 7
164: 2020-11-15 18:46:51 INFO: VM Name: W10-Template
164: 2020-11-15 18:46:51 INFO: include disk 'ide0' 'SSD:base-164-disk-0' 100G
164: 2020-11-15 18:46:51 INFO: creating Proxmox Backup Server archive 'vm/164/2020-11-15T17:46:51Z'
164: 2020-11-15 18:46:51 INFO: starting kvm to execute backup task
164: 2020-11-15 18:46:53 ERROR: Backup of VM 164 failed - start failed: QEMU exited with code 1

On scheduled backup this night:
Code:
164: 2020-11-16 05:03:57 INFO: Starting Backup of VM 164 (qemu)
164: 2020-11-16 05:03:57 INFO: status = stopped
164: 2020-11-16 05:03:57 INFO: backup mode: stop
164: 2020-11-16 05:03:57 INFO: ionice priority: 7
164: 2020-11-16 05:03:57 INFO: VM Name: W10-Template
164: 2020-11-16 05:03:57 INFO: include disk 'ide0' 'SSD:base-164-disk-0' 100G
164: 2020-11-16 05:03:57 INFO: creating Proxmox Backup Server archive 'vm/164/2020-11-16T04:03:57Z'
164: 2020-11-16 05:03:57 INFO: starting kvm to execute backup task
164: 2020-11-16 05:04:00 ERROR: Backup of VM 164 failed - start failed: QEMU exited with code 1

There are strange errors in the syslog for each backup.
The errors are the same in yesterday's first backup and this morning's scheduled backup.
I can't find these errors anywhere else in the syslog.

Code:
Nov 16 05:03:57 pve02 vzdump[91703]: INFO: starting new backup job: vzdump --mode snapshot --mailto support@domain.tld --quiet 1 --pool VDI --mailnotification failure --storage PBS_NCP
Nov 16 05:03:57 pve02 vzdump[91699]: <root@pam> end task UPID:pve02:00016635:07168B71:5FB1F942:vzdump::root@pam: OK
Nov 16 05:03:57 pve02 vzdump[91703]: INFO: Starting Backup of VM 164 (qemu)
Nov 16 05:03:58 pve02 systemd[1]: Started 164.scope.
Nov 16 05:03:58 pve02 systemd-udevd[94762]: Using default interface naming scheme 'v240'.
Nov 16 05:03:58 pve02 systemd-udevd[94762]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 05:03:58 pve02 systemd-udevd[94762]: Could not generate persistent MAC address for tap164i0: No such file or directory
Nov 16 05:03:59 pve02 kernel: [1189413.278271] device tap164i0 entered promiscuous mode
Nov 16 05:03:59 pve02 systemd-udevd[94762]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 05:03:59 pve02 systemd-udevd[94762]: Could not generate persistent MAC address for fwbr164i0: No such file or directory
Nov 16 05:03:59 pve02 systemd-udevd[94768]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 05:03:59 pve02 systemd-udevd[94768]: Using default interface naming scheme 'v240'.
Nov 16 05:03:59 pve02 systemd-udevd[94768]: Could not generate persistent MAC address for fwpr164p0: No such file or directory
Nov 16 05:03:59 pve02 systemd-udevd[94766]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 16 05:03:59 pve02 systemd-udevd[94766]: Using default interface naming scheme 'v240'.
Nov 16 05:03:59 pve02 systemd-udevd[94766]: Could not generate persistent MAC address for fwln164i0: No such file or directory
Nov 16 05:03:59 pve02 kernel: [1189413.336386] fwbr164i0: port 1(fwln164i0) entered blocking state
Nov 16 05:03:59 pve02 kernel: [1189413.336390] fwbr164i0: port 1(fwln164i0) entered disabled state
Nov 16 05:03:59 pve02 kernel: [1189413.336565] device fwln164i0 entered promiscuous mode
Nov 16 05:03:59 pve02 kernel: [1189413.336663] fwbr164i0: port 1(fwln164i0) entered blocking state
Nov 16 05:03:59 pve02 kernel: [1189413.336666] fwbr164i0: port 1(fwln164i0) entered forwarding state
Nov 16 05:03:59 pve02 kernel: [1189413.344312] vmbr0: port 27(fwpr164p0) entered blocking state
Nov 16 05:03:59 pve02 kernel: [1189413.344316] vmbr0: port 27(fwpr164p0) entered disabled state
Nov 16 05:03:59 pve02 kernel: [1189413.344519] device fwpr164p0 entered promiscuous mode
Nov 16 05:03:59 pve02 kernel: [1189413.345290] vmbr0: port 27(fwpr164p0) entered blocking state
Nov 16 05:03:59 pve02 kernel: [1189413.345294] vmbr0: port 27(fwpr164p0) entered forwarding state
Nov 16 05:03:59 pve02 kernel: [1189413.370225] fwbr164i0: port 2(tap164i0) entered blocking state
Nov 16 05:03:59 pve02 kernel: [1189413.370229] fwbr164i0: port 2(tap164i0) entered disabled state
Nov 16 05:03:59 pve02 kernel: [1189413.370438] fwbr164i0: port 2(tap164i0) entered blocking state
Nov 16 05:03:59 pve02 kernel: [1189413.370441] fwbr164i0: port 2(tap164i0) entered forwarding state
Nov 16 05:03:59 pve02 kernel: [1189413.932733] fwbr164i0: port 2(tap164i0) entered disabled state
Nov 16 05:04:00 pve02 systemd[1]: Starting Proxmox VE replication runner...
Nov 16 05:04:00 pve02 kernel: [1189413.969206] fwbr164i0: port 1(fwln164i0) entered disabled state
Nov 16 05:04:00 pve02 kernel: [1189413.969641] vmbr0: port 27(fwpr164p0) entered disabled state
Nov 16 05:04:00 pve02 kernel: [1189413.971309] device fwln164i0 left promiscuous mode
Nov 16 05:04:00 pve02 kernel: [1189413.971316] fwbr164i0: port 1(fwln164i0) entered disabled state
Nov 16 05:04:00 pve02 kernel: [1189414.008051] device fwpr164p0 left promiscuous mode
Nov 16 05:04:00 pve02 kernel: [1189414.008056] vmbr0: port 27(fwpr164p0) entered disabled state
Nov 16 05:04:00 pve02 systemd[1]: 164.scope: Succeeded.
Nov 16 05:04:00 pve02 vzdump[91703]: ERROR: Backup of VM 164 failed - start failed: QEMU exited with code 1
 
The bug report seems to link the issue to SATA/IDE (vs. SCSI), but the errors in the syslog are related to the NIC, aren't they?
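The udev and fwbr/tap messages above are the usual network setup and teardown noise emitted whenever a VM (or the backup helper) is started, so they are most likely unrelated. A sketch for pulling just the actionable failure lines out of the syslog, assuming the standard PVE log location:

```shell
# The bridge/udev chatter appears on every VM start; the actionable lines
# are the kvm error and the vzdump ERROR.
grep -E "kvm: |ERROR: Backup" /var/log/syslog
```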
 