Issues backing up VMs

urlsam

New Member
Jan 12, 2021
Hi all,

I've installed Proxmox Backup Server, configured my datastores, and added a user with permission to back up.

I followed the guide on your website to set up a datastore on a Proxmox VE server running the latest release.
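
For reference, as far as I can tell the PVE side of the guide boils down to something along these lines (names redacted the same way as in the log below, and the fingerprint is just a placeholder; I may have done it through the GUI rather than this exact command):

Code:
pvesm add pbs XXXX-disk1-datastore1 \
    --server XXXX \
    --datastore disk1-datastore1 \
    --username pvebackup@pbs \
    --fingerprint <datastore fingerprint>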

LXC containers back up fine; however, when backing up a VM, I am presented with the following error.

Code:
INFO: starting new backup job: vzdump 104 --node XXXX --compress zstd --mode stop --remove 0 --storage XXXX-disk1-datastore1
INFO: Starting Backup of VM 104 (qemu)
INFO: Backup started at 2021-01-12 17:49:13
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: test2
INFO: include disk 'scsi0' 'ceph_low:vm-104-disk-0' 32G
INFO: creating pbs archive on storage 'XXXX-disk1-datastore1'
INFO: starting kvm to execute backup task
ERROR: VM 104 qmp command 'backup' failed - proxmox_backup_new failed: unable to parse repository url 'pvebackup@pbs@XXXX:disk1-datastore1'
INFO: stopping kvm after backup task
ERROR: Backup of VM 104 failed - VM 104 qmp command 'backup' failed - proxmox_backup_new failed: unable to parse repository url 'pvebackup@pbs@XXXX:disk1-datastore1'
INFO: Failed at 2021-01-12 17:49:16
INFO: Backup job finished with errors
TASK ERROR: job errors

The user in question for the moment has the "Admin" permission, and it is set to propagate.

Can anyone shed any light on this error? It's odd that LXC backups and backups with the client work fine, but VM backups do not.
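
For what it's worth, the repository string in the error is just user@realm@server:datastore ('pvebackup@pbs' being the user and realm), so my storage entry in /etc/pve/storage.cfg should look roughly like this (redacted the same way as the log; fingerprint omitted). I wonder if something trips over the second '@':

Code:
pbs: XXXX-disk1-datastore1
        datastore disk1-datastore1
        server XXXX
        content backup
        username pvebackup@pbs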
 
Please include your 'pveversion -v' and 'qm status 104 --verbose' output!
 
Thanks for your reply.

Code:
proxmox-ve: 6.2-1 (running kernel: 5.4.34-1-pve)
pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
pve-kernel-5.4: 6.2-1
pve-kernel-helper: 6.2-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph: 14.2.16-pve1
ceph-fuse: 14.2.16-pve1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.3
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-2
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-1
pve-cluster: 6.1-8
pve-container: 3.1-5
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-2
pve-qemu-kvm: 5.0.0-2
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1

And the 'qm status 104 --verbose' output is:

Code:
balloon: 2147483648
ballooninfo:
        actual: 2147483648
        max_mem: 2147483648
blockstat:
        scsi0:
                unmap_bytes: 0
                failed_wr_operations: 0
                rd_operations: 1
                failed_flush_operations: 0
                unmap_merged: 0
                wr_operations: 0
                invalid_unmap_operations: 0
                failed_unmap_operations: 0
                invalid_wr_operations: 0
                rd_bytes: 512
                rd_merged: 0
                idle_time_ns: 12491289053
                rd_total_time_ns: 1626297
                wr_highest_offset: 0
                flush_total_time_ns: 0
                unmap_operations: 0
                timed_stats:
                wr_total_time_ns: 0
                invalid_flush_operations: 0
                flush_operations: 0
                unmap_total_time_ns: 0
                failed_rd_operations: 0
                invalid_rd_operations: 0
                wr_merged: 0
                wr_bytes: 0
        ide2:
                rd_total_time_ns: 0
                wr_highest_offset: 0
                unmap_operations: 0
                flush_total_time_ns: 0
                timed_stats:
                rd_bytes: 0
                invalid_wr_operations: 0
                rd_merged: 0
                invalid_rd_operations: 0
                wr_merged: 0
                failed_rd_operations: 0
                unmap_total_time_ns: 0
                flush_operations: 0
                wr_bytes: 0
                invalid_flush_operations: 0
                wr_total_time_ns: 0
                rd_operations: 0
                failed_wr_operations: 0
                failed_flush_operations: 0
                unmap_bytes: 0
                invalid_unmap_operations: 0
                failed_unmap_operations: 0
                unmap_merged: 0
                wr_operations: 0
cpus: 1
disk: 0
diskread: 512
diskwrite: 0
maxdisk: 34359738368
maxmem: 2147483648
mem: 54363246
name: test
netin: 8892
netout: 1756
nics:
        tap104i0:
                netin: 8892
                netout: 1756
pid: 1351067
qmpstatus: running
status: running
template:
uptime: 16
vmid: 104
 
Please upgrade to the current package versions.
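
On PVE 6.x that boils down to something like this on each node (assuming the appropriate package repository is already configured; adapt to your setup):

Code:
apt update
apt dist-upgrade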
 
Hi there, I upgraded to the latest versions on all Ceph nodes, and the issue appears to have resolved itself.
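
For anyone who hits the same error, a quick way to sanity-check the versions after upgrading and retry the job (storage name and flags taken from the log above):

Code:
pveversion -v | grep -E 'pve-qemu-kvm|qemu-server|libpve-storage'
vzdump 104 --storage XXXX-disk1-datastore1 --mode stop --remove 0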

Thank you for your assistance.
 
