Proxmox 5.2-1 backup issue

admintmt

Member
Oct 17, 2017
Hello everyone.
Sorry for my bad English, I'm trying my best ))

I did a fresh install of PVE 5.2-1 from the ISO, then upgraded it to the latest version.
I created a VM with Windows Server 2012 R2 as the guest OS, which I want to back up, but the backup fails with an error.
Here is the task log:
Code:
INFO: starting new backup job: vzdump 19252 --storage iso --mode stop --quiet 1 --mailnotification failure --compress gzip
INFO: Starting Backup of VM 19252 (qemu)
INFO: status = running
INFO: update VM 19252: -lock backup
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: ohber-srv-02
INFO: include disk 'scsi0' 'local-lvm:vm-19252-disk-1' 999G
INFO: stopping vm
INFO: creating archive '/mnt/pve/iso/dump/vzdump-qemu-19252-2018_06_03-12_10_02.vma.gz'
INFO: starting kvm to execute backup task
malformed JSON string, neither tag, array, object, number, string or atom, at character offset 0 (before "(end of string)") at /usr/share/perl5/PVE/Tools.pm line 949, <GEN103> chunk 1.
ERROR: unable to connect to VM 19252 qmp socket - No such file or directory
INFO: aborting backup job
ERROR: VM 19252 not running
INFO: restarting vm
INFO: malformed JSON string, neither tag, array, object, number, string or atom, at character offset 0 (before "(end of string)") at /usr/share/perl5/PVE/Tools.pm line 949, <GEN37> chunk 1.
INFO: vm is online again after 10 seconds
ERROR: Backup of VM 19252 failed - unable to connect to VM 19252 qmp socket - No such file or directory
INFO: Backup job finished with errors

TASK ERROR: job errors

I've noticed this only affects stop mode; with snapshot mode the backup task completes without errors.
For stop mode it doesn't matter whether the backup is scheduled or started manually.
A stop-mode backup only starts successfully when the VM isn't running.
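For reference, the failing job can be reproduced by hand with the same options the scheduled job uses (VMID and storage name taken from the task log above); this is just a sketch of a manual run:

```shell
# Manually run the same stop-mode backup the scheduled job performs
# (VMID 19252 and storage "iso" are taken from the task log above)
vzdump 19252 --mode stop --storage iso --compress gzip
```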

I have this problem on two nodes with different VMs but the same configuration.
Thanks in advance.

PVE runs on HP ProLiant DL360 and DL380 Gen7 servers with Xeon X5670 CPUs.
Here is some additional info:
Code:
root@ohber-pve-02:~# pveversion -v
proxmox-ve: 5.2-2 (running kernel: 4.15.17-2-pve)
pve-manager: 5.2-1 (running version: 5.2-1/0fcd7879)
pve-kernel-4.15: 5.2-2
pve-kernel-4.15.17-2-pve: 4.15.17-10
pve-kernel-4.15.17-1-pve: 4.15.17-9
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-4
libpve-common-perl: 5.0-31
libpve-guest-common-perl: 2.0-16
libpve-http-server-perl: 2.0-8
libpve-storage-perl: 5.0-23
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-3
lxcfs: 3.0.0-1
novnc-pve: 0.6-4
openvswitch-switch: 2.7.0-2
proxmox-widget-toolkit: 1.0-18
pve-cluster: 5.0-27
pve-container: 2.0-23
pve-docs: 5.2-4
pve-firewall: 3.0-9
pve-firmware: 2.0-4
pve-ha-manager: 2.0-5
pve-i18n: 1.0-5
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.1-5
pve-xtermjs: 1.0-5
qemu-server: 5.0-26
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.9-pve1~bpo9
VMs 19250 and 19252 are the same VM; I just restored it under the other ID.
Code:
root@ohber-pve-02:~# qm config 19250
agent: 1
balloon: 0
bootdisk: scsi0
cores: 24
cpu: host
ide2: iso:iso/ru_windows_server_2012_r2_vl_with_update_x64_dvd_6052827.iso,media=cdrom,size=5328338K
ide3: iso:iso/virtio-win-0.1.141.iso,media=cdrom,size=309208K
memory: 139264
name: ohber-srv-02
net0: virtio=EE:3C:C3:14:91:A6,bridge=vmbr1
numa: 0
onboot: 1
ostype: win8
scsi0: kkk:vm-19250-disk-1,size=999G
scsihw: virtio-scsi-single
smbios1: uuid=fc38d645-17e4-482a-992e-efdce754d99b
sockets: 1
tablet: 0
Code:
root@ohber-pve-02:~# qm showcmd 19250
/usr/bin/kvm -id 19250 -name ohber-srv-02 -chardev 'socket,id=qmp,path=/var/run/qemu-server/19250.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -pidfile /var/run/qemu-server/19250.pid -daemonize -smbios 'type=1,uuid=fc38d645-17e4-482a-992e-efdce754d99b' -smp '24,sockets=1,cores=24,maxcpus=24' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vga std -vnc unix:/var/run/qemu-server/19250.vnc,x509,password -no-hpet -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed' -m 139264 -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.3,chassis_nr=3,bus=pci.0,addr=0x5' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -chardev 'socket,path=/var/run/qemu-server/19250.qga,server,nowait,id=qga0' -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:49ad59eb462' -drive 'file=/mnt/pve/iso/template/iso/ru_windows_server_2012_r2_vl_with_update_x64_dvd_6052827.iso,if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -drive 'file=/mnt/pve/iso/template/iso/virtio-win-0.1.141.iso,if=none,id=drive-ide3,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=1,drive=drive-ide3,id=ide3,bootindex=201' -device 'virtio-scsi-pci,id=virtioscsi0,bus=pci.3,addr=0x1' -drive 'file=/dev/pve/vm-19250-disk-1,if=none,id=drive-scsi0,format=raw,cache=none,aio=native,detect-zeroes=on' -device 'scsi-hd,bus=virtioscsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap19250i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=EE:3C:C3:14:91:A6,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' -rtc 'driftfix=slew,base=localtime' -global 'kvm-pit.lost_tick_policy=discard'
 
Hi,

can you please send your storage config?

Code:
cat /etc/pve/storage.cfg
 
wolfgang, qm showcmd 19250 runs as expected with no errors, whether the VM is running or not.
Code:
root@ohber-pve-01:~# qm showcmd 19250 --pretty
/usr/bin/kvm \
  -id 19250 \
  -name ohber-srv-01 \
  -chardev 'socket,id=qmp,path=/var/run/qemu-server/19250.qmp,server,nowait' \
  -mon 'chardev=qmp,mode=control' \
  -pidfile /var/run/qemu-server/19250.pid \
  -daemonize \
  -smbios 'type=1,uuid=d5eb9a7d-36cb-4345-af10-51030be6f039' \
  -smp '24,sockets=1,cores=24,maxcpus=24' \
  -nodefaults \
  -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
  -vga std \
  -vnc unix:/var/run/qemu-server/19250.vnc,x509,password \
  -no-hpet \
  -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed' \
  -m 131072 \
  -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
  -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' \
  -device 'pci-bridge,id=pci.3,chassis_nr=3,bus=pci.0,addr=0x5' \
  -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' \
  -readconfig /usr/share/qemu-server/pve-usb.cfg \
  -device 'usb-host,hostbus=5,hostport=2,id=usb1' \
  -device 'usb-host,hostbus=5,hostport=1,id=usb2' \
  -device 'usb-host,hostbus=4,hostport=1,id=usb3' \
  -chardev 'socket,path=/var/run/qemu-server/19250.qga,server,nowait,id=qga0' \
  -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' \
  -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' \
  -iscsi 'initiator-name=iqn.1993-08.org.debian:01:d695c1b9b51a' \
  -drive 'if=none,id=drive-ide2,media=cdrom,aio=threads' \
  -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' \
  -device 'virtio-scsi-pci,id=virtioscsi0,bus=pci.3,addr=0x1' \
  -drive 'file=/var/lib/vz/images/19250/vm-19250-disk-1.raw,if=none,id=drive-scsi0,format=raw,cache=none,aio=native,detect-zeroes=on' \
  -device 'scsi-hd,bus=virtioscsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' \
  -netdev 'type=tap,id=net0,ifname=tap19250i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' \
  -device 'virtio-net-pci,mac=F2:DD:7F:5E:6D:4C,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' \
  -rtc 'driftfix=slew,base=localtime' \
  -global 'kvm-pit.lost_tick_policy=discard'

Code:
root@ohber-pve-01:~# cat /etc/pve/storage.cfg
dir: local
   path /var/lib/vz
   content rootdir,images,backup,iso,vztmpl
   maxfiles 1
   shared 0

dir: local-backup
   path /mnt/local-backup
   content backup
   maxfiles 2
   shared 0

nfs: iso
   export /PRIMMASS/virtmch
   path /mnt/pve/iso
   server 192.168.6.249
   content backup,vztmpl,iso,rootdir,images
   maxfiles 1
   options vers=3

nfs: PVE-ohber-pve
   export /PRIMMASS/PVE-ohber-pve
   path /mnt/pve/PVE-ohber-pve
   server 192.168.6.249
   content images,rootdir,iso,vztmpl,backup
   maxfiles 3
   options vers=3

Code:
root@ohber-pve-01:~# lvs -a
  LV           VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data         pve -wi-ao---- 500.00g                                                   
  local-backup pve -wi-ao---- 355.95g                                                   
  root         pve -wi-ao----  30.00g                                                   
  swap         pve -wi-ao----   8.00g
One note: it doesn't matter whether the VM is backed up to NFS storage or to local storage, the result is the same.
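Since the error complains about a missing QMP socket, one thing worth checking when the job fails is whether the socket ever appears. This is just a diagnostic sketch; the socket path is taken from the error message above:

```shell
# Check whether the QMP socket for the VM exists
# (path from the "unable to connect to VM 19252 qmp socket" error)
ls -l /var/run/qemu-server/19252.qmp

# Ask Proxmox for the VM state
qm status 19252
```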
 
Yes, in a few days.
 
Hello again! An update for the perl package has arrived; I guess it's the right one, because after upgrading I got a new error :)

Code:
INFO: starting new backup job: vzdump 19249 --mailnotification failure --compress gzip --quiet 1 --mode stop --storage local-backup
INFO: Starting Backup of VM 19249 (qemu)
INFO: status = running
INFO: update VM 19249: -lock backup
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: ohber-srv-02
INFO: include disk 'scsi0' 'VG-PVE:vm-19249-disk-1' 999G
INFO: stopping vm
INFO: creating archive '/mnt/local-backup/dump/vzdump-qemu-19249-2018_06_19-12_10_01.vma.gz'
INFO: starting kvm to execute backup task
INFO: restarting vm
INFO: start failed: org.freedesktop.systemd1.UnitExists: Unit 19249.scope already exists.
command 'qm start 19249 --skiplock' failed: exit code 255
ERROR: Backup of VM 19249 failed - start failed: org.freedesktop.systemd1.UnitExists: Unit 19249.scope already exists.
INFO: Backup job finished with errors

TASK ERROR: job errors

I think it's my destiny not to do stop-mode backups )))
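As a possible workaround for the "Unit 19249.scope already exists" error (untested here, just a sketch): the aborted KVM run can leave a stale transient systemd scope behind, which can be cleared before starting the VM again:

```shell
# Clear the stale transient scope left over from the aborted backup run,
# then start the VM again (VMID from the log above)
systemctl stop 19249.scope
systemctl reset-failed
qm start 19249
```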
 
