The "hv-evmcs" flag has to be removed; after that the VM starts with cpu host.
With this config the VM boots on 5.4.x.
agent: 1
balloon: 0
bootdisk: scsi0
cores: 4
cpu: host
memory: 8192
name: Docker1
net0: virtio=8A:B5:07:A4:5B:5D,bridge=vmbr0
numa: 0
ostype: win10
sata0: none,media=cdrom
sata1...
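The flag removal described above can be sketched as a small edit of the VM config file. This is a hedged example: the VMID and paths are placeholders, and it operates on a temporary copy rather than the live config under /etc/pve/qemu-server/ (on a real host you could also use `qm set <vmid> --cpu host`).

```shell
# Sketch: strip the hv-evmcs flag from a cpu line in a VM config.
# Uses a temp copy; the real file would be /etc/pve/qemu-server/<vmid>.conf.
conf=$(mktemp)
printf 'cpu: host,flags=+hv-evmcs\ncores: 4\n' > "$conf"

# Remove only the hv-evmcs flag; any other cpu options on the line are kept.
# If further flags follow (separated by ';'), the pattern would need adjusting.
sed -i 's/,flags=+hv-evmcs//' "$conf"

cat "$conf"
# cpu line is now plain "cpu: host"
rm -f "$conf"
```

After this change the VM should start with the plain `cpu: host` model, as reported above.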
Today I upgraded with 'apt-get dist-upgrade' from pve-no-subscription and got the same issue.
VMs with cpu host and flags=+hv-evmcs can no longer be started. See the error below.
Some time ago I installed kernel 5.4.27-1-pve, so I booted back into that kernel. The VM still cannot be started...
The same error occurs with the virtio-win-0.1.141 drivers on PVE 5.1-35.
Both Windows and Linux VMs were affected.
This is probably not a driver problem, because Linux VMs also hit the timeout during the same backup, and those VMs use different virtio drivers.
Details:
INFO: starting new backup job: vzdump 100...
https://bugzilla.proxmox.com/show_bug.cgi?id=1420
states that pve-qemu-kvm >= 2.9.0-5 contains the fix.
I got the same problem on Proxmox 5.1-35 with VirtIO disks. This release ships pve-qemu-kvm 2.9.1-2.
So this does not seem to be solved. How can this error be avoided?
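Since the bug report names a minimum fixed version, it is worth confirming how the installed package compares to it. A minimal, portable sketch using `sort -V` (the version strings here are taken from the posts above; on a real host the installed version would come from `dpkg-query -W -f='${Version}' pve-qemu-kvm`):

```shell
# Compare an installed pve-qemu-kvm version against the fix threshold
# from bug 1420. sort -V orders version strings numerically.
installed="2.9.1-2"   # as reported for Proxmox 5.1-35
required="2.9.0-5"    # version said to contain the fix

# If the required version sorts first (or equal), installed >= required.
if [ "$(printf '%s\n' "$required" "$installed" | sort -V | head -n1)" = "$required" ]; then
    echo "installed version is at or above the fixed version"
fi
```

By this check 2.9.1-2 is already newer than 2.9.0-5, which is why the error recurring suggests the fix is incomplete or the problem has another cause.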
Log:
INFO: starting new backup job...