All VMs fail to boot after upgrade to Proxmox 8

reckless

Well-Known Member
Feb 5, 2019
LXC containers work fine after upgrading from Proxmox 7 to 8, but none of my VMs will start anymore.

This is the error I get:

Code:
root@proxmox:~# qm start 201
start failed: command '/usr/bin/kvm -id 201 -name 'docker,debug-threads=on' -no-shutdown -chardev 'socket,id=qmp,path=/var/run/qemu-server/201.qmp,server=on,wait=off' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/201.pid -daemonize -smbios 'type=1,uuid=dedd51b8-827d-499b-b82a-816512699db6' -smp '16,sockets=1,cores=16,maxcpus=16' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vnc 'unix:/var/run/qemu-server/201.vnc,password=on' -cpu host,+kvm_pv_eoi,+kvm_pv_unhalt -m 16384 -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'vmgenid,guid=362f642d-05b1-4f88-bc05-5b2cdefc5f45' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'qemu-xhci,p2=15,p3=15,id=xhci,bus=pci.1,addr=0x1b' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'usb-host,bus=xhci.0,port=1,vendorid=0x1a6e,productid=0x089a,id=usb0' -device 'usb-host,bus=xhci.0,port=2,vendorid=0x18d1,productid=0x9302,id=usb1' -device 'VGA,id=vga,bus=pci.0,addr=0x2' -chardev 'socket,path=/var/run/qemu-server/201.qga,server=on,wait=off,id=qga0' -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:6959e56c8c90' -drive 'file=/dev/zvol/ssd/vm/vm-201-disk-0,if=none,id=drive-virtio0,cache=writeback,format=raw,aio=io_uring,detect-zeroes=on' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap201i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=82:7E:85:6E:45:35,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=1024,bootindex=300' -machine 'type=pc+pve0'' failed: got timeout

In the syslog:

Code:
Jun 29 17:40:55 proxmox pvedaemon[151690]: start VM 201: UPID:proxmox:0002508A:00029A07:649E0877:qmstart:201:root@pam:
Jun 29 17:40:55 proxmox pvedaemon[9583]: <root@pam> starting task UPID:proxmox:0002508A:00029A07:649E0877:qmstart:201:root@pam:
Jun 29 17:40:55 proxmox systemd[1]: Started 201.scope.
Jun 29 17:40:56 proxmox kernel: device tap201i0 entered promiscuous mode
Jun 29 17:40:56 proxmox ovs-vsctl[151794]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port tap201i0
Jun 29 17:40:56 proxmox ovs-vsctl[151794]: ovs|00002|db_ctl_base|ERR|no port named tap201i0
Jun 29 17:40:56 proxmox ovs-vsctl[151795]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln201i0
Jun 29 17:40:56 proxmox ovs-vsctl[151795]: ovs|00002|db_ctl_base|ERR|no port named fwln201i0
Jun 29 17:40:56 proxmox kernel: vmbr2: port 11(tap201i0) entered blocking state
Jun 29 17:40:56 proxmox kernel: vmbr2: port 11(tap201i0) entered disabled state
Jun 29 17:40:56 proxmox kernel: vmbr2: port 11(tap201i0) entered blocking state
Jun 29 17:40:56 proxmox kernel: vmbr2: port 11(tap201i0) entered forwarding state
Jun 29 17:41:05 proxmox cgroup-network[152099]: running: exec /usr/libexec/netdata/plugins.d/cgroup-network-helper.sh --cgroup '/sys/fs/cgroup/qemu.slice/201.scope'
Jun 29 17:41:05 proxmox pvedaemon[9583]: <root@pam> successful auth for user 'root@pam'
Jun 29 17:41:13 proxmox pvestatd[9553]: VM 201 qmp command failed - VM 201 qmp command 'query-proxmox-support' failed - got timeout
Jun 29 17:41:13 proxmox pvestatd[9553]: status update time (8.670 seconds)
Jun 29 17:41:23 proxmox pvestatd[9553]: VM 201 qmp command failed - VM 201 qmp command 'query-proxmox-support' failed - unable to connect to VM 201 qmp socket - timeout after 51 retries
Jun 29 17:41:24 proxmox pvestatd[9553]: status update time (8.708 seconds)
Jun 29 17:41:25 proxmox pvedaemon[151690]: start failed: command '/usr/bin/kvm -id 201 -name 'docker,debug-threads=on' -no-shutdown -chardev 'socket,id=qmp,path=/var/run/qemu-server/201.qmp,server=on,wait=off' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/201.pid -daemonize -smbios 'type=1,uuid=dedd51b8-827d-499b-b82a-816512699db6' -smp '16,sockets=1,cores=16,maxcpus=16' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vnc 'unix:/var/run/qemu-server/201.vnc,password=on' -cpu host,+kvm_pv_eoi,+kvm_pv_unhalt -m 16384 -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'vmgenid,guid=362f642d-05b1-4f88-bc05-5b2cdefc5f45' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'qemu-xhci,p2=15,p3=15,id=xhci,bus=pci.1,addr=0x1b' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'usb-host,bus=xhci.0,port=1,vendorid=0x1a6e,productid=0x089a,id=usb0' -device 'usb-host,bus=xhci.0,port=2,vendorid=0x18d1,productid=0x9302,id=usb1' -device 'VGA,id=vga,bus=pci.0,addr=0x2' -chardev 'socket,path=/var/run/qemu-server/201.qga,server=on,wait=off,id=qga0' -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:6959e56c8c90' -drive 'file=/dev/zvol/ssd/vm/vm-201-disk-0,if=none,id=drive-virtio0,cache=writeback,format=raw,aio=io_uring,detect-zeroes=on' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap201i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=82:7E:85:6E:45:35,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=1024,bootindex=300,host_mtu=9216' -machine 'type=pc+pve0'' failed: got timeout
Jun 29 17:41:25 proxmox pvedaemon[9583]: <root@pam> end task UPID:proxmox:0002508A:00029A07:649E0877:qmstart:201:root@pam: start failed: command '/usr/bin/kvm -id 201 -name 'docker,debug-threads=on' -no-shutdown -chardev 'socket,id=qmp,path=/var/run/qemu-server/201.qmp,server=on,wait=off' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/201.pid -daemonize -smbios 'type=1,uuid=dedd51b8-827d-499b-b82a-816512699db6' -smp '16,sockets=1,cores=16,maxcpus=16' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vnc 'unix:/var/run/qemu-server/201.vnc,password=on' -cpu host,+kvm_pv_eoi,+kvm_pv_unhalt -m 16384 -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'vmgenid,guid=362f642d-05b1-4f88-bc05-5b2cdefc5f45' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'qemu-xhci,p2=15,p3=15,id=xhci,bus=pci.1,addr=0x1b' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'usb-host,bus=xhci.0,port=1,vendorid=0x1a6e,productid=0x089a,id=usb0' -device 'usb-host,bus=xhci.0,port=2,vendorid=0x18d1,productid=0x9302,id=usb1' -device 'VGA,id=vga,bus=pci.0,addr=0x2' -chardev 'socket,path=/var/run/qemu-server/201.qga,server=on,wait=off,id=qga0' -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:6959e56c8c90' -drive 'file=/dev/zvol/ssd/vm/vm-201-disk-0,if=none,id=drive-virtio0,cache=writeback,format=raw,aio=io_uring,detect-zeroes=on' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap201i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=82:7E:85:6E:45:35,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=1024,bootindex=300,host_mtu=9216' -machine 'type=pc+pve0'' failed: got timeout
Jun 29 17:41:33 proxmox pvestatd[9553]: VM 201 qmp command failed - VM 201 qmp command 'query-proxmox-support' failed - unable to connect to VM 201 qmp socket - timeout after 51 retries
Jun 29 17:41:34 proxmox pvestatd[9553]: status update time (8.665 seconds)


I have no idea where to start troubleshooting. Any ideas? Here's /etc/network/interfaces just in case:

Code:
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

auto enp67s0
iface enp67s0 inet manual
    mtu 9216

auto enp67s0d1
iface enp67s0d1 inet manual
    mtu 9216

auto enp1s0f0np0
iface enp1s0f0np0 inet manual
    mtu 9216

auto enp1s0f1np1
iface enp1s0f1np1 inet manual
    mtu 9216

auto bond0
iface bond0 inet manual
    bond-slaves enp67s0 enp67s0d1
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3
    mtu 9216

auto bond1
iface bond1 inet static
    address 192.168.3.3/24
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad

iface bond1 inet6 static
    address 2607:ada1:a3a0:be30::3/64

auto vmbr2
iface vmbr2 inet static
    address 192.168.1.2/24
    gateway 192.168.1.1
    bridge-ports enp1s0f0np0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-400, 4040
    mtu 9216

iface vmbr2 inet6 static
    address 2607:ada1:a3a0:be00::afaa/64
    gateway fe80::34ef:23ff:fe16:9e65
    up echo 0 > /sys/class/net/vmbr2/bridge/multicast_router
    up echo 0 > /sys/class/net/vmbr2/bridge/multicast_snooping

auto vmbr5
iface vmbr5 inet static
    address 192.168.5.11/24
    bridge-ports enp1s0f1np1
    bridge-stp off
    bridge-fd 0
    mtu 9216
 
Last edited:
Hi,
please post the output of pveversion -v and the configuration of an affected VM with qm config <ID> --current. Can you start a VM without any virtual NIC (since the log contains several network-related messages)?
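If it's easier, the NIC can also be detached from the CLI rather than the GUI; a minimal sketch, assuming the device is net0 on VM 201:

Code:
# back up the VM config first, then detach the virtual NIC and retry the start
cp /etc/pve/qemu-server/201.conf /root/201.conf.bak
qm set 201 --delete net0
qm start 201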
 
Do you really need jumbo frames (mtu 9216)?
What happens when you use the standard MTU size, or delete the mtu lines from your config?
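For a quick check without rewriting the whole config, the MTU can be lowered temporarily from the CLI (the change is lost on the next ifreload/reboot); a sketch, assuming vmbr2 and its uplink enp1s0f0np0 from the interfaces file above:

Code:
# temporary test only - revert by setting 9216 again or reloading the network config
ip link set dev enp1s0f0np0 mtu 1500
ip link set dev vmbr2 mtu 1500
qm start 201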
 
Hello, can you also post the output of ip addr? It is possible that some network interfaces were renamed by the upgrade and the /etc/network/interfaces needs to be modified.
 
Jun 29 17:41:05 proxmox cgroup-network[152099]: running: exec /usr/libexec/netdata/plugins.d/cgroup-network-helper.sh --cgroup '/sys/fs/cgroup/qemu.slice/201.scope'
Do you use some kind of monitoring tool? I'm not sure whether it could impact the VM start, but maybe try removing it for testing.
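If the tool turns out to be Netdata (the cgroup-network-helper.sh path in the log belongs to it), stopping its service is enough for a test; a sketch, assuming the standard netdata systemd unit:

Code:
# stop the monitoring agent for the duration of the test, then try the VM again
systemctl stop netdata
qm start 201
# re-enable afterwards with: systemctl start netdata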
 
Hello, can you also post the output of ip addr? It is possible that some network interfaces were renamed by the upgrade and the /etc/network/interfaces needs to be modified.
There seem to be more than a few examples of this effect on the forum. This is a common Debian problem these days. Would there be a way to add something to the 7to8 script? Maybe adding *.link files?

Here are a couple of examples in one link.
https://forum.openmediavault.org/in...rnel-6-2-won-t-boot/&postID=355670#post355670
Thanks
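For reference, pinning an interface to its old name can be done with a systemd .link file that matches the NIC's MAC address; a minimal sketch (file name, MAC and interface name are examples and must be adapted):

Code:
# /etc/systemd/network/10-enp67s0.link  (example only)
[Match]
MACAddress=00:02:c9:3b:61:10

[Link]
Name=enp67s0

After creating the file, rebuilding the initramfs with update-initramfs -u and rebooting should make the pinned name stick.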
 
I removed the Network Device from the VM in the Proxmox GUI, and booted again - same result, same errors.

Output of pveversion -v:

Code:
root@proxmox:~# pveversion -v
proxmox-ve: 8.0.1 (running kernel: 6.2.16-3-pve)
pve-manager: 8.0.3 (running version: 8.0.3/bbf3993334bfa916)
pve-kernel-6.2: 8.0.2
pve-kernel-5.15: 7.4-4
pve-kernel-5.13: 7.1-9
pve-kernel-6.2.16-3-pve: 6.2.16-3
pve-kernel-5.0: 6.0-11
pve-kernel-5.15.108-1-pve: 5.15.108-1
pve-kernel-5.15.107-2-pve: 5.15.107-2
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown: residual config
ifupdown2: 3.2.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-3
libknet1: 1.25-pve1
libproxmox-acme-perl: 1.4.6
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.0
libpve-access-control: 8.0.3
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.5
libpve-guest-common-perl: 5.0.3
libpve-http-server-perl: 5.0.3
libpve-rs-perl: 0.8.3
libpve-storage-perl: 8.0.2
libqb0: 1.0.5-1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
openvswitch-switch: residual config
proxmox-backup-client: 3.0.1-1
proxmox-backup-file-restore: 3.0.1-1
proxmox-kernel-helper: 8.0.2
proxmox-mail-forward: 0.2.0
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.1
proxmox-widget-toolkit: 4.0.5
pve-cluster: 8.0.1
pve-container: 5.0.4
pve-docs: 8.0.4
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.2
pve-firmware: 3.7-1
pve-ha-manager: 4.0.2
pve-i18n: 3.0.4
pve-qemu-kvm: 8.0.2-3
pve-xtermjs: 4.16.0-3
qemu-server: 8.0.6
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.1.12-pve1

Output of qm config 201 --current (after I deleted the Network Device on this VM):

Code:
root@proxmox:~# qm config 201 --current
agent: 1
bootdisk: virtio0
cores: 16
cpu: host
memory: 16384
name: docker
numa: 0
onboot: 1
ostype: l26
scsihw: virtio-scsi-pci
smbios1: uuid=dedd51b8-827d-499b-b82a-816512699db6
sockets: 1
usb0: host=1a6e:089a,usb3=1
usb1: host=18d1:9302,usb3=1
virtio0: zfs_ssd_vm:vm-201-disk-0,cache=writeback,size=512G
vmgenid: 362f642d-05b1-4f88-bc05-5b2cdefc5f45

Output of ip addr (VM 201 is not running currently):

Code:
root@proxmox:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
    link/ether ac:1f:6b:78:f7:8c brd ff:ff:ff:ff:ff:ff
    altname enp197s0
3: eno2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
    link/ether ac:1f:6b:78:f7:8c brd ff:ff:ff:ff:ff:ff permaddr ac:1f:6b:78:f7:8d
    altname enp198s0
4: enp1s0f0np0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc mq master vmbr2 state UP group default qlen 1000
    link/ether 0c:42:a1:98:aa:1c brd ff:ff:ff:ff:ff:ff
5: enp1s0f1np1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc mq master vmbr5 state UP group default qlen 1000
    link/ether 0c:42:a1:98:aa:1d brd ff:ff:ff:ff:ff:ff
6: enp67s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9216 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 00:02:c9:3b:61:10 brd ff:ff:ff:ff:ff:ff
7: enp67s0d1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9216 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 00:02:c9:3b:61:10 brd ff:ff:ff:ff:ff:ff permaddr 00:02:c9:3b:61:11
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9216 qdisc noqueue state UP group default qlen 1000
    link/ether 00:02:c9:3b:61:10 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::202:c9ff:fe3b:6110/64 scope link
       valid_lft forever preferred_lft forever
9: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ac:1f:6b:78:f7:8c brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.3/24 scope global bond1
       valid_lft forever preferred_lft forever
    inet6 2607:ada1:a3a0:be30::3/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::ae1f:6bff:fe78:f78c/64 scope link
       valid_lft forever preferred_lft forever
10: vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue state UP group default qlen 1000
    link/ether 0c:42:a1:98:aa:1c brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.2/24 scope global vmbr2
       valid_lft forever preferred_lft forever
    inet6 2607:ada1:a3a0:be00::aaaa/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::e42:a1ff:fe98:aa1c/64 scope link
       valid_lft forever preferred_lft forever
11: vmbr5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue state UP group default qlen 1000
    link/ether 0c:42:a1:98:aa:1d brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.11/24 scope global vmbr5
       valid_lft forever preferred_lft forever
    inet6 fe80::e42:a1ff:fe98:aa1d/64 scope link
       valid_lft forever preferred_lft forever
12: veth100i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue master vmbr2 state UP group default qlen 1000
    link/ether 22:22:d2:26:fa:0b brd ff:ff:ff:ff:ff:ff link-netnsid 0
13: veth101i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue master vmbr2 state UP group default qlen 1000
    link/ether 8e:75:ad:63:e7:3c brd ff:ff:ff:ff:ff:ff link-netnsid 1
14: veth102i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue master fwbr102i0 state UP group default qlen 1000
    link/ether fe:74:fa:2c:6b:9d brd ff:ff:ff:ff:ff:ff link-netnsid 2
15: fwbr102i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue state UP group default qlen 1000
    link/ether 7e:cc:d5:65:3f:3e brd ff:ff:ff:ff:ff:ff
16: fwpr102p0@fwln102i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue master vmbr2 state UP group default qlen 1000
    link/ether 7e:69:87:c0:14:b9 brd ff:ff:ff:ff:ff:ff
17: fwln102i0@fwpr102p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue master fwbr102i0 state UP group default qlen 1000
    link/ether 06:05:35:e5:c8:46 brd ff:ff:ff:ff:ff:ff
18: veth103i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue master fwbr103i0 state UP group default qlen 1000
    link/ether 7a:c7:18:58:0b:33 brd ff:ff:ff:ff:ff:ff link-netnsid 3
19: fwbr103i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue state UP group default qlen 1000
    link/ether ce:4f:a1:51:af:1a brd ff:ff:ff:ff:ff:ff
20: fwpr103p0@fwln103i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue master vmbr2 state UP group default qlen 1000
    link/ether b6:43:43:26:79:9e brd ff:ff:ff:ff:ff:ff
21: fwln103i0@fwpr103p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue master fwbr103i0 state UP group default qlen 1000
    link/ether 6e:fc:6d:ad:45:0a brd ff:ff:ff:ff:ff:ff
22: veth105i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue master vmbr2 state UP group default qlen 1000
    link/ether 12:39:9e:8f:0b:49 brd ff:ff:ff:ff:ff:ff link-netnsid 4
23: veth106i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue master vmbr2 state UP group default qlen 1000
    link/ether 46:07:58:b7:39:e6 brd ff:ff:ff:ff:ff:ff link-netnsid 5
24: veth106i1@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue master vmbr5 state UP group default qlen 1000
    link/ether 3e:c4:95:15:00:53 brd ff:ff:ff:ff:ff:ff link-netnsid 5
25: veth107i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue master fwbr107i0 state UP group default qlen 1000
    link/ether fe:dd:dc:7b:2d:fb brd ff:ff:ff:ff:ff:ff link-netnsid 6
26: fwbr107i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue state UP group default qlen 1000
    link/ether 1a:3e:5b:a6:f4:d1 brd ff:ff:ff:ff:ff:ff
27: fwpr107p0@fwln107i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue master vmbr2 state UP group default qlen 1000
    link/ether 5e:16:d7:b2:95:04 brd ff:ff:ff:ff:ff:ff
28: fwln107i0@fwpr107p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue master fwbr107i0 state UP group default qlen 1000
    link/ether 82:5e:9c:3b:bc:40 brd ff:ff:ff:ff:ff:ff
29: veth110i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue master fwbr110i0 state UP group default qlen 1000
    link/ether ca:f6:07:de:8d:12 brd ff:ff:ff:ff:ff:ff link-netnsid 7
30: fwbr110i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue state UP group default qlen 1000
    link/ether 62:7a:8e:e2:8e:ee brd ff:ff:ff:ff:ff:ff
31: fwpr110p0@fwln110i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue master vmbr2 state UP group default qlen 1000
    link/ether ba:5f:2c:46:5e:92 brd ff:ff:ff:ff:ff:ff
32: fwln110i0@fwpr110p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue master fwbr110i0 state UP group default qlen 1000
    link/ether 32:a5:d5:19:f5:1d brd ff:ff:ff:ff:ff:ff
33: veth111i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue master fwbr111i0 state UP group default qlen 1000
    link/ether 96:df:38:17:50:2e brd ff:ff:ff:ff:ff:ff link-netnsid 8
34: fwbr111i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue state UP group default qlen 1000
    link/ether 3e:e1:8a:03:74:39 brd ff:ff:ff:ff:ff:ff
35: fwpr111p0@fwln111i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue master vmbr2 state UP group default qlen 1000
    link/ether 4a:17:29:f3:1b:7b brd ff:ff:ff:ff:ff:ff
36: fwln111i0@fwpr111p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue master fwbr111i0 state UP group default qlen 1000
    link/ether 7a:07:88:d6:d1:5c brd ff:ff:ff:ff:ff:ff
41: veth116i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue master fwbr116i0 state UP group default qlen 1000
    link/ether d2:a3:b6:a7:2d:2f brd ff:ff:ff:ff:ff:ff link-netnsid 10
42: fwbr116i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue state UP group default qlen 1000
    link/ether b2:4c:05:fd:3d:a2 brd ff:ff:ff:ff:ff:ff
43: fwpr116p0@fwln116i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue master vmbr2 state UP group default qlen 1000
    link/ether 8e:ca:3f:31:97:18 brd ff:ff:ff:ff:ff:ff
44: fwln116i0@fwpr116p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue master fwbr116i0 state UP group default qlen 1000
    link/ether be:9f:bb:e0:cf:50 brd ff:ff:ff:ff:ff:ff
45: veth117i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue master fwbr117i0 state UP group default qlen 1000
    link/ether fa:cd:3b:69:e3:d7 brd ff:ff:ff:ff:ff:ff link-netnsid 11
46: fwbr117i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue state UP group default qlen 1000
    link/ether 1a:c0:a9:5e:bc:12 brd ff:ff:ff:ff:ff:ff
47: fwpr117p0@fwln117i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue master vmbr2 state UP group default qlen 1000
    link/ether 02:54:2a:db:22:8b brd ff:ff:ff:ff:ff:ff
48: fwln117i0@fwpr117p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue master fwbr117i0 state UP group default qlen 1000
    link/ether 96:17:0c:3e:14:31 brd ff:ff:ff:ff:ff:ff
58: veth112i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue master fwbr112i0 state UP group default qlen 1000
    link/ether fe:93:f5:52:4d:64 brd ff:ff:ff:ff:ff:ff link-netnsid 9
59: fwbr112i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue state UP group default qlen 1000
    link/ether a6:7a:6c:0b:2d:f4 brd ff:ff:ff:ff:ff:ff
60: fwpr112p0@fwln112i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue master vmbr2 state UP group default qlen 1000
    link/ether 56:aa:5a:17:98:a3 brd ff:ff:ff:ff:ff:ff
61: fwln112i0@fwpr112p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue master fwbr112i0 state UP group default qlen 1000
    link/ether e6:ff:6c:25:00:16 brd ff:ff:ff:ff:ff:ff

I have Netdata installed on this Proxmox host (a popular system monitoring tool), but it has been running there without issues for many years. I could try removing it for testing soon, in case it is somehow interfering with the VMs on Proxmox 8.
Any other ideas what the problem could be?
 
Output of qm config 201 --current (after I deleted the Network Device on this VM):

Code:
root@proxmox:~# qm config 201 --current
agent: 1
bootdisk: virtio0
cores: 16
cpu: host
memory: 16384
name: docker
numa: 0
onboot: 1
ostype: l26
scsihw: virtio-scsi-pci
smbios1: uuid=dedd51b8-827d-499b-b82a-816512699db6
sockets: 1
usb0: host=1a6e:089a,usb3=1
usb1: host=18d1:9302,usb3=1
virtio0: zfs_ssd_vm:vm-201-disk-0,cache=writeback,size=512G
vmgenid: 362f642d-05b1-4f88-bc05-5b2cdefc5f45
If it's not network-related, maybe it has to do with the USB devices? Or does the issue also appear for a VM without them?
 
If it's not network-related, maybe it has to do with the USB devices? Or does the issue also appear for a VM without them?
I just tried booting VM 201 without any network device and without the USB devices - still the same timeout error, and I can't even open the console.

EDIT: it was the ZFS writeback cache that caused the issue. I can now boot the VM without the timeout error, but it's currently just stuck at "Booting from harddisk"...

My ZFS pool is completely fine and I didn't touch it other than upgrading the host to Proxmox 8 - but it seems to have something to do with ZFS? How can I get it to boot again?
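For reference, the disk cache mode can also be changed from the CLI; a sketch based on the virtio0 line in the VM config posted above (double-check the volume name with qm config 201 before running it):

Code:
# switch the existing disk from cache=writeback to no cache
qm set 201 --virtio0 zfs_ssd_vm:vm-201-disk-0,cache=none,size=512G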
 
Last edited:
I'm not able to reproduce the issue; enabling the writeback cache doesn't cause any problems for me. Can you share the output of fdisk -l /dev/zvol/<name-of-your-zfs>/vm-201-disk-0 and zpool status -v? I'd also check the health of the physical disks with e.g. smartctl to make sure.
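A sketch of those checks (the pool name ssd is taken from the zvol path in the error; the smartctl device is an example and has to be replaced with the actual pool members shown by zpool status):

Code:
fdisk -l /dev/zvol/ssd/vm/vm-201-disk-0
zpool status -v ssd
# check each physical disk that backs the pool (device name is an example)
smartctl -a /dev/sda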
 
Output of fdisk -l /dev/zvol/ssd/vm/vm-201-disk-0:

Code:
Disk /dev/zvol/ssd/vm/vm-201-disk-0: 512 GiB, 549755813888 bytes, 1073741824 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 8192 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes
Disklabel type: dos
Disk identifier: 0x30d505f9

Device                           Boot      Start        End    Sectors  Size Id Type
/dev/zvol/ssd/vm/vm-201-disk-0p1 *          2048 1056964607 1056962560  504G 83 Linux
/dev/zvol/ssd/vm/vm-201-disk-0p2      1056966654 1073739775   16773122    8G  5 Extended
/dev/zvol/ssd/vm/vm-201-disk-0p5      1056966656 1073739775   16773120    8G 82 Linux swap / Solaris

Partition 2 does not start on physical sector boundary.
zpool status -v shows 0 errors with the pool. It has worked just fine on Proxmox 7, so nothing has changed other than the upgrade from Proxmox 7 to 8.

Is there anything else I could try with ZFS to get the VMs to boot properly again?
 
EDIT: it was the ZFS writeback cache that caused the issue. I can now boot the VM without the timeout error, but it's currently just stuck at "Booting from harddisk"...
Is there anything in the system log this time?

While I'm really not sure it'll help, you could still try changing VM settings like the CPU type or core count, or attaching the disk as scsi0 instead (don't forget to also update the boot order in the VM's Options tab).

Since the partitions seem fine, you can mount them on the host and check if everything looks okay. In the worst case, you'd need to copy the data from there to a new VM, really not sure why it doesn't boot for you.
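A rough CLI sketch of the scsi0 variant (volume name taken from the config above; verify it with qm config 201 first):

Code:
# detach the disk from virtio0 and re-attach the same volume as scsi0
qm set 201 --delete virtio0
qm set 201 --scsi0 zfs_ssd_vm:vm-201-disk-0,size=512G
# point the boot order at the new disk (same as the Options tab)
qm set 201 --boot order=scsi0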
 
When I try booting the VM, this is in the logs:

Code:
Jul 26 00:33:37 proxmox qm[2664967]: <root@pam> starting task UPID:proxmox:0028E0DB:0005596E:64C0B031:qmstart:201:root@pam:
Jul 26 00:33:37 proxmox qm[2679003]: start VM 201: UPID:proxmox:0028E0DB:0005596E:64C0B031:qmstart:201:root@pam:
Jul 26 00:33:37 proxmox systemd[1]: Started 201.scope.
Jul 26 00:33:37 proxmox qm[2664967]: <root@pam> end task UPID:proxmox:0028E0DB:0005596E:64C0B031:qmstart:201:root@pam: OK
Jul 26 00:33:39 proxmox cgroup-network[2734868]: running: exec /usr/libexec/netdata/plugins.d/cgroup-network-helper.sh --cgroup '/sys/fs/cgroup/qemu.slice/201.scope'
Jul 26 00:33:39 proxmox cgroup-network[2734868]: child pid 2734917 exited with code 1.
Jul 26 00:33:55 proxmox pvedaemon[3139843]: starting vnc proxy UPID:proxmox:002FE903:000560A0:64C0B043:vncproxy:201:root@pam:
Jul 26 00:33:55 proxmox pvedaemon[10539]: <root@pam> starting task UPID:proxmox:002FE903:000560A0:64C0B043:vncproxy:201:root@pam:
Jul 26 00:33:58 proxmox pvedaemon[10539]: <root@pam> end task UPID:proxmox:002FE903:000560A0:64C0B043:vncproxy:201:root@pam: OK
Jul 26 00:34:02 proxmox pvedaemon[10539]: VM 201 qmp command failed - VM 201 qmp command 'guest-ping' failed - got timeout
Jul 26 00:34:10 proxmox apps.plugin[10270]: Cannot process entries in /proc/3479071/fd (command 'z_rd_int_0')
Jul 26 00:34:16 proxmox apps.plugin[10270]: Cannot process entries in /proc/3630274/fd (command 'z_rd_int_2')

The last 2 lines specifically point to an error with ZFS... but what could that be? That same SSD pool hosts all my LXC containers (in a different dataset, but the same zpool of SSDs).

I discovered that I have a few backups of this VM from a few weeks ago - they all completed successfully and were made when everything was still running smoothly on Proxmox 7. So I tried restoring one of them to a new VMID using qmrestore "/path/to/backup/vzdump-qemu-201-2023_06_07-02_38_58.vma.zst" 200 to see if that would fix anything. It didn't: the new VM was created, but when booting it I get the exact same behavior - timeouts and a black screen with cache=writeback, and with no cache selected it's permanently stuck on "Booting from harddisk"... Exact same issue, exact same behavior. After restoring this VM from the backup, this is what zfs get all gives:

Code:
root@proxmox:~# zfs get all ssd/vm/vm-200-disk-0
NAME                  PROPERTY              VALUE                  SOURCE
ssd/vm/vm-200-disk-0  type                  volume                 -
ssd/vm/vm-200-disk-0  creation              Wed Jul 26  1:39 2023  -
ssd/vm/vm-200-disk-0  used                  141G                   -
ssd/vm/vm-200-disk-0  available             957G                   -
ssd/vm/vm-200-disk-0  referenced            141G                   -
ssd/vm/vm-200-disk-0  compressratio         1.00x                  -
ssd/vm/vm-200-disk-0  reservation           none                   default
ssd/vm/vm-200-disk-0  volsize               512G                   local
ssd/vm/vm-200-disk-0  volblocksize          8K                     default
ssd/vm/vm-200-disk-0  checksum              on                     default
ssd/vm/vm-200-disk-0  compression           off                    default
ssd/vm/vm-200-disk-0  readonly              off                    default
ssd/vm/vm-200-disk-0  createtxg             3080877                -
ssd/vm/vm-200-disk-0  copies                1                      default
ssd/vm/vm-200-disk-0  refreservation        none                   default
ssd/vm/vm-200-disk-0  guid                  2386780696106272564    -
ssd/vm/vm-200-disk-0  primarycache          all                    default
ssd/vm/vm-200-disk-0  secondarycache        all                    default
ssd/vm/vm-200-disk-0  usedbysnapshots       536K                   -
ssd/vm/vm-200-disk-0  usedbydataset         141G                   -
ssd/vm/vm-200-disk-0  usedbychildren        0B                     -
ssd/vm/vm-200-disk-0  usedbyrefreservation  0B                     -
ssd/vm/vm-200-disk-0  logbias               latency                default
ssd/vm/vm-200-disk-0  objsetid              3854                   -
ssd/vm/vm-200-disk-0  dedup                 off                    default
ssd/vm/vm-200-disk-0  mlslabel              none                   default
ssd/vm/vm-200-disk-0  sync                  disabled               inherited from ssd/vm
ssd/vm/vm-200-disk-0  refcompressratio      1.00x                  -
ssd/vm/vm-200-disk-0  written               0                      -
ssd/vm/vm-200-disk-0  logicalused           140G                   -
ssd/vm/vm-200-disk-0  logicalreferenced     140G                   -
ssd/vm/vm-200-disk-0  volmode               default                default
ssd/vm/vm-200-disk-0  snapshot_limit        none                   default
ssd/vm/vm-200-disk-0  snapshot_count        none                   default
ssd/vm/vm-200-disk-0  snapdev               hidden                 default
ssd/vm/vm-200-disk-0  context               none                   default
ssd/vm/vm-200-disk-0  fscontext             none                   default
ssd/vm/vm-200-disk-0  defcontext            none                   default
ssd/vm/vm-200-disk-0  rootcontext           none                   default
ssd/vm/vm-200-disk-0  redundant_metadata    all                    default
ssd/vm/vm-200-disk-0  encryption            off                    default
ssd/vm/vm-200-disk-0  keylocation           none                   default
ssd/vm/vm-200-disk-0  keyformat             none                   default
ssd/vm/vm-200-disk-0  pbkdf2iters           0                      default
root@proxmox:~#


I also can't mount the dataset like you suggested:
Code:
root@proxmox:~# zfs mount ssd/vm/vm-201-disk-0
cannot open 'ssd/vm/vm-201-disk-0': operation not applicable to datasets of this type

Any way to force a mount of the dataset? I definitely need to access the data in order to make new VMs...
 
Last edited:
When I try booting the VM, this is in the logs:

Code:
Jul 26 00:34:10 proxmox apps.plugin[10270]: Cannot process entries in /proc/3479071/fd (command 'z_rd_int_0')
Jul 26 00:34:16 proxmox apps.plugin[10270]: Cannot process entries in /proc/3630274/fd (command 'z_rd_int_2')
That's not something shipped by Proxmox VE. What is apps.plugin? Sounds like the read (z_rd_int_0) from the pool might fail/time out? But why is that software even trying to access the file descriptors from other processes (are those processes the VMs)?

The last 2 lines specifically point to an error with ZFS... but what could that be? That same SSD pool hosts all my LXC containers (in a different dataset, but the same zpool of SSDs).

I discovered that I have a few backups of this VM from a few weeks ago - they all completed successfully and were made when everything was still running smoothly on Proxmox 7. So I tried restoring one of them to a new VMID using qmrestore "/path/to/backup/vzdump-qemu-201-2023_06_07-02_38_58.vma.zst" 200 to see if that would fix anything. It didn't: the new VM was created, but when booting it I get the exact same behavior - timeouts and a black screen with cache=writeback, and with no cache selected it's permanently stuck on "Booting from harddisk"... Exact same issue, exact same behavior. After restoring this VM from the backup, this is what zfs get all gives:
What about a freshly installed VM? Does that also hang?

I also can't mount the dataset like you suggested:
Code:
root@proxmox:~# zfs mount ssd/vm/vm-201-disk-0
cannot open 'ssd/vm/vm-201-disk-0': operation not applicable to datasets of this type

Any way to force a mount of the dataset? I definitely need to access the data in order to make new VMs...
This is not a ZFS filesystem, but a virtual block device. You need to mount the corresponding partition in /dev/zvol/, should be something like /dev/zvol/ssd/vm/vm-201-disk-0-part<N>.
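A sketch of mounting the large Linux partition read-only, assuming the -part1 device node shows up under /dev/zvol (if it doesn't, running partprobe on the zvol should create it):

Code:
ls -l /dev/zvol/ssd/vm/ | grep vm-201-disk-0
mkdir -p /mnt/vm201
# mount read-only first; -part1 is the 504G Linux partition from the fdisk output
mount -o ro /dev/zvol/ssd/vm/vm-201-disk-0-part1 /mnt/vm201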
 
Thanks, I got the old VM's partition to mount, so all my files are intact at least.

I have no idea what apps.plugin is - perhaps Netdata? FYI, I had Netdata installed on Proxmox 7 for years, and it carried over in the upgrade to Proxmox 8. I have now completely uninstalled Netdata from the host just in case, but it hasn't solved the issue: I get the same errors. Is there a way to find out where apps.plugin comes from if it wasn't from Netdata?

And I just tested: even a brand-new VM created from scratch won't start. Something in the upgrade path to Proxmox 8 has seriously screwed up VMs on this machine somehow, and I don't know why. This is what the syslog says when creating a new VM and then starting it afterwards:

Code:
Jul 29 16:58:27 proxmox pvedaemon[10539]: <root@pam> starting task UPID:proxmox:002E6049:01EB05A0:64C58B83:qmcreate:206:root@pam:
Jul 29 16:58:28 proxmox pvedaemon[10539]: <root@pam> end task UPID:proxmox:002E6049:01EB05A0:64C58B83:qmcreate:206:root@pam: OK
Jul 29 16:58:40 proxmox pvedaemon[3039917]: start VM 206: UPID:proxmox:002E62AD:01EB0AD0:64C58B90:qmstart:206:root@pam:
Jul 29 16:58:40 proxmox pvedaemon[3029678]: <root@pam> starting task UPID:proxmox:002E62AD:01EB0AD0:64C58B90:qmstart:206:root@pam:
Jul 29 16:58:41 proxmox systemd[1]: Started 206.scope.
Jul 29 16:58:49 proxmox pvedaemon[3019084]: VM 206 qmp command failed - VM 206 qmp command 'query-proxmox-support' failed - got timeout
Jul 29 16:58:52 proxmox pvestatd[10508]: VM 206 qmp command failed - VM 206 qmp command 'query-proxmox-support' failed - unable to connect to VM 206 qmp socket - timeout after 51 retries
Jul 29 16:58:53 proxmox pvestatd[10508]: status update time (8.699 seconds)
Jul 29 16:59:02 proxmox pvestatd[10508]: VM 206 qmp command failed - VM 206 qmp command 'query-proxmox-support' failed - unable to connect to VM 206 qmp socket - timeout after 51 retries
Jul 29 16:59:02 proxmox pvestatd[10508]: status update time (8.653 seconds)
Jul 29 16:59:11 proxmox pvedaemon[3039917]: start failed: command '/usr/bin/kvm -id 206 -name 'dockervm,debug-threads=on' -no-shutdown -chardev 'socket,id=qmp,path=/var/run/qemu-server/206.qmp,server=on,wait=off' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/206.pid -daemonize -smbios 'type=1,uuid=539ae8f6-3699-490e-97fe-88027dc09623' -smp '24,sockets=1,cores=24,maxcpus=24' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vnc 'unix:/var/run/qemu-server/206.vnc,password=on' -cpu host,+kvm_pv_eoi,+kvm_pv_unhalt -m 16384 -object 'iothread,id=iothread-virtioscsi0' -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'pci-bridge,id=pci.3,chassis_nr=3,bus=pci.0,addr=0x5' -device 'vmgenid,guid=3b8e5fee-66e7-4f66-be44-b13c0d411348' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'VGA,id=vga,bus=pci.0,addr=0x2' -chardev 'socket,path=/var/run/qemu-server/206.qga,server=on,wait=off,id=qga0' -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:6959e56c8c90' -drive 'file=/var/lib/vz/template/iso/debian_12.1.0.iso,if=none,id=drive-ide2,media=cdrom,aio=io_uring' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=101' -device 'virtio-scsi-pci,id=virtioscsi0,bus=pci.3,addr=0x1,iothread=iothread-virtioscsi0' -drive 'file=/dev/zvol/ssd/vm/vm-206-disk-0,if=none,id=drive-scsi0,format=raw,cache=none,aio=io_uring,detect-zeroes=on' -device 'scsi-hd,bus=virtioscsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap206i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=5E:CB:14:E6:58:B2,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=1024,bootindex=102' -machine 'type=pc+pve0'' failed: got timeout
Jul 29 16:59:11 proxmox pvedaemon[3029678]: <root@pam> end task UPID:proxmox:002E62AD:01EB0AD0:64C58B90:qmstart:206:root@pam: start failed: command '/usr/bin/kvm -id 206 -name 'dockervm,debug-threads=on' -no-shutdown -chardev 'socket,id=qmp,path=/var/run/qemu-server/206.qmp,server=on,wait=off' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/206.pid -daemonize -smbios 'type=1,uuid=539ae8f6-3699-490e-97fe-88027dc09623' -smp '24,sockets=1,cores=24,maxcpus=24' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vnc 'unix:/var/run/qemu-server/206.vnc,password=on' -cpu host,+kvm_pv_eoi,+kvm_pv_unhalt -m 16384 -object 'iothread,id=iothread-virtioscsi0' -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'pci-bridge,id=pci.3,chassis_nr=3,bus=pci.0,addr=0x5' -device 'vmgenid,guid=3b8e5fee-66e7-4f66-be44-b13c0d411348' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'VGA,id=vga,bus=pci.0,addr=0x2' -chardev 'socket,path=/var/run/qemu-server/206.qga,server=on,wait=off,id=qga0' -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:6959e56c8c90' -drive 'file=/var/lib/vz/template/iso/debian_12.1.0.iso,if=none,id=drive-ide2,media=cdrom,aio=io_uring' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=101' -device 'virtio-scsi-pci,id=virtioscsi0,bus=pci.3,addr=0x1,iothread=iothread-virtioscsi0' -drive 'file=/dev/zvol/ssd/vm/vm-206-disk-0,if=none,id=drive-scsi0,format=raw,cache=none,aio=io_uring,detect-zeroes=on' -device 'scsi-hd,bus=virtioscsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap206i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=5E:CB:14:E6:58:B2,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=1024,bootindex=102' -machine 'type=pc+pve0'' failed: got timeout
Jul 29 16:59:12 proxmox pvestatd[10508]: VM 206 qmp command failed - VM 206 qmp command 'query-proxmox-support' failed - unable to connect to VM 206 qmp socket - timeout after 51 retries
Jul 29 16:59:13 proxmox pvestatd[10508]: status update time (8.732 seconds)
 
Last edited:
Unfortunately not. Can you start a VM created with the default settings and a disk on the ZFS? Can you start a VM created with the default settings and a disk somewhere else?
 
Just tested this, and nope... I tried installing a new VM with default settings, based on Debian 12, on a standard LVM pool (so not ZFS), and I'm having the exact same issues.

I cannot use any VMs at the moment, neither on ZFS nor on LVM, neither newly created nor already existing. How can I fix this, @fiona?
 
What you could still try is using qm showcmd <ID> --pretty > start-vm.sh, replacing <ID> with an actual VM's ID, and then running bash start-vm.sh. That would avoid setting up a systemd scope, so we could rule out whether the issue is there or not. But with LVM you might need to activate the volumes manually first.
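A sketch of that test; the VM ID and the LVM volume group/LV names are examples and need to be adjusted:

Code:
qm showcmd 206 --pretty > start-vm.sh
# for an LVM-backed disk, activate the logical volume first (VG/LV names are examples)
lvchange -ay pve/vm-206-disk-0
bash start-vm.sh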
 
