VM with PCI passthrough fails with "failed: got timeout"

anaxagoras

Renowned Member
Aug 23, 2012
I have a VM with an HBA passed through via PCI passthrough to TrueNAS. I started having issues after I added a second HBA with an identical chipset, but I did not want passthrough for the second HBA. I did some trickery binding the bus address to the vfio driver per this post, and created the vfio_bind script referenced in the post linked within it:
https://forum.proxmox.com/threads/p...h-have-the-same-device-id.117542/#post-508720
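For reference, here is a minimal sketch of that kind of bus-address-based vfio binding (assuming the 0000:03:00.0 address shown in the lspci output below; the actual script is in the linked post):
Code:
#!/bin/bash
# Bind a single PCI device to vfio-pci by bus address, so an identical
# second device (same vendor/device ID) can keep its normal driver.
DEV=0000:03:00.0

# Tell the PCI core that only vfio-pci may claim this specific device
echo vfio-pci > /sys/bus/pci/devices/$DEV/driver_override

# Unbind it from whatever driver currently owns it (e.g. mpt3sas)
if [ -e /sys/bus/pci/devices/$DEV/driver ]; then
    echo "$DEV" > /sys/bus/pci/devices/$DEV/driver/unbind
fi

# Re-probe so vfio-pci picks the device up
echo "$DEV" > /sys/bus/pci/drivers_probe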

If I remove the PCI passthrough config, the VM boots fine. I managed to get it to boot once with passthrough enabled by trying some commands I read about, using the CLI and setting the lock to suspend, but it seems to have only worked once; I can't replicate it.

VM config
Code:
root@pve1:~# qm config 101
agent: 1
balloon: 0
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 4
cpu: host
efidisk0: local-zfs:vm-101-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
hostpci0: 0000:03:00,pcie=1
ide2: none,media=cdrom
machine: q35
memory: 32736
meta: creation-qemu=6.2.0,ctime=1667261155
name: TrueNAS
net0: virtio=C2:66:C1:B4:B1:81,bridge=vmbr1
net1: virtio=7E:61:2A:D5:18:D9,bridge=vmbr1,tag=110
numa: 0
ostype: other
scsi0: local-zfs:vm-101-disk-1,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=77a815e6-1c6d-48bb-862f-cae8d0c4cccf
sockets: 1
vga: qxl
vmgenid: 66490629-97ed-4d4c-bf0a-ce60c47f18dd


Relevant output from lspci -v
Code:
0000:03:00.0 Serial Attached SCSI controller [0107]: Broadcom / LSI SAS3008 PCI-Express Fusion-MPT SAS-3 [1000:0097] (rev 02)
        DeviceName: Integrated RAID
        Subsystem: Dell HBA330 Mini [1028:1f53]
        Flags: fast devsel, IRQ 329, NUMA node 0, IOMMU group 29
        I/O ports at 2000 [size=256]
        Memory at b5700000 (64-bit, non-prefetchable) [size=64K]
        Memory at b4600000 (64-bit, non-prefetchable) [size=1M]
        Expansion ROM at <ignored> [disabled]
        Capabilities: [50] Power Management version 3
        Capabilities: [68] Express Endpoint, MSI 00
        Capabilities: [a8] MSI: Enable- Count=1/1 Maskable+ 64bit+
        Capabilities: [c0] MSI-X: Enable- Count=96 Masked-
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [1e0] Secondary PCI Express
        Capabilities: [1c0] Power Budgeting <?>
        Capabilities: [190] Dynamic Power Allocation <?>
        Capabilities: [150] Single Root I/O Virtualization (SR-IOV)
        Capabilities: [148] Alternative Routing-ID Interpretation (ARI)
        Kernel driver in use: vfio-pci
        Kernel modules: mpt3sas

0000:84:00.0 Serial Attached SCSI controller [0107]: Broadcom / LSI SAS3008 PCI-Express Fusion-MPT SAS-3 [1000:0097] (rev 02)
        Subsystem: Broadcom / LSI SAS9300-8i [1000:30e0]
        Flags: bus master, fast devsel, latency 0, IRQ 41, NUMA node 1, IOMMU group 83
        I/O ports at 8000 [size=256]
        Memory at c8040000 (64-bit, non-prefetchable) [size=64K]
        Memory at c8000000 (64-bit, non-prefetchable) [size=256K]
        Expansion ROM at <ignored> [disabled]
        Capabilities: [50] Power Management version 3
        Capabilities: [68] Express Endpoint, MSI 00
        Capabilities: [a8] MSI: Enable- Count=1/1 Maskable+ 64bit+
        Capabilities: [c0] MSI-X: Enable+ Count=96 Masked-
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [1e0] Secondary PCI Express
        Capabilities: [1c0] Power Budgeting <?>
        Capabilities: [190] Dynamic Power Allocation <?>
        Capabilities: [148] Alternative Routing-ID Interpretation (ARI)
        Kernel driver in use: mpt3sas
        Kernel modules: mpt3sas

Trying to start the VM from the CLI
Code:
root@pve1:~# qm start 101
start failed: command '/usr/bin/kvm -id 101 -name TrueNAS -no-shutdown -chardev 'socket,id=qmp,path=/var/run/qemu-server/101.qmp,server=on,wait=off' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/101.pid -daemonize -smbios 'type=1,uuid=77a815e6-1c6d-48bb-862f-cae8d0c4cccf' -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE_4M.secboot.fd' -drive 'if=pflash,unit=1,format=raw,id=drive-efidisk0,size=540672,file=/dev/zvol/rpool/data/vm-101-disk-0' -smp '4,sockets=1,cores=4,maxcpus=4' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vnc 'unix:/var/run/qemu-server/101.vnc,password=on' -cpu host,+kvm_pv_eoi,+kvm_pv_unhalt -m 32736 -readconfig /usr/share/qemu-server/pve-q35-4.0.cfg -device 'vmgenid,guid=66490629-97ed-4d4c-bf0a-ce60c47f18dd' -device 'vfio-pci,host=0000:03:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0' -device 'qxl-vga,id=vga,max_outputs=4,bus=pcie.0,addr=0x1' -chardev 'socket,path=/var/run/qemu-server/101.qga,server=on,wait=off,id=qga0' -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -device 'virtio-serial,id=spice,bus=pci.0,addr=0x9' -chardev 'spicevmc,id=vdagent,name=vdagent' -device 'virtserialport,chardev=vdagent,name=com.redhat.spice.0' -spice 'tls-port=61002,addr=127.0.0.1,tls-ciphers=HIGH,seamless-migration=on' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:831f5732a8bf' -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=101' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=/dev/zvol/rpool/data/vm-101-disk-1,if=none,id=drive-scsi0,format=raw,cache=none,aio=io_uring,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap101i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=C2:66:C1:B4:B1:81,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=102' -netdev 'type=tap,id=net1,ifname=tap101i1,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=7E:61:2A:D5:18:D9,netdev=net1,bus=pci.0,addr=0x13,id=net1' -machine 'type=q35+pve0'' failed: got timeout
 
Log from journalctl as the VM starts up

Code:
Nov 06 10:38:28 pve1 qm[3819359]: <root@pam> starting task UPID:pve1:003A47A5:00E817AD:6367D4F4:qmstart:101:root@pam:
Nov 06 10:38:28 pve1 qm[3819429]: start VM 101: UPID:pve1:003A47A5:00E817AD:6367D4F4:qmstart:101:root@pam:
Nov 06 10:38:28 pve1 systemd[1]: Started 101.scope.
Nov 06 10:38:28 pve1 systemd-udevd[3819442]: Using default interface naming scheme 'v247'.
Nov 06 10:38:28 pve1 systemd-udevd[3819442]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 06 10:38:29 pve1 kernel: device tap101i0 entered promiscuous mode
Nov 06 10:38:29 pve1 ovs-vsctl[3819447]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port tap101i0
Nov 06 10:38:29 pve1 ovs-vsctl[3819447]: ovs|00002|db_ctl_base|ERR|no port named tap101i0
Nov 06 10:38:29 pve1 ovs-vsctl[3819448]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln101i0
Nov 06 10:38:29 pve1 ovs-vsctl[3819448]: ovs|00002|db_ctl_base|ERR|no port named fwln101i0
Nov 06 10:38:29 pve1 kernel: vmbr1: port 3(tap101i0) entered blocking state
Nov 06 10:38:29 pve1 kernel: vmbr1: port 3(tap101i0) entered disabled state
Nov 06 10:38:29 pve1 kernel: vmbr1: port 3(tap101i0) entered blocking state
Nov 06 10:38:29 pve1 kernel: vmbr1: port 3(tap101i0) entered forwarding state
Nov 06 10:38:29 pve1 systemd-udevd[3819445]: Using default interface naming scheme 'v247'.
Nov 06 10:38:29 pve1 systemd-udevd[3819445]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 06 10:38:29 pve1 kernel: device tap101i1 entered promiscuous mode
Nov 06 10:38:29 pve1 ovs-vsctl[3819463]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port tap101i1
Nov 06 10:38:29 pve1 ovs-vsctl[3819463]: ovs|00002|db_ctl_base|ERR|no port named tap101i1
Nov 06 10:38:29 pve1 ovs-vsctl[3819464]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln101i1
Nov 06 10:38:29 pve1 ovs-vsctl[3819464]: ovs|00002|db_ctl_base|ERR|no port named fwln101i1
Nov 06 10:38:29 pve1 kernel: vmbr1v110: port 2(tap101i1) entered blocking state
Nov 06 10:38:29 pve1 kernel: vmbr1v110: port 2(tap101i1) entered disabled state
Nov 06 10:38:29 pve1 kernel: vmbr1v110: port 2(tap101i1) entered blocking state
Nov 06 10:38:29 pve1 kernel: vmbr1v110: port 2(tap101i1) entered forwarding state
Nov 06 10:38:35 pve1 pvedaemon[3530980]: VM 101 qmp command failed - VM 101 qmp command 'guest-ping' failed - got timeout
Nov 06 10:38:35 pve1 pvedaemon[3811662]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 31 retries
Nov 06 10:38:35 pve1 pvedaemon[3567730]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 31 retries
Nov 06 10:38:41 pve1 pvestatd[3231]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 31 retries
Nov 06 10:38:41 pve1 pvestatd[3231]: status update time (6.093 seconds)
Nov 06 10:38:41 pve1 pvedaemon[3811662]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 31 retries
Nov 06 10:38:41 pve1 pvedaemon[3811662]: <root@pam> starting task UPID:pve1:003A4961:00E81CF6:6367D501:vncproxy:101:root@pam:
Nov 06 10:38:41 pve1 pvedaemon[3819873]: starting vnc proxy UPID:pve1:003A4961:00E81CF6:6367D501:vncproxy:101:root@pam:
Nov 06 10:38:45 pve1 qm[3819875]: VM 101 qmp command failed - VM 101 qmp command 'set_password' failed - unable to connect to VM 101 qmp socket - timeout after 31 retries
Nov 06 10:38:45 pve1 pvedaemon[3819873]: Failed to run vncproxy.
Nov 06 10:38:45 pve1 pvedaemon[3811662]: <root@pam> end task UPID:pve1:003A4961:00E81CF6:6367D501:vncproxy:101:root@pam: Failed to run vncproxy.
Nov 06 10:38:51 pve1 pvestatd[3231]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 31 retries
Nov 06 10:38:51 pve1 pvestatd[3231]: status update time (6.097 seconds)
Nov 06 10:38:54 pve1 pvedaemon[3811662]: VM 101 qmp command failed - VM 101 qmp command 'guest-ping' failed - got timeout
Nov 06 10:38:54 pve1 pvedaemon[3530980]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 31 retries
Nov 06 10:38:55 pve1 pvedaemon[3567730]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 31 retries
Nov 06 10:38:59 pve1 qm[3819429]: start failed: command '/usr/bin/kvm -id 101 -name TrueNAS -no-shutdown -chardev 'socket,id=qmp,path=/var/run/qemu-server/101.qmp,server=on,wait=off' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/101.pid -daemonize -smbios 'type=1,uuid=77a815e6-1c6d-48bb-862f-cae8d0c4cccf' -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE_4M.secboot.fd' -drive 'if=pflash,unit=1,format=raw,id=drive-efidisk0,size=540672,file=/dev/zvol/rpool/data/vm-101-disk-0' -smp '4,sockets=1,cores=4,maxcpus=4' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vnc 'unix:/var/run/qemu-server/101.vnc,password=on' -cpu host,+kvm_pv_eoi,+kvm_pv_unhalt -m 32736 -readconfig /usr/share/qemu-server/pve-q35-4.0.cfg -device 'vmgenid,guid=66490629-97ed-4d4c-bf0a-ce60c47f18dd' -device 'vfio-pci,host=0000:03:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0' -device 'qxl-vga,id=vga,max_outputs=4,bus=pcie.0,addr=0x1' -chardev 'socket,path=/var/run/qemu-server/101.qga,server=on,wait=off,id=qga0' -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -device 'virtio-serial,id=spice,bus=pci.0,addr=0x9' -chardev 'spicevmc,id=vdagent,name=vdagent' -device 'virtserialport,chardev=vdagent,name=com.redhat.spice.0' -spice 'tls-port=61002,addr=127.0.0.1,tls-ciphers=HIGH,seamless-migration=on' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:831f5732a8bf' -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=101' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=/dev/zvol/rpool/data/vm-101-disk-1,if=none,id=drive-scsi0,format=raw,cache=none,aio=io_uring,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap101i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=C2:66:C1:B4:B1:81,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=102' -netdev 'type=tap,id=net1,ifname=tap101i1,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=7E:61:2A:D5:18:D9,netdev=net1,bus=pci.0,addr=0x13,id=net1' -machine 'type=q35+pve0'' failed: got timeout
Nov 06 10:38:59 pve1 qm[3819359]: <root@pam> end task UPID:pve1:003A47A5:00E817AD:6367D4F4:qmstart:101:root@pam: start failed: command '/usr/bin/kvm -id 101 -name TrueNAS -no-shutdown -chardev 'socket,id=qmp,path=/var/run/qemu-server/101.qmp,server=on,wait=off' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/101.pid -daemonize -smbios 'type=1,uuid=77a815e6-1c6d-48bb-862f-cae8d0c4cccf' -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE_4M.secboot.fd' -drive 'if=pflash,unit=1,format=raw,id=drive-efidisk0,size=540672,file=/dev/zvol/rpool/data/vm-101-disk-0' -smp '4,sockets=1,cores=4,maxcpus=4' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vnc 'unix:/var/run/qemu-server/101.vnc,password=on' -cpu host,+kvm_pv_eoi,+kvm_pv_unhalt -m 32736 -readconfig /usr/share/qemu-server/pve-q35-4.0.cfg -device 'vmgenid,guid=66490629-97ed-4d4c-bf0a-ce60c47f18dd' -device 'vfio-pci,host=0000:03:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0' -device 'qxl-vga,id=vga,max_outputs=4,bus=pcie.0,addr=0x1' -chardev 'socket,path=/var/run/qemu-server/101.qga,server=on,wait=off,id=qga0' -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -device 'virtio-serial,id=spice,bus=pci.0,addr=0x9' -chardev 'spicevmc,id=vdagent,name=vdagent' -device 'virtserialport,chardev=vdagent,name=com.redhat.spice.0' -spice 'tls-port=61002,addr=127.0.0.1,tls-ciphers=HIGH,seamless-migration=on' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:831f5732a8bf' -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=101' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=/dev/zvol/rpool/data/vm-101-disk-1,if=none,id=drive-scsi0,format=raw,cache=none,aio=io_uring,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap101i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=C2:66:C1:B4:B1:81,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=102' -netdev 'type=tap,id=net1,ifname=tap101i1,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=7E:61:2A:D5:18:D9,netdev=net1,bus=pci.0,addr=0x13,id=net1' -machine 'type=q35+pve0'' failed: got timeout
Nov 06 10:39:01 pve1 pvestatd[3231]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 31 retries
Nov 06 10:39:01 pve1 pvestatd[3231]: status update time (6.086 seconds)
Nov 06 10:39:07 pve1 pvedaemon[3567730]: worker exit
Nov 06 10:39:07 pve1 pvedaemon[3252]: worker 3567730 finished
Nov 06 10:39:07 pve1 pvedaemon[3252]: starting 1 worker(s)
Nov 06 10:39:07 pve1 pvedaemon[3252]: worker 3820854 started
Nov 06 10:39:11 pve1 pvestatd[3231]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 31 retries
Nov 06 10:39:11 pve1 pvestatd[3231]: status update time (6.094 seconds)
Nov 06 10:39:13 pve1 pvedaemon[3820854]: VM 101 qmp command failed - VM 101 qmp command 'guest-ping' failed - unable to connect to VM 101 qga socket - timeout after 31 retries
Nov 06 10:39:13 pve1 pvedaemon[3530980]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 31 retries
Nov 06 10:39:15 pve1 pvedaemon[3811662]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 31 retries
Nov 06 10:39:21 pve1 pvestatd[3231]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 31 retries
Nov 06 10:39:21 pve1 pvestatd[3231]: status update time (6.092 seconds)
 
I'm not seeing any passthrough errors in the log. That usually means that you are getting a time-out on pinning all of the VM memory into actual host memory (which passthrough requires because of DMA). Try starting the VM with less memory: try 8192MB for example.
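For example, a quick way to try that from the CLI (assuming VM 101 as in this thread):
Code:
# temporarily lower the VM memory and disable ballooning, then try starting it
qm set 101 --memory 8192 --balloon 0
qm start 101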
 
I'm not seeing any passthrough errors in the log. That usually means that you are getting a time-out on pinning all of the VM memory into actual host memory (which passthrough requires because of DMA). Try starting the VM with less memory: try 8192MB for example.
I just tried with 4 GB and got the same issue. I have tried with ballooning on and off.
 
OK, I tried running the kvm command manually, as suggested in a post I found from 2016.

Code:
root@pve1:~# time /usr/bin/kvm -id 101 -name TrueNAS -no-shutdown -chardev 'socket,id=qmp,path=/var/run/qemu-server/101.qmp,server=on,wait=off' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/101.pid -daemonize -smbios 'type=1,uuid=77a815e6-1c6d-48bb-862f-cae8d0c4cccf' -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE_4M.secboot.fd' -drive 'if=pflash,unit=1,format=raw,id=drive-efidisk0,size=540672,file=/dev/zvol/rpool/data/vm-101-disk-0' -smp '4,sockets=2,cores=2,maxcpus=4' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vnc 'unix:/var/run/qemu-server/101.vnc,password=on' -cpu host,+kvm_pv_eoi,+kvm_pv_unhalt -m 4096 -readconfig /usr/share/qemu-server/pve-q35-4.0.cfg -device 'vmgenid,guid=66490629-97ed-4d4c-bf0a-ce60c47f18dd' -device 'vfio-pci,host=0000:03:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0,rombar=0' -device 'qxl-vga,id=vga,max_outputs=4,bus=pcie.0,addr=0x1' -device 'virtio-serial,id=spice,bus=pci.0,addr=0x9' -chardev 'spicevmc,id=vdagent,name=vdagent' -device 'virtserialport,chardev=vdagent,name=com.redhat.spice.0' -spice 'tls-port=61002,addr=127.0.0.1,tls-ciphers=HIGH,seamless-migration=on' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:831f5732a8bf' -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=101' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=/dev/zvol/rpool/data/vm-101-disk-1,if=none,id=drive-scsi0,format=raw,cache=none,aio=io_uring,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap101i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=C2:66:C1:B4:B1:81,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=102' -netdev 'type=tap,id=net1,ifname=tap101i1,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=7E:61:2A:D5:18:D9,netdev=net1,bus=pci.0,addr=0x13,id=net1' -machine 'type=q35+pve0'
real    1m7.652s
user    0m0.044s
sys     0m0.008s

It took 1 minute 7 seconds to boot the system with 4096 MB of RAM.
 
You can use qm showcmd 101 to get the command line, if you want to start it in exactly the same way as Proxmox does.

EDIT: What hardware (CPU, memory) are you using to run that VM? What storage do you use? Is your CPU fan working? Maybe the CPU is really slow because of thermal throttling?
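A quick way to sanity-check throttling on the host (a sketch; the sensors command assumes the lm-sensors package is installed):
Code:
# current per-core clock speeds; a throttled CPU will sit well below its base clock
grep "cpu MHz" /proc/cpuinfo

# package temperatures per socket (requires lm-sensors)
sensors | grep -i package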
 
Tried it again with 32 GB of RAM:

Code:
root@pve1:~# time /usr/bin/kvm -id 101 -name TrueNAS -no-shutdown -chardev 'socket,id=qmp,path=/var/run/qemu-server/101.qmp,server=on,wait=off' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/101.pid -daemonize -smbios 'type=1,uuid=77a815e6-1c6d-48bb-862f-cae8d0c4cccf' -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE_4M.secboot.fd' -drive 'if=pflash,unit=1,format=raw,id=drive-efidisk0,size=540672,file=/dev/zvol/rpool/data/vm-101-disk-0' -smp '4,sockets=2,cores=2,maxcpus=4' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vnc 'unix:/var/run/qemu-server/101.vnc,password=on' -cpu host,+kvm_pv_eoi,+kvm_pv_unhalt -m 32768 -readconfig /usr/share/qemu-server/pve-q35-4.0.cfg -device 'vmgenid,guid=66490629-97ed-4d4c-bf0a-ce60c47f18dd' -device 'vfio-pci,host=0000:03:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0,rombar=0' -device 'qxl-vga,id=vga,max_outputs=4,bus=pcie.0,addr=0x1' -device 'virtio-serial,id=spice,bus=pci.0,addr=0x9' -chardev 'spicevmc,id=vdagent,name=vdagent' -device 'virtserialport,chardev=vdagent,name=com.redhat.spice.0' -spice 'tls-port=61002,addr=127.0.0.1,tls-ciphers=HIGH,seamless-migration=on' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:831f5732a8bf' -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=101' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=/dev/zvol/rpool/data/vm-101-disk-1,if=none,id=drive-scsi0,format=raw,cache=none,aio=io_uring,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap101i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=C2:66:C1:B4:B1:81,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=102' -netdev 'type=tap,id=net1,ifname=tap101i1,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=7E:61:2A:D5:18:D9,netdev=net1,bus=pci.0,addr=0x13,id=net1' -machine 'type=q35+pve0'

real    5m15.102s
user    0m0.039s
sys     0m0.013s

It's an R730xd with 256 GB of ECC RAM and dual E5-2687W v4 (12 cores, 3.00 GHz).

Storage at the moment is a single Samsung 860 EVO SSD. I'm upgrading to NVMe tonight with a bifurcated PCIe NVMe adapter.

Looking at thermals is interesting: one CPU is at 70 °C, the other at 43 °C, and the system is mostly idle right now. I'll try reapplying thermal paste to the CPUs tonight.
 
Tried it again with 32 GB of RAM:
...
Looking at thermals is interesting: one CPU is at 70 °C, the other at 43 °C, and the system is mostly idle right now. I'll try reapplying thermal paste to the CPUs tonight.
At least it matches my experience of the time-out being memory related, as it increases with increased memory. I have no idea why it is taking so long, but I also would not expect such temperature differences.
Since you have two sockets, please enable NUMA in all your VMs and use two virtual sockets in each VM to get the best performance (use half the virtual cores, as they are multiplied by the number of virtual sockets). Also make sure to distribute the physical memory DIMMs over both sockets.
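A sketch of what that looks like for VM 101 from this thread (keeping 4 vCPUs total):
Code:
# two virtual sockets with two cores each, NUMA enabled
qm set 101 --numa 1 --sockets 2 --cores 2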
 
Memory is evenly distributed already. I didn't have NUMA on; I just enabled it and rebooted all the VMs. Do I just need to enable NUMA in Proxmox, or are there any config settings I need to make in each guest OS? Specifically FreeBSD and Ubuntu right now; eventually I'll be running Windows Server too.

Edit: The temps are less different now that NUMA is on; it looks like a backup job was running when I looked at the CPU utilization before. Now it's at 47 and 33. Maybe not a big deal, I'm not sure, but thermal paste is cheap enough that I'll reapply it tonight either way.

Also, the TrueNAS VM only took 1 minute 33 seconds to start with 32 GB of RAM this time instead of 5 minutes. Nothing in the logs gives me any indication of why it's taking so long, though.
 
Memory is evenly distributed already. I didn't have NUMA on; I just enabled it and rebooted all the VMs. Do I just need to enable NUMA in Proxmox, or are there any config settings I need to make in each guest OS? Specifically FreeBSD and Ubuntu right now; eventually I'll be running Windows Server too.
... and use two virtual sockets in each VM to get the best performance (use half the virtual cores, as they are multiplied by the number of virtual sockets).
I don't know of any Linux or BSD tweaks for NUMA.
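If you want to verify that a guest actually sees both nodes, a quick check inside a Linux guest (assuming the numactl package is installed):
Code:
# should report two NUMA nodes once numa/sockets are enabled in the VM config
numactl --hardware

# or, without numactl
lscpu | grep -i numa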
 
So I installed the NVMe drives, moved the datastores over, and the system is no longer timing out when starting the VM. The single drive I was using as a crutch until the NVMe adapter came in was probably a bottleneck, or maybe just rebooting the host fixed it.

EDIT: I didn't re-paste the CPUs yet; I couldn't find my thermal paste.
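For anyone who runs into the same thing: one way to check whether the datastore disk is the bottleneck during a slow start (a sketch, assuming the rpool/local-zfs layout from the config above) is to watch ZFS I/O while the VM starts:
Code:
# in one shell: per-vdev throughput/latency, refreshed every second
zpool iostat -v rpool 1

# in another shell: start the VM and watch whether the disk is pegged
qm start 101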
 
