VM won't start help

shuhdonk

Member
Dec 15, 2020
Hey all, I am new to Proxmox/Linux etc. I built a new server and had it on the bench to learn, set up, and test. I have a few VMs set up that were working just fine on the test bench, but now my TrueNAS VM won't start after installing the server in the rack, and I am not sure why or where to look to find out why.

I took my old server components out of the rack and replaced them with the new parts, plus six 16 TB SATA HDDs that I was waiting on (they arrived today). The drives are attached to an LSI HBA that was already in the server (it just had no drives attached before), and I had the HBA passed through to the TrueNAS VM. Now, for some reason, the TrueNAS VM won't start. I removed the HBA passthrough and there was no change; it still will not start. Any suggestions on how to figure out what the problem is?

Thank you.
 
What's the actual error message? Can you see anything interesting in /var/log/syslog right before/after starting the VM?
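Something like this will pull the relevant lines out (a sketch, not official tooling; VM ID 201 is assumed from your setup, substitute yours):

```shell
# Hypothetical VM ID -- substitute the TrueNAS VM's ID.
VMID=201
# Pull recent syslog lines mentioning the VM start task, pvedaemon, or vfio.
# (Falls back to a hint if the file is absent, e.g. on journal-only setups.)
if [ -r /var/log/syslog ]; then
    grep -E "qmstart:${VMID}|pvedaemon|vfio" /var/log/syslog | tail -n 30
else
    echo "no /var/log/syslog; try: journalctl -b | grep ${VMID}"
fi
# Starting the VM from the CLI also prints the underlying QEMU error directly
# instead of hiding it in the task log:
#   qm start "$VMID"
```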
 
What's the actual error message? Can you see anything interesting in /var/log/syslog right before/after starting the VM?
Hey there, this is what is in the log, thank you.

Code:
Feb 15 19:52:18 Ground-0 pvedaemon[6180]: start VM 201: UPID:Ground-0:00001824:0000323C:620C4AC2:qmstart:201:root@pam:
Feb 15 19:52:18 Ground-0 pvedaemon[4599]: <root@pam> starting task UPID:Ground-0:00001824:0000323C:620C4AC2:qmstart:201:root@pam:
Feb 15 19:52:18 Ground-0 kernel: [  129.007381] vfio-pci 0000:81:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=none:owns=none
Feb 15 19:52:18 Ground-0 kernel: [  129.023629]  sdg: sdg1 sdg2
Feb 15 19:52:18 Ground-0 kernel: [  129.026159] vfio-pci 0000:81:00.0: vgaarb: changed VGA decodes: olddecodes=none,decodes=io+mem:owns=none
Feb 15 19:52:18 Ground-0 kernel: [  129.050094] vfio-pci 0000:81:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=none:owns=none
Feb 15 19:52:18 Ground-0 systemd[1]: Stopped target Sound Card.
Feb 15 19:52:19 Ground-0 kernel: [  129.499458] sd 10:0:0:0: [sda] Synchronizing SCSI cache
Feb 15 19:52:19 Ground-0 kernel: [  129.499832] sd 10:0:0:0: [sda] Synchronize Cache(10) failed: Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
Feb 15 19:52:19 Ground-0 kernel: [  129.536298] sd 10:0:1:0: [sdb] Synchronizing SCSI cache
Feb 15 19:52:19 Ground-0 kernel: [  129.536646] sd 10:0:1:0: [sdb] Synchronize Cache(10) failed: Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
Feb 15 19:52:19 Ground-0 kernel: [  129.584226] sd 10:0:2:0: [sdc] Synchronizing SCSI cache
Feb 15 19:52:19 Ground-0 kernel: [  129.584573] sd 10:0:2:0: [sdc] Synchronize Cache(10) failed: Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
Feb 15 19:52:19 Ground-0 kernel: [  129.664283] sd 10:0:3:0: [sdd] Synchronizing SCSI cache
Feb 15 19:52:19 Ground-0 kernel: [  129.664633] sd 10:0:3:0: [sdd] Synchronize Cache(10) failed: Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
Feb 15 19:52:19 Ground-0 kernel: [  129.724209] sd 10:0:4:0: [sde] Synchronizing SCSI cache
Feb 15 19:52:19 Ground-0 kernel: [  129.724556] sd 10:0:4:0: [sde] Synchronize Cache(10) failed: Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
Feb 15 19:52:19 Ground-0 kernel: [  129.787025] sd 10:0:5:0: [sdf] Synchronizing SCSI cache
Feb 15 19:52:19 Ground-0 kernel: [  129.787393] sd 10:0:5:0: [sdf] Synchronize Cache(10) failed: Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
Feb 15 19:52:19 Ground-0 kernel: [  129.864225] mpt2sas_cm0: mpt3sas_transport_port_remove: removed: sas_addr(0x4433221101000000)
Feb 15 19:52:19 Ground-0 kernel: [  129.864262] mpt2sas_cm0: removing handle(0x0009), sas_addr(0x4433221101000000)
Feb 15 19:52:19 Ground-0 kernel: [  129.864289] mpt2sas_cm0: enclosure logical id(0x500605b009fe3450), slot(2)
Feb 15 19:52:19 Ground-0 kernel: [  129.864314] mpt2sas_cm0: mpt3sas_transport_port_remove: removed: sas_addr(0x4433221102000000)
Feb 15 19:52:19 Ground-0 kernel: [  129.864339] mpt2sas_cm0: removing handle(0x000a), sas_addr(0x4433221102000000)
Feb 15 19:52:19 Ground-0 kernel: [  129.864359] mpt2sas_cm0: enclosure logical id(0x500605b009fe3450), slot(1)
Feb 15 19:52:19 Ground-0 kernel: [  129.864378] mpt2sas_cm0: mpt3sas_transport_port_remove: removed: sas_addr(0x4433221103000000)
Feb 15 19:52:19 Ground-0 kernel: [  129.864402] mpt2sas_cm0: removing handle(0x000b), sas_addr(0x4433221103000000)
Feb 15 19:52:19 Ground-0 kernel: [  129.864423] mpt2sas_cm0: enclosure logical id(0x500605b009fe3450), slot(0)
Feb 15 19:52:19 Ground-0 kernel: [  129.864442] mpt2sas_cm0: mpt3sas_transport_port_remove: removed: sas_addr(0x4433221105000000)
Feb 15 19:52:19 Ground-0 kernel: [  129.864465] mpt2sas_cm0: removing handle(0x000c), sas_addr(0x4433221105000000)
Feb 15 19:52:19 Ground-0 kernel: [  129.864486] mpt2sas_cm0: enclosure logical id(0x500605b009fe3450), slot(6)
Feb 15 19:52:19 Ground-0 kernel: [  129.864506] mpt2sas_cm0: mpt3sas_transport_port_remove: removed: sas_addr(0x4433221106000000)
Feb 15 19:52:19 Ground-0 kernel: [  129.864529] mpt2sas_cm0: removing handle(0x000d), sas_addr(0x4433221106000000)
Feb 15 19:52:19 Ground-0 kernel: [  129.864550] mpt2sas_cm0: enclosure logical id(0x500605b009fe3450), slot(5)
Feb 15 19:52:19 Ground-0 kernel: [  129.864571] mpt2sas_cm0: mpt3sas_transport_port_remove: removed: sas_addr(0x4433221107000000)
Feb 15 19:52:19 Ground-0 kernel: [  129.864594] mpt2sas_cm0: removing handle(0x000e), sas_addr(0x4433221107000000)
Feb 15 19:52:19 Ground-0 kernel: [  129.864615] mpt2sas_cm0: enclosure logical id(0x500605b009fe3450), slot(4)
Feb 15 19:52:19 Ground-0 kernel: [  129.866039] mpt2sas_cm0: sending message unit reset !!
Feb 15 19:52:19 Ground-0 kernel: [  129.867616] mpt2sas_cm0: message unit reset: SUCCESS
Feb 15 19:52:20 Ground-0 systemd[1]: Created slice qemu.slice.
Feb 15 19:52:20 Ground-0 systemd[1]: Started 201.scope.
Feb 15 19:52:20 Ground-0 systemd-udevd[6183]: Using default interface naming scheme 'v247'.
Feb 15 19:52:20 Ground-0 systemd-udevd[6183]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 15 19:52:21 Ground-0 kernel: [  131.712301] device tap201i0 entered promiscuous mode
Feb 15 19:52:21 Ground-0 systemd-udevd[6183]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 15 19:52:21 Ground-0 systemd-udevd[6183]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 15 19:52:21 Ground-0 systemd-udevd[6184]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 15 19:52:21 Ground-0 systemd-udevd[6184]: Using default interface naming scheme 'v247'.
Feb 15 19:52:21 Ground-0 kernel: [  131.749686] fwbr201i0: port 1(fwln201i0) entered blocking state
Feb 15 19:52:21 Ground-0 kernel: [  131.750578] fwbr201i0: port 1(fwln201i0) entered disabled state
Feb 15 19:52:21 Ground-0 kernel: [  131.753890] device fwln201i0 entered promiscuous mode
Feb 15 19:52:21 Ground-0 kernel: [  131.773350] fwbr201i0: port 1(fwln201i0) entered blocking state
Feb 15 19:52:21 Ground-0 kernel: [  131.789646] fwbr201i0: port 1(fwln201i0) entered forwarding state
Feb 15 19:52:21 Ground-0 kernel: [  131.816815] vmbr0: port 2(fwpr201p0) entered blocking state
Feb 15 19:52:21 Ground-0 kernel: [  131.817750] vmbr0: port 2(fwpr201p0) entered disabled state
Feb 15 19:52:21 Ground-0 kernel: [  131.818562] device fwpr201p0 entered promiscuous mode
Feb 15 19:52:21 Ground-0 kernel: [  131.819339] vmbr0: port 2(fwpr201p0) entered blocking state
Feb 15 19:52:21 Ground-0 kernel: [  131.820016] vmbr0: port 2(fwpr201p0) entered forwarding state
Feb 15 19:52:21 Ground-0 kernel: [  131.824493] fwbr201i0: port 2(tap201i0) entered blocking state
Feb 15 19:52:21 Ground-0 kernel: [  131.825610] fwbr201i0: port 2(tap201i0) entered disabled state
Feb 15 19:52:21 Ground-0 kernel: [  131.826469] fwbr201i0: port 2(tap201i0) entered blocking state
Feb 15 19:52:21 Ground-0 kernel: [  131.827180] fwbr201i0: port 2(tap201i0) entered forwarding state
Feb 15 19:52:27 Ground-0 pvedaemon[4595]: VM 201 qmp command failed - VM 201 qmp command 'query-proxmox-support' failed - got timeout
Feb 15 19:52:28 Ground-0 kernel: [  138.570914] vfio-pci 0000:81:00.0: vfio_ecap_init: hiding ecap 0x19@0x900
Feb 15 19:52:28 Ground-0 kernel: [  138.736220] fwbr201i0: port 2(tap201i0) entered disabled state
Feb 15 19:52:28 Ground-0 kernel: [  138.772074] fwbr201i0: port 1(fwln201i0) entered disabled state
Feb 15 19:52:28 Ground-0 kernel: [  138.790497] vmbr0: port 2(fwpr201p0) entered disabled state
Feb 15 19:52:28 Ground-0 kernel: [  138.808011] device fwln201i0 left promiscuous mode
Feb 15 19:52:28 Ground-0 kernel: [  138.808766] fwbr201i0: port 1(fwln201i0) entered disabled state
Feb 15 19:52:28 Ground-0 kernel: [  138.834173] device fwpr201p0 left promiscuous mode
Feb 15 19:52:28 Ground-0 kernel: [  138.834968] vmbr0: port 2(fwpr201p0) entered disabled state
Feb 15 19:52:29 Ground-0 systemd[1]: 201.scope: Succeeded.
Feb 15 19:52:29 Ground-0 systemd[1]: 201.scope: Consumed 7.782s CPU time.
Feb 15 19:52:29 Ground-0 pvedaemon[6180]: start failed: QEMU exited with code 1
Feb 15 19:52:29 Ground-0 pvedaemon[4599]: <root@pam> end task UPID:Ground-0:00001824:0000323C:620C4AC2:qmstart:201:root@pam: start failed: QEMU exited with code
 
Any KVM error messages in the WebUI's task log when starting the VM (double-click the failed task at the bottom)?
 
Any KVM error messages in the WebUI's task log when starting the VM (double-click the failed task at the bottom)?
kvm: -device vfio-pci,host=0000:41:00.0,id=hostpci1,bus=ich9-pcie-port-2,addr=0x0: vfio 0000:41:00.0: error getting device from group 46: No such device
Verify all devices in group 46 are bound to vfio-<bus> or pci-stub and not already in use
TASK ERROR: start failed: QEMU exited with code 1
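That "error getting device from group 46" message means the PCI address recorded in the VM config (0000:41:00.0) no longer matches a device present on the host; addresses can shift when cards move between slots or hardware changes. A quick way to see what is actually present, and in which IOMMU group (a sketch; it prints nothing if IOMMU is off or the sysfs path is absent):

```shell
# List every PCI device by IOMMU group, to see what is in group 46 now and
# whether 0000:41:00.0 still exists after the hardware move.
groups_list=$(
    for dev in /sys/kernel/iommu_groups/*/devices/*; do
        [ -e "$dev" ] || continue
        group=${dev#/sys/kernel/iommu_groups/}
        group=${group%%/*}
        printf 'group %s: %s\n' "$group" "${dev##*/}"
    done
)
printf '%s\n' "$groups_list"
# To check one specific device: lspci -s 41:00.0  (no output = device is gone)
```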
 
Ohh, device 41:00.0 is no longer in the list of PCI devices. I removed it from the VM config and now the VM has started. I wonder what 41 was before, hmm.

Okay, well, the VM starts but isn't booting; it says the disk is not bootable.

Edit: okay, I had to disable the BIOS on the HBA card and now it appears to be working fine. Thanks for pointing me in the right direction.
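For anyone who can't (or would rather not) toggle the card's own BIOS: Proxmox hostpci entries accept a rombar=0 flag that hides the card's option ROM from the guest, so SeaBIOS/OVMF won't try to boot from the HBA's disks. A sketch; the VM ID, slot, and PCI address below are placeholders for the HBA's actual entry:

```shell
# In /etc/pve/qemu-server/201.conf, on the HBA's passthrough line:
#   hostpci0: 0000:xx:00.0,rombar=0
# or via the CLI:
#   qm set 201 -hostpci0 0000:xx:00.0,rombar=0
```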
 
