Proxmox kernel 5.0.21-5-pve: pfSense VM won't boot

I'm seeing this issue with all BSD guests on the host (FreeBSD 11.2 for pfSense and a separate guest running vanilla FreeBSD 12.1). The host is an AMD EPYC 7351. In the past I've had issues booting guests based on FreeBSD newer than 12.0, but I worked around that by disabling the high precision event timer. Both problematic guests boot via BIOS (not UEFI). Changing the CPU type from `host` to `kvm64` does allow the guests to boot, but with a significant performance impact, so it's not a long-term solution.


pveversion -v: https://gist.github.com/GitGerby/86c8edda230233289bfc90f68c30e013
CPU Info: https://gist.github.com/GitGerby/7b1de7eb22125bedc1f1476765d692bf
Guest with KVM64 workaround: https://gist.github.com/GitGerby/4ddeb2e0b5e103df5a2ebd6fac397b87
Guest with native cpu: https://gist.github.com/GitGerby/198a10bb612d94c4e0b7f23dd346a2d5
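
For reference, this is roughly how I've been applying the two workarounds mentioned above (VM ID 100 is just an example here, and the args line is only a sketch of the HPET trick):

Code:
# switch the CPU type away from host (boots, but loses host CPU features)
qm set 100 --cpu kvm64

# or keep cpu: host and disable the high precision event timer via extra QEMU args
qm set 100 --args '-no-hpet'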
 
Please try the 5.3 kernel.

> apt install pve-kernel-5.3

The 5.3 kernel will be the new default soon.
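
To double-check that the node actually booted into it after the reboot, something like this should be enough:

Code:
uname -r                      # should report a 5.3.x-y-pve kernel
pveversion -v | grep kernel   # shows the running kernel and installed pve-kernel packages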
 
Well, my experiment with the new kernel turned out to be a complete disaster. I no longer know what's going on with Proxmox; even reverting back to the old kernel doesn't work anymore.
The biggest issue is that I lost the ability to pass through my network card.
With `hostpci0: 01:00,pcie=1` I get `TASK ERROR: no pci device info for device '0000:01:00.0'`. The Proxmox UI, for whatever reason, now displays all PCI devices with four leading zeros, so if I try to select a device it throws "hostpci0: invalid format - format error hostpci1.host: value does not match the regex pattern" at me. Looks like I'm going to have to start fresh...
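
In case it helps anyone hitting the same thing, this is just how I've been comparing what the host reports against what the config expects (01:00 is my NIC's address, adjust for yours):

Code:
# list PCI devices and their addresses as the kernel sees them
lspci -nn | grep -i ethernet

# the config line I'm using, with and without the domain prefix the UI now shows
# hostpci0: 01:00,pcie=1
# hostpci0: 0000:01:00,pcie=1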

Was this with the 5.3 test kernel or something different?

The 5.3 kernel will be the new default soon; it is already available as an option in all our repositories.
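
It is an opt-in package for now, so you can see what is available with, for example:

Code:
apt update
apt list 'pve-kernel-5.3*'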
 
Ditto. Just ran an update tonight and both my FreeBSD machines would hang on boot.

kernel 5.0.21-5-pve and pve-qemu-kvm 4.1.1-1

I was able to get them to boot again by changing the CPU type from host to kvm64. They both seem to be working now, but it remains to be seen whether performance is affected by the missing CPU features. I'm not entirely sure how that all works, but I think my OPNsense VM benefits from having AES capabilities. I don't know whether FreeNAS needs anything specific, but if either turns out to be too slow I may need to try the newer kernel as well, assuming that works.
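
For anyone wanting to check, this is the quick (and possibly naive) way I've been looking at whether AES-NI is actually exposed inside a FreeBSD-based guest:

Code:
# run inside the FreeBSD/OPNsense/FreeNAS guest
grep -i aesni /var/run/dmesg.boot   # CPU feature flags from the boot messages
kldstat | grep -i aesni             # only shows output if the aesni(4) module is loaded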
 
Same issue here: pfSense gets stuck during boot after the latest upgrades (as of Nov 29) when running the "host" CPU configuration (which I use in order to get access to AES-NI). It boots fine using the "kvm64" CPU configuration. The host CPU on that machine is a first-generation Ryzen.
 
Try updating to the 5.3 kernel - these issues should be fixed there.
I hope this helps!
 
Just to share - updating the kernel solved my problem where the pfSense 2.4.4 installer wouldn't boot (it was reporting a trap error).

Thanks!
 
Ran the updates today which included the 5.3 kernel and it did indeed fix the issue for me as well.
 
I've experienced system hangs with pfSense 2.4.4-RELEASE-p3 VMs on kernels from 5.0.21-3-pve up to 5.0.21-10-pve, so I have deployed 5.3.7-1-pve and it has fixed my issue for the moment.

When referring to pve-kernel-5.3, note that there are still three versions of kernel 5.3.x available in the repos; see:

Code:
$ apt policy pve-kernel-5.3* | grep pve-kernel
pve-kernel-5.3:
pve-kernel-5.3.10-1-pve:
pve-kernel-5.3.1-1-pve:
pve-kernel-5.3.7-1-pve:

I do have a different problem with kernel 5.3.7-1-pve, so my only option is to upgrade to pve-kernel-5.3.10-1-pve and see whether it fixes the RCOTI. :p

Hello,
After fixing the boot issues with the host CPU (AMD Athlon 5150), I still experience the same random system hangs with OPNsense 19.7, starting with the same kernel versions as you.
Did pve-kernel-5.3.10-1-pve fix the hang issues for you? It hasn't for me, as described in this thread.
Can you please help with the VM configuration for OPNsense (FreeBSD 11.2)?

Mine is

Code:
balloon: 0
bootdisk: virtio0
cores: 2
cpuunits: 2048
hotplug: disk,network,usb,memory,cpu
ide2: none,media=cdrom
memory: 2048
name: OpnSense
net0: virtio=7A:B8:D3:DA:CE:AD,bridge=vmbr1,firewall=1
net1: virtio=72:7E:83:4E:1C:2F,bridge=vmbr2,firewall=1
numa: 1
onboot: 1
ostype: other
protection: 1
scsihw: virtio-scsi-pci
smbios1: uuid=ecf36be3-de6f-4d39-b50d-680ed6751495
sockets: 1
startup: order=1
tablet: 0
virtio0: ssd:vm-110-disk-0,discard=on,size=10G
vmgenid: 4f141a17-3889-4d4a-bf4e-b87bee9bad32

I have already tried other configurations.
 
I have not had any problems with my OPNsense or FreeNAS VMs other than the failure to boot described in this thread and the problem with the qemu update and the q35 machine type. I'm now running i440fx with PCI passthrough instead of PCIe, and everything is back to working fine as far as I can tell.

Here is my config if it helps. There are quite a few differences. My WAN port is passed through as a PCI device, so OPNsense owns that card. The LAN is on vmbr0 and shared with other VMs.


Code:
bootdisk: scsi0
cores: 2
cpu: host
hostpci1: 07:00.0
hotplug: disk,network,usb
memory: 4096
name: opnsense
net0: virtio=2A:59:8B:6B:0A:B3,bridge=vmbr0
numa: 0
onboot: 1
ostype: other
scsi0: local-lvm:vm-101-disk-0,size=16G
scsihw: virtio-scsi-pci
smbios1: uuid=e4a7311f-2f0f-4eb1-8b3f-39c51695d486
sockets: 1
startup: order=1
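
If anyone wants to try the same switch on an existing q35 VM, this is roughly what I did (VM 101 and the 07:00.0 address are from my setup; adjust for yours):

Code:
# drop the explicit q35 machine type so the VM falls back to the i440fx default
qm set 101 --delete machine

# pass the NIC through as plain PCI, without the pcie=1 flag
qm set 101 --hostpci1 07:00.0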
 
I experienced the same when going from the latest 5.4 to 6-2.12... pfSense won't come online. I changed the CPU from host to kvm64, but no significant change.
 
I finally found that the Proxmox howto for upgrading from 5.x to 6.0 is missing some information:

Upgrade the system to Debian Buster and Proxmox VE 6.0

This action will take some time depending on the system performance - up to 60 min or more. On high-performance servers with SSD storage, the dist-upgrade can be finished in 5 minutes.

Start with this step to get the initial set of upgraded packages:

Code:
apt dist-upgrade


During the steps above, you may be asked to approve some of the new packages replacing configuration files. They are not relevant to the Proxmox VE upgrade, so you can choose what you want to do.

Reboot the system in order to use the new PVE kernel


Actually, the configuration files do interfere with the Proxmox VE upgrade: I had forwarding enabled in sysctl for the pfSense setup, and after the upgrade it was gone... that's why I had the problems.
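
In case someone else loses it the same way: I ended up re-adding the forwarding settings in their own file so the next upgrade is less likely to touch them (the file name is just my choice):

Code:
# /etc/sysctl.d/99-forwarding.conf
net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1

# apply without a reboot and verify
sysctl --system
sysctl net.ipv4.ip_forward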
 
