Problem with FreeBSD VMs on the latest version of Proxmox (8.0.4)

bellner

Today I was unable to start any FreeBSD-based OS (I tried OPNsense and pfSense) using Proxmox version 8.0.4 after a server reboot. Both VMs got stuck at boot when loading CAM. I also checked on my second server, which I use as a playground, and it suffered from the same issue.
What solved my issues was rolling back the last 8 updates done by apt. I did:
  • apt install libpve-rs-perl=0.8.5
  • apt install pve-qemu-kvm=8.0.2-7
  • apt install libpve-storage-perl=8.0.2
  • apt install ifupdown2=3.2.0-1+pmx5
  • apt install pve-xtermjs=4.16.0-3
  • apt install qemu-server=8.0.7
  • apt install libpve-http-server-perl=5.0.4
  • apt install libpve-common-perl=8.0.9
After executing all of these commands and rebooting, I was finally able to run my firewall VMs again.
I don't know which package caused the issue, but I still wanted to report it.
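If you want to keep apt from pulling these packages forward again on the next upgrade, something like this should work (just a sketch; adjust the list to whatever you actually downgraded):

# check which packages apt changed in its last runs
less /var/log/apt/history.log
# after downgrading, hold the packages so a plain "apt upgrade" keeps them
apt-mark hold pve-qemu-kvm qemu-server
# release the hold again once a fixed version is available
apt-mark unhold pve-qemu-kvm qemu-server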
 
Hi bellner,

Thanks for the heads-up!

My firewall had an update waiting that required a reboot. After reading your post, I thought I'd better first see whether a restored backup of the VM would boot on my current PVE 8.0.4; it does.

There are some updates to run on PVE. These are the currently installed package versions (with which the OPNsense VM does boot) and the target versions for the update:

[screenshot: installed and target package versions]

Now install the updates:
The following NEW packages will be installed:
  proxmox-termproxy
The following packages will be upgraded:
  ceph-common ceph-fuse ifupdown2 libcephfs2 libpve-common-perl libpve-http-server-perl libpve-rs-perl libpve-storage-perl librados2 libradosstriper1 librbd1 librgw2 pve-qemu-kvm pve-xtermjs python3-ceph-argparse python3-ceph-common python3-cephfs python3-rados python3-rbd python3-rgw qemu-server

... and reboot the VM.

The VM boots with no problem. Maybe there's another difference in our configurations that prevents your VM from booting but is not a showstopper for mine? The configuration looks like:

[screenshot: OPNsense VM configuration]

(You might notice that both interfaces are connected to the same bridge; I restored to another node to prevent the internet connection from going down in case I did run into problems.)

By the way:
got stuck at boot when loading CAM
I'm fond of TLAs; what's the meaning of this one?
 
[quoting bellner's original post in full]
Same thing happened to me and my pfSense VM.
You are a life saver!

@wbk Before the original post, I restored my pfSense backup and the problem remained.
I also tried to create a new pfSense VM but got the same error (the loading CAM thing) on the new VM when booting from the pfSense image.
 
[quoting the reply above in full]
I tried everything yesterday. Switched from Q35 to i440fx. Removed my PCIe passthrough. Created a new VM. Tried it on my second Proxmox host. Tested a serial console. Different CPUs. Different amounts of RAM. Nothing worked; only rolling back these updates fixed the issue.


EDIT: The error during boot of the VM was "Root mount waiting for: CAM".
 
[quoting bellner's original post in full]
Came here to post about this very same issue. I was able to bring up an older version of 8.0 on a secondary machine I use for temporary migrations when I need to work on my primary lab machine. I ran the updates that Proxmox wanted this morning at 11:42 am on 18 November 2023. Thanks so much for posting this! I have a feeling we will not be alone, and I am keeping my FreeBSD-based machines on the temp machine until a coming upgrade hopefully solves this issue with FreeBSD 14 machines.

I'll report back after the next updates and see whether I can migrate my FreeBSD machine back to the "latest" updated machine and whether these CAM errors are gone.
 
After re-reading all the posts, I think we have a problem with FreeBSD 14-based systems, as the latest pfSense 2.7 and 2.7.1 installs use FreeBSD 14-CURRENT as their base. I am not familiar with OPNsense as of yet, but I have a feeling that people on the most current version are on a variant of FreeBSD 14. I upgraded my FreeBSD 13.2-p5 box from source last week when the git repo got the upgrade release tag, and I have had zero issues with the machine itself until this Proxmox issue popped up. I hope it is just a regression (it looks that way based on my searches) that will be quickly identified and fixed. Hopefully FreeBSD is aware of the issue, which may be why they have not yet posted a date for the final FreeBSD 14 release.
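If anyone wants to double-check which FreeBSD base their firewall is actually on, running this from a shell on the VM prints the kernel and userland versions:

freebsd-version -ku    # FreeBSD kernel and userland versions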
 
After re-reading all the posts, I think we have a problem with FreeBSD 14-based systems,
That would match my OPNsense having no problem, as it is on FreeBSD 13.2... (with OPNsense 23.7.7...)

Welcome to the forums, by the way! :)
 
I run FreeBSD 14.0 based VMs on the latest PVE versions without problems whatsoever.
What storage backend do you use?
My VMs are using VirtIO SCSI Single with either zfs or ceph as backend.
 
I run FreeBSD 14.0 based VMs on the latest PVE versions without problems whatsoever.
What storage backend do you use?
My VMs are using VirtIO SCSI Single with either zfs or ceph as backend.
I have to correct myself: I did not have the latest packages, only what was on my apt proxy.
Apologies for that.
I have now updated to the latest packages as of this morning and now experience the same issues.
However, for me this is NOT restricted to FreeBSD 14 machines; it affects 13.2 as well.
 
I run FreeBSD 14.0 based VMs on the latest PVE versions without problems whatsoever.
What storage backend do you use?
My VMs are using VirtIO SCSI Single with either zfs or ceph as backend.
I have a freshly installed (3 months old) Proxmox VE 8 setup with ZFS on 3 SSDs in RAIDZ1. After the failure I tried all of the virtual SCSI setups, to no avail. I am still waiting for the next updates to the packages listed above to try to migrate back the VM that runs my FreeBSD system. I am up and running and can test with little pain, so for me it is now a waiting game.
 
Yes can confirm. Neither my PFSense (FreeBSD 14) nor my OPNSense (FreeBSD 13.2) booted.

I restored my OPNsense backup to an up-to-date PVE node and updated OPNsense to the latest version (23.7.8_1).

It boots with no problem.

What can we compare to find the culprit on your systems, or the omission on my system?
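Perhaps we could each post the output of something like the following to compare (just a suggestion; fill in your own VM ID):

pveversion -v               # package versions on the host
qm config <ID>              # the VM configuration
qm showcmd <ID> --pretty    # the full QEMU command line PVE generates for the VM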

I'll be out most of the day, perhaps tonight I can chime in.
 
Hi,
please provide the VM configuration of the affected VMs (qm config <ID>, replacing <ID> with the actual value), the CPU model of your host, and the storage you are using.
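To grab the CPU model and storage information, something like this on the host should do (just an example):

grep -m1 'model name' /proc/cpuinfo    # host CPU model
pvesm status                           # configured storages and their types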

EDIT: I can reproduce the issue here with the FreeBSD 14 installer and an attached SATA disk. Will investigate. Providing the VM configuration can still be useful for comparison.
 
Hi,
please provide the VM configuration of the affected VMs (qm config <ID>, replacing <ID> with the actual value), the CPU model of your host, and the storage you are using.
Hi fiona,

VM Config:
balloon: 0
bios: ovmf
boot: order=sata0;net0;sata1
cores: 4
cpu: host
efidisk0: vmsDisk:vm-109-disk-1,efitype=4m,size=4M
hostpci0: 0000:0c:00,pcie=1
machine: q35
memory: 4096
meta: creation-qemu=8.0.2,ctime=1698911670
name: pfSense
net0: virtio=D6:08:F9:FD:C7:CD,bridge=vmbr0
numa: 0
onboot: 1
ostype: other
parent: Working
sata0: vmsDisk:vm-109-disk-0,size=30G,ssd=1
sata1: none,media=cdrom
scsihw: virtio-scsi-pci
smbios1: uuid=a5a42754-1b2c-44cd-8b47-5267ae7d5efb
sockets: 1
startup: order=1,up=10
vmgenid: 7459ee9e-ae5e-49d6-8261-9e7f7a0f7d53

CPU: Intel(R) Xeon(R) CPU E3-1240L v5
Storage: LVM-Thin

I also tested it on my second server and it happens there as well. That server is set up in a similar way (only local storage exists), apart from the CPU.
 
[quoting the VM configuration post above in full]
Thanks! Could you check whether attaching the disk as something other than SATA works around the issue? Don't forget to update the boot order in the VM options.
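On the CLI that could look roughly like this, based on the configuration you posted (a sketch: detach the SATA disk first, then re-attach the same volume on another bus):

qm set 109 --delete sata0                         # detach the disk (it becomes an unused volume)
qm set 109 --scsi0 vmsDisk:vm-109-disk-0,ssd=1    # re-attach the same volume as SCSI
qm set 109 --boot 'order=scsi0;net0;sata1'        # adjust the boot order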
 
Thanks! Could you check whether attaching the disk as something other than SATA works around the issue? Don't forget to update the boot order in the VM options.
I already tried that. Nothing worked when changing the settings.

Things I tried:
  • Change CPU model
  • Change RAM
  • Change BIOS
  • Change display to serial
  • Change from Q35 to i440fx
  • Change the SCSI controller
  • Remove the CD drive
  • Change the type of network adapter
  • Boot without PCIe passthrough
 
I already tried that. Nothing worked when changing the settings.
Are you sure? Do you get the very same error or something else? Did you try IDE/SCSI/VirtIO block?

My reproducer only runs into the issue when there is a SATA disk and I already found a fix on qemu-devel now: https://lists.nongnu.org/archive/html/qemu-devel/2023-11/msg02277.html
After applying that fix, the "Root mount waiting for: CAM" message is gone and the VM boots fine.

We'll do some sanity testing and if everything goes well, the fix will be available in pve-qemu-kvm >= 8.1.2-3.
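Once that version is available, checking for and installing it should look roughly like this:

apt update
apt-cache policy pve-qemu-kvm    # shows installed and candidate versions
apt install pve-qemu-kvm

Keep in mind that running VMs only pick up the new QEMU binary after a full stop/start (or a migration), not after a reboot from inside the guest.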
 
[quoting the reply above in full]
Hi!

Yes, I am running into this exact problem. I even tried to boot only from the CD without any disk attached at all.

My test VM config:
balloon: 0
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 4
cpu: host
efidisk0: local-lvm:vm-102-disk-0,efitype=4m,size=4M
ide2: local:iso/pfSense-CE-2.7.0-RELEASE-amd64.iso,media=cdrom,size=747284K
machine: q35
memory: 4096
meta: creation-qemu=8.1.2,ctime=1700472501
name: pfSense
net0: virtio=BC:24:11:87:F7:5E,bridge=vmbr0
numa: 0
ostype: other
scsi0: local-lvm:vm-102-disk-1,iothread=1,size=30G
scsihw: virtio-scsi-single
smbios1: uuid=32e6c5c6-f20f-495d-8ca6-8f3cb60917a6
sockets: 1
vmgenid: 0121285c-2e9a-4be9-9b68-a69af41961b1

Boot results in:
[screenshot: VM boot output]

And stops at:
[screenshot: where the boot stops]
 
