High load on OPNsense VM after upgrade to PVE 9

aklausing

New Member
Jun 13, 2024
Hi,

I just updated my environment to PVE 9:
- 2-node cluster with no shared storage
- both hosts use the same hardware (Ryzen 5 5600G, same mainboard, same memory (64 GiB))
- some VMs are replicated for HA

Now I have a weird issue with my virtual OPNsense VM (FreeBSD) that was running fine before the upgrade. When I start the VM on one node, the memory usage shown in Proxmox climbs to over 99% after a very short time and stays there, while the management interface of the firewall reports only around 23% memory usage.

Bildschirmfoto 2025-08-08 um 11.50.49.png

Bildschirmfoto 2025-08-08 um 11.53.11.png

top inside the OPNsense VM also reports around 23-25% memory usage.
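A likely explanation (my assumption, not confirmed against PVE 9's exact accounting): FreeBSD's top only counts active and wired pages as "used", while inactive and cached pages are treated as reclaimable. The host, however, sees every page the guest has ever touched as resident in the QEMU process, and with `balloon: 0` nothing is ever handed back. A small sketch with made-up page counts for an 8 GiB guest:

```python
# Illustrative only: hypothetical vm.stats.vm.v_*_count values (4 KiB pages)
# for an 8 GiB FreeBSD guest, showing how guest and hypervisor can report
# very different "memory usage" for the same VM.

PAGE = 4096  # bytes per page

v_active = 300_000    # pages actively in use
v_wired = 180_000     # kernel-wired pages (unswappable)
v_inactive = 900_000  # idle but dirty pages, reclaimable
v_cache = 500_000     # clean cached pages, reclaimable
v_free = 217_152      # never-touched, truly free pages
total = v_active + v_wired + v_inactive + v_cache + v_free  # exactly 8 GiB

# Guest view (what top/OPNsense show): only active + wired count as "used"
guest_used_pct = 100 * (v_active + v_wired) / total

# Host view: every page the guest ever touched stays resident in the
# QEMU process; with ballooning disabled none of it is returned
host_resident_pct = 100 * (total - v_free) / total

print(f"guest-reported usage: {guest_used_pct:.0f}%")  # ~23%
print(f"host-visible RSS:     {host_resident_pct:.0f}%")  # ~90%
```

With numbers like these, a guest at a steady ~23% can still look nearly full from the host's perspective once enough pages have been touched, which matches what you describe.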

When I migrate the VM via HA to the other node, the CPU usage also rises to over 100% (the peak was 126%), but only on the PVE side. Inside the VM the CPU usage looks absolutely normal, yet all the KVM processes on the host consume a lot of CPU.

Updating OPNsense to 25.7 did not resolve the problem :(

This is the VM config:

Code:
agent: 1
balloon: 0
boot: order=sata0;sata1;net0
cores: 2
cpu: x86-64-v3
cpuunits: 200
machine: q35
memory: 8192
meta: creation-qemu=8.0.2,ctime=1690449616
name: vaultdoor
net0: virtio=22:E6:9F:,bridge=vmbr0
net1: virtio=9E:8E:A1,bridge=vmbr101
net2: virtio=66:C4:B9:,bridge=vmbr102
net3: virtio=56:38:97,bridge=vmbr0,tag=234
net4: virtio=7A:1F:12:,bridge=vmbr0,tag=122
numa: 0
onboot: 1
ostype: other
parent: Update_25_7
protection: 1
sata0: zfs-nvme:vm-666-disk-0,discard=on,size=100G
sata1: none,media=cdrom
scsihw: virtio-scsi-single
smbios1: uuid=64af0c63-
startup: order=1
tags: dns;firewall;network;nvme
vmgenid: 8560fd4a-

[Update_25_7]
agent: 1
balloon: 0
boot: order=sata0;sata1;net0
cores: 2
cpu: x86-64-v3
cpuunits: 200
machine: q35
memory: 8192
meta: creation-qemu=8.0.2,ctime=1690449616
name: vaultdoor
net0: virtio=22:E6:9F:,bridge=vmbr0
net1: virtio=9E:8E:A1:,bridge=vmbr101
net2: virtio=66:C4:B9:,bridge=vmbr102
net3: virtio=56:38:97:,bridge=vmbr0,tag=234
net4: virtio=7A:1F:12:,bridge=vmbr0,tag=122
numa: 0
onboot: 1
ostype: other
protection: 1
sata0: zfs-nvme:vm-666-disk-0,discard=on,size=100G
sata1: none,media=cdrom
scsihw: virtio-scsi-single
smbios1: uuid=64af0c63-
snaptime: 1754636475
sockets: 1
startup: order=1
tags: dns;firewall;network;nvme
vmgenid: 8560fd4a-2858-

This is the only VM showing this behaviour after the upgrade; all other VMs run as usual without any issues.

Any suggestions?

Thanks in advance
Andreas
 
I'm dealing with a similar issue, but I did a clean install of 9 rather than an upgrade. The memory reporting is confusing but doesn't seem to be an actual problem. The CPU is a real issue, though, because the VM eventually stops responding. I haven't been able to track down the root cause yet, but when it happens the CPU runs at 70-80% utilization and I can't access anything on the network anymore. I'm unable to reach the OPNsense management interface and all traffic comes to a standstill.

If you have any ideas how to diagnose this, I'm all ears. The OPNsense logs don't show any issues after a reboot, so I'm not sure where else to start digging.
 
I'm also running multiple OPNsense instances on Proxmox and did not notice any CPU increase since PVE 9.

@aklausing: you should install the `os-qemu-guest-agent` plugin in OPNsense and enable `Qemu Guest Agent` in the Proxmox VM Options. Then Proxmox will show the real memory usage.
 
Hello, I'm experiencing the same thing. My RAM is always at 100% on the Proxmox side, but when I look inside my OPNsense machine it is only at 30%. Exactly the same problem. I should add that my CPU type is set to host and that I do not use ballooning for the RAM. The QEMU agent is installed and functional. On PVE 8 I did not have this problem. I am on the enterprise repositories for both PVE and OPNsense.
 
I've upgraded to PVE 9 and have no performance issues, just incorrect reporting of RAM usage at 100% with my OPNsense VM. I'm aware of the advice in the PVE 8 --> PVE 9 documentation, but I wanted to check whether the QEMU guest agent is supposed to resolve the reporting issue and help Proxmox display more accurate RAM usage.

I've always had the guest agent installed and running as the plugin described below, and I can see the difference with it running vs. stopped (such as the guest's IP info showing up), but both the memory usage and the host memory usage of the VM in the Proxmox web GUI are still displayed at 100%.
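One way to sanity-check what the host actually sees, independent of the web GUI, is to read the QEMU process's resident set size straight from /proc on the PVE node. A minimal sketch (the `/var/run/qemu-server/<vmid>.pid` path is the usual Proxmox location for the VM's pid file, and VMID 666 is taken from the config above; adjust both to your setup):

```python
import os

# Rough sketch: read the resident set size (RSS) of a process from
# /proc/<pid>/status. On a PVE host you would feed it the QEMU pid of the VM.

def rss_kib(pid: int) -> int:
    """Return VmRSS in KiB for the given pid, parsed from /proc/<pid>/status."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # the field is reported in kB
    raise RuntimeError("VmRSS not found")

# On the PVE host one would use the VM's pid, e.g. (assumed pid-file path):
#   pid = int(open("/var/run/qemu-server/666.pid").read())
pid = os.getpid()  # demo on the current process instead
print(f"pid {pid}: RSS = {rss_kib(pid)} KiB")
```

If the RSS of the QEMU process is near the full configured 8 GiB while the guest itself reports ~25-30%, the 100% figure is host-side accounting of touched pages rather than a leak inside the VM.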

@H4R0: is your OPNsense VM actually reporting accurate RAM usage in the Proxmox web GUI? If so, something must be broken with my setup, although I have tried re-installing the plugin to no avail. Ballooning is enabled, if that makes a difference here.

H4R0 said:
I'm also running multiple opnsense instances on Proxmox and did not notice any CPU increase since PVE9

@aklausing you should install `os-qemu-guest-agent` plugin in opnsense and enable `Qemu Guest Agent` in Proxmox VM Options. Then you get the real memory usage on Proxmox.