Sluggish webui after upgrade to PVE 9

pakyrs

Active Member
Jan 12, 2020
Hi everyone,

I'm experiencing severe performance issues with my Proxmox VE web interface and need some guidance. Here's my situation:
  • Running latest PVE version with all packages up to date
  • Recently upgraded from PVE 8 to PVE 9
Bash:
pveversion
pve-manager/9.0.6/49c767b70aeb6648 (running kernel: 6.14.8-2-pve)

The web UI has become extremely sluggish: loading spinners persist indefinitely, noVNC console access is practically unusable, and graphs take minutes to appear. I initially suspected hardware or network issues, but:
  • Replaced boot SSD with brand-new drive
  • smartctl shows no errors before or after the swap
  • iotop and htop show normal resource usage (roughly the checks sketched below)
  • Problem persists even with zero VMs or containers running
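
Roughly the commands behind the last two points, for reference (the device path is a placeholder for the actual boot drive):
Bash:
# SMART health and error log of the boot drive
smartctl -a /dev/nvme0n1
# cumulative I/O per process, only showing processes that actually do I/O
iotop -oPa
# overall CPU / RAM / load overview
htop
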
Troubleshooting Steps Taken:
  • Tested across multiple browsers and devices (including Proxmox mobile app) - same sluggish behaviour
  • SSH connections are slow to establish but work fine once connected
  • Restarted services: pveproxy, pvestatd, pvedaemon
  • Performed full system reboot
  • Reinstalled pve-manager and proxmox-widget-toolkit
  • Checked logs - nothing significant except "connection timeout" errors when attempting noVNC access:
Bash:
journalctl -u pvedaemon -f
Aug 28 12:14:40 nibbler pvedaemon[2186]: starting 3 worker(s)
Aug 28 12:14:40 nibbler pvedaemon[2186]: worker 2187 started
Aug 28 12:14:40 nibbler pvedaemon[2186]: worker 2188 started
Aug 28 12:14:40 nibbler pvedaemon[2186]: worker 2189 started
Aug 28 12:14:40 nibbler systemd[1]: Started pvedaemon.service - PVE API Daemon.
Aug 28 12:15:07 nibbler pvedaemon[2188]: <root@pam> successful auth for user 'root@pam'
Aug 28 12:15:33 nibbler pvedaemon[4900]: starting lxc termproxy UPID:nibbler:00001324:000019E0:68B03A55:vncproxy:121:root@pam:
Aug 28 12:15:33 nibbler pvedaemon[2188]: <root@pam> starting task UPID:nibbler:00001324:000019E0:68B03A55:vncproxy:121:root@pam:
Aug 28 12:15:44 nibbler pvedaemon[4900]: command '/usr/bin/termproxy 5900 --path /vms/121 --perm VM.Console -- /usr/bin/dtach -A /var/run/dtach/vzctlconsole121 -r winch -z lxc-console -n 121 -e -1' failed: exit code 1
Aug 28 12:15:44 nibbler pvedaemon[2188]: <root@pam> end task UPID:nibbler:00001324:000019E0:68B03A55:vncproxy:121:root@pam: command '/usr/bin/termproxy 5900 --path /vms/121 --perm VM.Console -- /usr/bin/dtach -A /var/run/dtach/vzctlconsole121 -r winch -z lxc-console -n 121 -e -1' failed: exit code 1
Aug 28 12:18:24 nibbler pvedaemon[2189]: <root@pam> successful auth for user 'checkmk@pve'
Aug 28 12:18:41 nibbler pvedaemon[17899]: starting vnc proxy UPID:nibbler:000045EB:0000634F:68B03B11:vncproxy:103:root@pam:
Aug 28 12:18:41 nibbler pvedaemon[2188]: <root@pam> starting task UPID:nibbler:000045EB:0000634F:68B03B11:vncproxy:103:root@pam:
Aug 28 12:18:51 nibbler pvedaemon[17899]: connection timed out
Aug 28 12:18:51 nibbler pvedaemon[2188]: <root@pam> end task UPID:nibbler:000045EB:0000634F:68B03B11:vncproxy:103:root@pam: connection timed out
Aug 28 12:19:22 nibbler pvedaemon[2189]: <root@pam> successful auth for user 'checkmk@pve'

Any ideas or pointers?
 
Hmm, the web interface itself runs in your local browser. But if you mean that it loads slowly whenever it fetches data from the server, then there could be a few things.
The kernel panic doesn't look too good.
This could be a hardware problem.

Try to update the BIOS/firmware of the motherboard.

If you haven't done so already, install the microcode package for your CPU vendor: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_firmware_cpu
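
A minimal sketch, assuming an Intel CPU and the non-free-firmware component already enabled in the Debian repositories (use amd64-microcode instead on AMD):
Bash:
apt update
# pick the package matching the CPU vendor
apt install intel-microcode    # Intel
# apt install amd64-microcode  # AMD
# reboot afterwards so the updated microcode is loaded early during boot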

You could also run a full memory test to make sure it isn't faulty. The Proxmox VE install ISOs come with memtest86. It is in the "Advanced" menu of the bootloader.
 
Thanks, that is a good point. Yes, I ran a memory test, which finished many hours later with no issues.

BIOS is on the latest available version.

[Screenshot attached: 2025-08-29_07-35.png]
 
I am keeping one of my containers off for now, and the host does not seem to kernel panic anymore. Not sure yet, I will have to see. I wonder if any of the special options in its config causes problems in PVE 9; I already removed the non-cgroup2 ones.

Bash:
arch: amd64
cores: 8
hostname: docker
memory: 8192
mp0: /SATA/archive,mp=/STORAGE/archive,backup=0
net0: name=eth0,bridge=vmbr0,gw=192.168.24.254,hwaddr=2E:71:27:93:4F:E0,ip=192.168.24.12/24,tag=101,type=veth
onboot: 0
ostype: debian
rootfs: NVME:subvol-105-disk-0,size=100G
startup: order=1,up=20
swap: 4096
tags: prod
lxc.apparmor.profile: unconfined
lxc.cap.drop:
lxc.mount.entry: /dev/net dev/net none bind,create=dir
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
lxc.cgroup2.devices.allow: c 189:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,create=dir
lxc.cgroup2.devices.allow: c 188:* rwm
lxc.mount.entry: /dev/ttyUSB0 dev/ttyUSB0 none bind,optional,create=file
lxc.mount.entry: usb-Silicon_Labs_Sonoff_Zigbee_3.0_USB_Dongle_Plus_0001-if00-port0 dev/usb-Silicon_Labs_Sonoff_Zigbee_3.0_USB_Dongle_Plus_0001-if00-port0 none bind,optional,create=file
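
For reference, newer PVE versions also offer native dev0:/dev1: passthrough entries that are meant to replace the raw lxc.cgroup2.devices.allow plus lxc.mount.entry pairs; a rough, untested sketch for the render node only (the gid is an assumption and should match the render group inside the container):
Code:
# hypothetical replacement for the renderD128 lines above, not tested here
dev0: /dev/dri/renderD128,gid=104,mode=0660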

On the other hand, the web UI and SSH are still sluggish, and noVNC is inaccessible with the connection timeout in the logs.
 
I think I solved the sluggishness; it was due to networking. I don't know exactly what it was, as I haven't changed my network config except for pinning the interface to nic0 with the pve-network-pinning tool.
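
In case someone hits the same thing, a quick sanity check of the pinned name and its link state (sketch; nic0 is simply the pinned name in my setup):
Bash:
# confirm which physical NIC ended up as nic0 and that the link is up
ip -d link show nic0
ethtool nic0 | grep -E 'Speed|Duplex|Link detected'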

In the end I spent a good amount of time getting the network config to work. I have a single VLAN-aware NIC, and this works fine:

Code:
auto lo
iface lo inet loopback

iface nic0 inet manual

auto vmbr0
iface vmbr0 inet static
    bridge-ports nic0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

auto vmbr0.100
iface vmbr0.100 inet static
    address 10.24.254.31/24
    gateway 10.24.254.254

auto vmbr0.101
iface vmbr0.101 inet static

auto vmbr0.103
iface vmbr0.103 inet static

post-up /usr/sbin/ethtool -s nic0 wol g
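
If it helps anyone, a change like this can be applied without a full reboot via ifupdown2 (sketch; interface names match the config above):
Bash:
# apply /etc/network/interfaces without rebooting
ifreload -a
# verify the VLAN interface came up with the right address and the bridge carries the VLANs
ip -br addr show vmbr0.100
bridge vlan show dev nic0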

From my findings, the sluggishness and PVE not working were caused by the network settings, while the kernel panic was related to the container. Something has changed in PVE 9 that it dislikes, maybe the many bind mounts, capabilities, and USB passthrough I had in it. I had been running the same LXC for 3 years with no hiccups.