Hi, I think you need to make some adjustments:
1. Change bond-mode from balance-rr to 802.3ad (LACP) so it matches your Cisco trunk.
2. On the switch, make sure VLANs 20, 30, 110 are allowed on that trunk.
3. Keep only vmbr0 and set VLAN tags per VM...
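If it helps, a minimal /etc/network/interfaces sketch of what I mean for points 1 and 3 (eno1/eno2 and the address are placeholders for your NICs and IP):

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

After that, reload with ifreload -a and set the VLAN tag (20, 30 or 110) on each VM's network device.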
Can you try this?
Try installing these on your Proxmox node:
apt install thermald acpid
systemctl enable --now thermald acpid
Reboot and check if /sys/class/thermal/ updates dynamically
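For a quick check, something like this shows whether the zones report changing values (temperatures are in millidegrees C, so re-run it under load):

cat /sys/class/thermal/thermal_zone*/type
cat /sys/class/thermal/thermal_zone*/temp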
See if it works after.
Here are just some of my own thoughts for your reference.
I would stick with ext4 on the WD Blue HDD — ZFS adds extra RAM and CPU load with little gain on a single disk. Keep the SSD for the OS and active VMs, and use the HDD for logs, swap, backups...
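If you go that way, a rough sketch for putting the HDD to use as a plain directory storage (device name, mount point and storage ID are just examples; double-check the device before formatting):

mkfs.ext4 /dev/sdX1
mkdir -p /mnt/hdd
mount /dev/sdX1 /mnt/hdd             # add a matching /etc/fstab entry so it survives reboots
pvesm add dir hdd-store --path /mnt/hdd --content backup,iso,vztmpl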
Hi, based on your description, I'm guessing that PVE01’s SDN state got out of sync with the cluster.
In this case, try removing cached configs under /var/lib/pve-sdn and restart pve-sdn to force regeneration. Also verify...
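Roughly what I mean, assuming the path and service name above actually exist on your node (please verify both before deleting anything):

ls /var/lib/pve-sdn/                 # see what cached state is there first
rm -r /var/lib/pve-sdn/*             # clear it so it gets regenerated
systemctl restart pve-sdn            # then reapply the SDN config so it syncs with the cluster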
The error messages suggest that the guest driver can’t access the GPU’s memory space and is falling back to 0x0.
Please check and try the following: make sure your VM uses machine: q35 with pcie=1, and that “Above 4G Decoding” and “Resizable BAR” are...
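As a concrete sketch with qm (VM ID 100 and PCI address 0000:01:00 are placeholders for your GPU):

qm set 100 --machine q35
qm set 100 --bios ovmf               # a UEFI guest is usually wanted alongside Above 4G / ReBAR
qm set 100 --hostpci0 0000:01:00,pcie=1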
Hi, this usually happens when Proxmox doesn’t load the right driver for your SFP NICs — common with Broadcom or Intel X710/XL710 adapters.
Check lspci -k | grep -A3 Eth to confirm which module is used (e.g., bnx2x, i40e, or ixgbe).
If none is...
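If it turns out no module is loaded, something like this is worth a try (i40e is just an example; use whichever module matches your card):

modprobe i40e
dmesg | grep -iE 'i40e|ixgbe|bnx2x'   # look for probe errors or missing-firmware messages
ip link                               # check whether the SFP ports show up now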
Hi, this seems like a thermal driver issue with the PVE kernel — the fan control logic relies on ACPI/DPTF modules that aren’t loaded by default in Proxmox.
Ubuntu and Debian use a generic kernel that includes these power-management hooks, so...
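A quick way to compare what is actually loaded on PVE vs. your Ubuntu box:

lsmod | grep -iE 'thermal|acpi'
apt install lm-sensors && sensors     # optional, shows whether any sensors are exposed at all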
Hi, it looks like your SSD is handling both OS and VM I/O, which accelerates wear.
Check if TRIM is active (fstrim -v /), and enable discard on LVM/VM disks to free unused blocks.
Move logs, swap, and RRD data off the SSD or into tmpfs/ramdisk...
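For example, to switch discard on for a VM disk and trim everything in one go (VM ID 100 and the scsi0 volume are placeholders for your actual disk):

qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on,ssd=1
fstrim -av        # trims all mounted filesystems on the host that support it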
Based on what I see... this should work.
Set your main bridge (vmbr0) to VLAN-aware and keep Internet untagged.
Then create eth0.46 for your VoIP VLAN and bridge it as vmbr1.
Attach your OpenWRT VM NICs to vmbr0 and vmbr1 accordingly — no VLAN...
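A rough /etc/network/interfaces sketch of that layout (host addresses left out; add yours to vmbr0 if the host gets its IP there):

auto vmbr0
iface vmbr0 inet manual
    bridge-ports eth0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

auto eth0.46
iface eth0.46 inet manual

auto vmbr1
iface vmbr1 inet manual
    bridge-ports eth0.46
    bridge-stp off
    bridge-fd 0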
Looks like your host now boots into plain Debian — pveproxy and spiceproxy are missing, which means proxmox-ve and pve-manager were removed during the upgrade.
Re-enable the proper PVE repo in /etc/apt/sources.list.d/pve-enterprise.list (or...
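Roughly, using the no-subscription repo as an example (swap in the enterprise repo if you have a subscription, and the codename for your Debian release):

echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
apt update
apt install proxmox-ve

If apt complains about a missing Proxmox signing key, fetch the release key first as described in the Proxmox installation docs.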
Hi, it seems to be a Hetzner dedicated server networking issue — very common when users try to add multiple public IPv4 and IPv6 addresses.
Some things to note: it looks like you’re assigning the same public IP to both enp4s0 and vmbr1, which...
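As a sketch of the bridged variant, with the main IP living only on the bridge and the NIC carrying no address (203.0.113.x is just a placeholder):

auto enp4s0
iface enp4s0 inet manual

auto vmbr1
iface vmbr1 inet static
    address 203.0.113.10/26
    gateway 203.0.113.1
    bridge-ports enp4s0
    bridge-stp off
    bridge-fd 0

If you're on Hetzner's routed setup instead (pointopoint routes, additional IPs without separate MACs), the layout is different, so adjust accordingly.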
It seems like the upgrade flipped your host to Debian 13 without a valid PVE repo/key, so APT removed proxmox-ve and friends; then a GRUB 2.12 update + update-grub finished the brick.
You might want to try this: boot the PVE installer in...
Running that command yields:
TARGET     SOURCE        FSTYPE  OPTIONS
/mnt/data  hddpool/data  zfs     rw,relatime,xattr,noacl,casesensitive
The container FS itself is very small, so there is no chance it contains the 1TB+ files stored in data. If I...
OK, from your latest screenshots I now see that your setup is already running in vGPU mode — the nvidia-vgpu-vfio messages confirm that the vGPU Manager is active.
That means you shouldn’t bind those GPU addresses to vfio-pci; instead, create...
Hi — this is doable, but you’ll need to treat one interface as WAN and the other as LAN (or “direct”) and have OPNsense route between them (i.e. don’t bridge both blindly).
Give each interface its own bridge (vmbrX) on the Proxmox host, map them...
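Something along these lines on the Proxmox side (NIC names are examples; OPNsense itself holds the WAN/LAN addresses):

auto vmbr0
iface vmbr0 inet manual
    bridge-ports enp1s0      # WAN uplink
    bridge-stp off
    bridge-fd 0

auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp2s0      # LAN side
    bridge-stp off
    bridge-fd 0

Then give the OPNsense VM two virtio NICs, one on vmbr0 and one on vmbr1, and let it route/firewall between them.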
Hi, you’re correct — MTU in Proxmox SDN is currently zone-wide, not per-VNET.
This is a limitation of the SDN implementation and underlying OVS behavior.
Creating a second zone with a different MTU is indeed the proper workaround for now...
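If you prefer the CLI over the GUI for that, something along these lines should do it (I'm going from memory on the exact pvesh parameters, so check pvesh usage or the API viewer first):

pvesh create /cluster/sdn/zones --zone zone2 --type vlan --bridge vmbr0 --mtu 1400
pvesh set /cluster/sdn        # apply the pending SDN configuration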
You should bind the GPU VFs to vfio-pci (don’t use pci-stub unless you really must). Example for 0000:8c:00.6 (ID 10de:25b6):
To bind:
dev=0000:8c:00.6
echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
echo $dev >...
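For reference, the driver_override route is one way that usually works end to end (a sketch only; the new_id method with 10de 25b6 works too):

dev=0000:8c:00.6
modprobe vfio-pci
echo vfio-pci > /sys/bus/pci/devices/$dev/driver_override
echo $dev > /sys/bus/pci/devices/$dev/driver/unbind 2>/dev/null   # skip if nothing is bound yet
echo $dev > /sys/bus/pci/drivers_probe
lspci -ks $dev     # should now show "Kernel driver in use: vfio-pci"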
This might be too technical here but...
That “visible in LXC, missing on host” almost always means the file was written into the container’s rootfs (the bind wasn’t active at that moment or you wrote to a different path), not the shared host...
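A quick way to confirm that from the host (CT ID 101, the dataset name and the mount point are placeholders, swap in yours):

pct config 101 | grep mp             # is the bind mount actually in the container config?
zfs get mountpoint hddpool/data      # where the dataset really lives on the host
pct exec 101 -- findmnt /mnt/data    # is it mounted inside the running container right now?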