Hi, you’re likely hitting a routing and proxy ARP issue caused by multiple gateways and Hetzner’s MAC-bound IP setup.
At Hetzner, each additional IP or subnet must be assigned to a unique virtual MAC and attached to the VM NIC — you can’t just...
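If it helps, on the Proxmox side that step is roughly this (VMID 100 is a placeholder, and the MAC must be the virtual MAC Hetzner generated for that specific additional IP in Robot):

qm set 100 --net0 virtio=00:50:56:00:AB:CD,bridge=vmbr0

The additional IP is then configured statically inside the guest.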
Make sure your router → Proxmox host → LXC container network path is open. The usual blockers are service binding, firewalls, or missing NAT rules on the Proxmox host.
Ask them to check that the game server is listening on the IP, that...
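A rough way to check both ends (CTID 105, port 27015/udp and the container IP 10.10.10.5 are just placeholders for this sketch):

# inside the container: confirm the game server is actually listening
pct exec 105 -- ss -tulpn | grep 27015

# on the Proxmox host: example DNAT rule if the container sits on a NATed bridge
iptables -t nat -A PREROUTING -i vmbr0 -p udp --dport 27015 -j DNAT --to-destination 10.10.10.5:27015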
Hi, sounds like a tagging mismatch between Proxmox and OPNSense.
Do the bonding first: create the LACP bond (mode 802.3ad), bridge it to a VLAN-aware vmbr, and attach the OPNSense VM NIC to that bridge with no VLAN tag. Then create the VLANs (10/20/30/40) inside OPNSense...
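As a rough sketch of the host side in /etc/network/interfaces (the NIC names eno1/eno2 and the management IP are assumptions, adjust to your hardware):

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.2/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

With the VLAN tag field on the OPNSense VM NIC left empty, the VM sees the full trunk and handles the tagging itself.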
Sounds like you’re bumping into a VM network stack bottleneck (bridge + single queue) rather than a cable limit. Have you tried enabling virtio multiqueue and jumbo frames (MTU 9000) end-to-end, and pinning IRQs/queues to vCPUs?
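For example, something like this (VMID 101 and the MAC are placeholders; keep the MAC your VM already has when you re-set net0, and a queue count equal to the vCPU count is a common starting point):

qm set 101 --net0 virtio=BC:24:11:00:11:22,bridge=vmbr0,queues=8,mtu=9000

# inside the Linux guest
ethtool -L eth0 combined 8
ip link set eth0 mtu 9000

The bridge, the physical NIC/bond and the switch ports all need MTU 9000 as well, otherwise jumbo frames won’t survive the whole path.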
Looks like you’ve figured out part of it.
Ceph’s public network must be reachable by all Ceph clients and monitors — that includes:
- all Ceph mon and mgr daemons, and any OSDs or VM hosts accessing RBD/CephFS.
If it’s on a VLAN with no...
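On a PVE-managed Ceph cluster that boils down to the networks defined in /etc/pve/ceph.conf, e.g. (the subnets here are made up):

[global]
    public_network = 10.10.50.0/24
    cluster_network = 10.10.60.0/24

Every node, and anything consuming RBD/CephFS, needs an interface or a route into the public_network; the cluster_network only carries OSD-to-OSD replication traffic.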
Great. Now for VLANs: enable the VLAN-aware option on vmbr0, don’t put your host IP on vmbr0 itself but on a VLAN sub-interface (e.g. vmbr0.10), and make sure your uplink switch port is trunking those VLAN tags.
See Proxmox docs & examples here...
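A minimal sketch of what that looks like in /etc/network/interfaces (the physical port eno1 and the addresses are assumptions):

auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

auto vmbr0.10
iface vmbr0.10 inet static
    address 192.168.10.2/24
    gateway 192.168.10.1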
From the screenshots we can see a few key points:
All of your A16 GPU VFs (Virtual Functions) are still bound to the nvidia driver, not vfio-pci.
That’s why your VM start command shows:
kvm: error getting device from group 221: No such device...
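A quick way to confirm that from the shell (10de is NVIDIA’s PCI vendor ID; narrow it down further if you have other NVIDIA devices in the box):

lspci -nnk -d 10de: | grep -E "NVIDIA|Kernel driver in use"

Each VF you want to hand to a VM should report vfio-pci as the driver in use, not nvidia.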
I don’t think it will work. The error message you saw:
UDC core: g_ether: couldn’t find an available UDC
is telling: it means the USB Device Controller (UDC) subsystem did not find any eligible hardware endpoint in “device / gadget” mode to...
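You can confirm it on the device itself: if the directory below is empty, the kernel never registered any USB controller in gadget mode, typically because the port is host-only or the gadget-capable controller driver isn't loaded.

ls /sys/class/udc/
# dwc2/dwc3 are only common examples of gadget-capable controller drivers; your SoC's UDC driver may differ
lsmod | grep -E "dwc2|dwc3|libcomposite"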
Hi, I usually don’t do this; I’d just back up all the VMs, format the node, and install the latest PVE. After that, import the VMs back into PVE. This way is cleaner.
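Roughly like this (the VMID, storage names and the dump filename are placeholders):

vzdump 100 --storage backup-nfs --mode stop --compress zstd
# ...reinstall PVE, re-add the backup storage, then:
qmrestore /mnt/pve/backup-nfs/dump/vzdump-qemu-100-<timestamp>.vma.zst 100 --storage local-lvm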
Hi, it looks like your Virtual Function (VF) for vGPU passthrough is not properly bound to VFIO at the time the VM starts.
Your VF (0000:8c:02.2) belongs to IOMMU group 221, but that group is not attached to VFIO yet.
Run:
lspci -nnk | grep -A...
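If it turns out the VF is still held by the nvidia driver, one manual way to hand it over to vfio-pci looks roughly like this (a sketch, assuming plain VFIO passthrough of the VF is really what your setup needs):

modprobe vfio-pci
echo vfio-pci > /sys/bus/pci/devices/0000:8c:02.2/driver_override
# only needed if another driver currently owns the device
echo 0000:8c:02.2 > /sys/bus/pci/devices/0000:8c:02.2/driver/unbind
echo 0000:8c:02.2 > /sys/bus/pci/drivers_probe
# should now report "Kernel driver in use: vfio-pci"
lspci -nnk -s 0000:8c:02.2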
Hi, it looks like something happened during your upgrade from 8 to 9. Usually I’ll move the VMs out, then do a totally fresh install of PVE for a major upgrade, because of the changes to the kernel.
Hi, in numa0/numa1, the cpus= list refers to guest vCPU indexes (0…vCPUs-1), not host CPU IDs.
Affinity is the host cpuset for the whole VM process (all vCPU threads), not per-vCPU.
Proxmox VE (QEMU) doesn’t expose per-vCPU pinning in the VM...
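As an illustration, with made-up values for an 8-vCPU guest on a 2-socket host:

# relevant lines from /etc/pve/qemu-server/<vmid>.conf
sockets: 2
cores: 4
memory: 32768
numa: 1
numa0: cpus=0-3,hostnodes=0,memory=16384,policy=bind
numa1: cpus=4-7,hostnodes=1,memory=16384,policy=bind
affinity: 0-7,64-71

Here cpus=0-3 and cpus=4-7 are guest vCPU indexes, while affinity: 0-7,64-71 is a host cpuset applied to all of the VM's vCPU threads.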
Yes—this is expected with the PG autoscaler. With a mostly empty pool, Ceph starts low (often 32 PGs) and grows PGs as data increases unless you guide it with bulk, target_size_ratio/bytes, or a pg_num_min.
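For example, if you already know the pool will end up holding a good share of the cluster (pool name and ratio are placeholders):

ceph osd pool set <pool> bulk true
ceph osd pool set <pool> target_size_ratio 0.2
ceph osd pool set <pool> pg_num_min 64

With bulk set, the autoscaler starts the pool near its expected final PG count instead of growing it as data arrives.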
Good find — that explains it. If you still get a response with the fiber unplugged, the link was never actually active. Check that your SFP+ modules or DAC cables are compatible with the HP 546SFP+ (Mellanox ConnectX-3 Pro), and run ethtool...
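Something like this, for example (enp3s0 is a placeholder for your actual interface name):

ethtool enp3s0      # link detected, speed, duplex
ethtool -m enp3s0   # module/EEPROM info from the SFP+ or DAC, if it exposes it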