Search results

  1. readyspace

    Cluster Issues

    Hi, stale node entries or mismatched SSH keys can definitely cause cluster sync chaos. In addition, make sure the new node’s ring0_addr matches the existing subnet in /etc/pve/corosync.conf, and that /etc/hosts across all nodes correctly maps each node’s cluster IP. Any mismatch there will...
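
    A minimal sketch of what should line up (node names and addresses here are made up):

      # /etc/pve/corosync.conf: nodelist entry for the new node
      node {
        name: pve2
        nodeid: 2
        quorum_votes: 1
        ring0_addr: 192.168.10.12
      }

      # /etc/hosts on every node: same addresses everywhere
      192.168.10.11 pve1
      192.168.10.12 pve2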
  2. readyspace

    [SOLVED] Configure vlan on CHR in proxmox

    Hi, you will need to set it up using Proxmox bridges and VLAN tagging. Please try this... Create one bridge (e.g. vmbr0) for WAN (CHR ether1). Create another VLAN-aware bridge (e.g. vmbr1) for LAN and VLANs (CHR ether2). Attach VLAN interfaces (10, 20, etc.) to that bridge in CHR and set DHCP per...
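
    Roughly, the host side in /etc/network/interfaces could look like this (the NIC names eno1/eno2 are placeholders for your actual ports):

      auto vmbr0
      iface vmbr0 inet manual
              bridge-ports eno1
              bridge-stp off
              bridge-fd 0

      auto vmbr1
      iface vmbr1 inet manual
              bridge-ports eno2
              bridge-stp off
              bridge-fd 0
              bridge-vlan-aware yes
              bridge-vids 2-4094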
  3. readyspace

    proxmox and 2 vm with outer ip: and different gw

    Hi, you’re likely hitting a routing and proxy ARP issue caused by multiple gateways and Hetzner’s MAC-bound IP setup. At Hetzner, each additional IP or subnet must be assigned to a unique virtual MAC and attached to the VM NIC — you can’t just add a second gateway or subnet to the Proxmox host...
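
    A rough sketch of that setup (VM ID, MAC, and addresses are placeholders; the exact netmask and gateway depend on what Hetzner assigns for that IP):

      # Proxmox host: pin the Hetzner-assigned virtual MAC to the VM NIC
      qm set 101 --net0 virtio=00:50:56:00:AB:CD,bridge=vmbr0

      # Inside the guest: one default route, via the gateway Hetzner lists
      ip addr add 203.0.113.10/32 dev eth0
      ip route add 203.0.113.1 dev eth0
      ip route add default via 203.0.113.1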
  4. readyspace

    Having issue running a game server on lxc

    Make sure your router → Proxmox host → LXC container network path is open. The usual blockers are service binding, firewalls, or missing NAT rules on the Proxmox host. Ask them to check that the game server is listening on the container’s IP (not just 127.0.0.1), that container and host firewalls permit the port, and that the...
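
    A couple of quick checks, assuming a hypothetical game port 27015 and container IP 10.0.0.50:

      # Inside the container: is the server listening on the container IP?
      ss -tulpn | grep 27015

      # On the Proxmox host, only for NAT setups: forward the port in
      iptables -t nat -A PREROUTING -i vmbr0 -p udp --dport 27015 \
              -j DNAT --to-destination 10.0.0.50:27015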
  5. readyspace

    Best GUI firewall for Proxmox home lab use

    If it’s mainly for monitoring your kids’ internet usage and you don’t want to deal with too much CLI — OPNsense is probably your best bet.
  6. readyspace

    [SOLVED] VLAN trunk with LACP in Proxmox (OPNsense virtualization)

    Hi, sounds like a tagging mismatch between Proxmox and OPNsense. You need to do bonding first — create the LACP bond (mode 802.3ad) → bridge it to a VLAN-aware vmbr → connect the OPNsense VM NIC (no VLAN tag). Then do VLANs (10/20/30/40) inside OPNsense on that LAGG. You can look at this for...
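
    The host side could look roughly like this in /etc/network/interfaces (NIC names are placeholders):

      auto bond0
      iface bond0 inet manual
              bond-slaves eno1 eno2
              bond-mode 802.3ad
              bond-miimon 100
              bond-xmit-hash-policy layer2+3

      auto vmbr1
      iface vmbr1 inet manual
              bridge-ports bond0
              bridge-stp off
              bridge-fd 0
              bridge-vlan-aware yes
              bridge-vids 10 20 30 40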
  7. readyspace

    network speed problems

    Sounds like you’re bumping into a VM network stack bottleneck (bridge + single-queue) rather than a cable limit — have you tried enabling virtio multiqueue + jumbo frames (MTU 9000) end-to-end and pinning IRQs/queues to vCPUs?
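
    For example (the VM ID and queue count are illustrative; MTU 9000 must also be set on the bridge, the switch, and the far end):

      qm set 100 --net0 virtio,bridge=vmbr0,queues=4,mtu=9000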
  8. readyspace

    [SOLVED] Does Ceph Public need to be on ProxMox Cluster network?

    Looks like you’ve figured out part of it. Ceph’s public network must be reachable by all Ceph clients and monitors — that includes all Ceph mon and mgr daemons, and any OSDs or VM hosts accessing RBD/CephFS. If it’s on a VLAN with no routing or an MTU mismatch, they can’t exchange heartbeats...
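
    For reference, a minimal ceph.conf sketch with made-up subnets; public_network is the one every mon, mgr, OSD, and RBD/CephFS client must reach, while cluster_network is optional and carries only OSD replication traffic:

      [global]
      public_network  = 10.10.10.0/24
      cluster_network = 10.10.20.0/24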
  9. readyspace

    No webGUI after dnsmasq install

    Great — now for VLANs, try enabling the VLAN-aware flag on vmbr0, don’t put your host IP on vmbr0 itself but on a VLAN sub-interface (e.g. vmbr0.10), and make sure your uplink switch is trunking those VLAN tags. See Proxmox docs & examples here...
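
    A minimal sketch for /etc/network/interfaces (VLAN 10 and the addresses are examples):

      auto vmbr0
      iface vmbr0 inet manual
              bridge-ports eno1
              bridge-stp off
              bridge-fd 0
              bridge-vlan-aware yes
              bridge-vids 2-4094

      auto vmbr0.10
      iface vmbr0.10 inet static
              address 192.168.10.2/24
              gateway 192.168.10.1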
  10. readyspace

    The vGPU virtual machine of A16 cannot be started.

    From the screenshots we can see a few key points: all of your A16 GPU VFs (Virtual Functions) are still bound to the nvidia driver, not vfio-pci. That’s why your VM start command shows: “kvm: error getting device from group 221: No such device”. Verify all devices in group 221 are bound to vfio or...
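
    A sketch of the rebind via sysfs, using the VF address from the screenshots; repeat for every device in the group:

      # Which driver holds the VF right now?
      lspci -nnk -s 8c:02.2

      # Detach it from nvidia and let vfio-pci claim it
      echo vfio-pci > /sys/bus/pci/devices/0000:8c:02.2/driver_override
      echo 0000:8c:02.2 > /sys/bus/pci/drivers/nvidia/unbind
      echo 0000:8c:02.2 > /sys/bus/pci/drivers_probe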
  11. readyspace

    IP over USB?

    I don’t think it will work. The error message you saw, “UDC core: g_ether: couldn’t find an available UDC”, is telling: it means the USB Device Controller (UDC) subsystem did not find any eligible hardware endpoint in “device / gadget” mode to drive the g_ether gadget. In short — the kernel is not...
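
    You can confirm this from the kernel’s side; an empty listing means no UDC hardware is available for gadget mode:

      ls /sys/class/udc
      dmesg | grep -i udc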
  12. readyspace

    Proxmox hangs after upgrade to v9

    Hi, I usually don’t do this. I will just back up all the VMs, then format the node and install the latest PVE. After that, import the VMs back into PVE. This way is cleaner.
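
    As a sketch (VM ID, storage name, and paths are examples):

      # Before the reinstall: dump each VM to external storage
      vzdump 100 --storage backup-nfs --mode stop --compress zstd

      # After the fresh install: restore from the archive
      qmrestore /mnt/pve/backup-nfs/dump/vzdump-qemu-100-*.vma.zst 100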
  13. readyspace

    Proxmox hangs after upgrade to v9

    For full-node rollback, a disk image (Clonezilla, etc.) is the quickest path.
  14. readyspace

    The vGPU virtual machine of A16 cannot be started.

    Hi, it looks like your Virtual Function (VF) for vGPU passthrough is not properly bound to VFIO at the time the VM starts. Your VF (0000:8c:02.2) belongs to IOMMU group 221, but that group is not attached to VFIO yet. Run: lspci -nnk | grep -A 3 8c:02.2 and see what driver is bound. If it’s...
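
    To see everything that has to be released from its host driver along with the VF:

      ls /sys/kernel/iommu_groups/221/devices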
  15. readyspace

    Proxmox hangs after upgrade to v9

    Hi, looks like something happened during your upgrade from 8 to 9. Usually I’ll move the VMs out, then do a totally fresh install of PVE for a major upgrade, due to changes to the kernel.
  16. readyspace

    Full Server backup + restore solution

    If I’m not wrong, it is possible to install the Veeam agent in PVE and back up the whole host. Anyone tried it?
  17. readyspace

    Correct VM NUMA config on 2 sockets HOST

    Hi, in numa0/numa1, the cpus= list refers to guest vCPU indexes (0…vCPUs-1), not host CPU IDs. Affinity is the host cpuset for the whole VM process (all vCPU threads), not per-vCPU. Proxmox VE (QEMU) doesn’t expose per-vCPU pinning in the VM config, so you can’t directly “tie” guest vNUMA 0...
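
    A hypothetical 8-vCPU VM config to illustrate the distinction (the host CPU IDs in affinity are made up):

      cores: 4
      sockets: 2
      numa: 1
      numa0: cpus=0-3,hostnodes=0,memory=8192,policy=bind
      numa1: cpus=4-7,hostnodes=1,memory=8192,policy=bind
      affinity: 0-3,32-35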
  18. readyspace

    Ceph Pool PGs Reduced to 32 by Autoscale Despite Setting 256 – Is This Normal?

    Yes—this is expected with the PG autoscaler. With a mostly empty pool, Ceph starts low (often 32 PGs) and grows PGs as data increases unless you guide it with bulk, target_size_ratio/bytes, or a pg_num_min.
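
    For example, with a hypothetical pool name:

      # Tell the autoscaler the pool will hold a lot of data
      ceph osd pool set mypool bulk true
      # Or give it a size hint and/or a hard floor
      ceph osd pool set mypool target_size_ratio 0.5
      ceph osd pool set mypool pg_num_min 128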
  19. readyspace

    Failed to prepare disks for restore

    Hi @Chirath Perera, you might want to report this to Veeam. See what they say.
  20. readyspace

    Failed to prepare disks for restore

    Are you using PVE 9? If so, Veeam’s restore paths do not yet fully support Proxmox 9.