readyspace's latest activity

  • readyspace
    Hi, sounds like a tagging mismatch between Proxmox and OPNSense. You need to do bonding first — create LACP bond (mode 802.3ad) → bridge it to VLAN-aware vmbr → connect OPNSense VM NIC (no VLAN tag). Then do VLANs (10/20/30/40) inside OPNSense...
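    Roughly, the /etc/network/interfaces side of that would look like the sketch below (eno1/eno2 are placeholders for your two uplinks):
      auto bond0
      iface bond0 inet manual
          bond-slaves eno1 eno2
          bond-miimon 100
          bond-mode 802.3ad
          bond-xmit-hash-policy layer2+3

      auto vmbr0
      iface vmbr0 inet manual
          bridge-ports bond0
          bridge-stp off
          bridge-fd 0
          bridge-vlan-aware yes
          bridge-vids 2-4094
    The OPNsense VM then gets a single virtio NIC on vmbr0 with the VLAN Tag field left empty, so VLANs 10/20/30/40 arrive tagged inside the VM and OPNsense splits them on its own VLAN interfaces.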
  • readyspace
    readyspace replied to the thread network speed problems.
    Sounds like you’re bumping into a VM network stack bottleneck (bridge + single-queue) rather than a cable limit — have you tried enabling virtio multiqueue + jumbo frames (MTU 9000) end-to-end and pinning IRQs/queues to vCPUs?
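    On the Proxmox side, that is roughly (VM 100, vmbr0, eno1 and 4 queues are placeholders; match queues to the VM's vCPU count):
      # multiqueue virtio NIC plus jumbo MTU on the guest interface
      qm set 100 --net0 virtio,bridge=vmbr0,queues=4,mtu=9000
      # the physical uplink and the bridge have to carry jumbo frames as well
      ip link set eno1 mtu 9000
      ip link set vmbr0 mtu 9000
    Inside the guest you usually still have to enable the queues (ethtool -L <nic> combined 4), and every hop in the path, switch ports included, needs MTU 9000 for it to help.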
  • readyspace
Looks like you’ve figured out part of it. Ceph’s public network must be reachable by all Ceph clients and monitors — that includes all Ceph mon and mgr daemons, and any OSDs or VM hosts accessing RBD/CephFS. If it’s on a VLAN with no...
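    For reference, the split in ceph.conf looks like this (the subnets are made-up examples):
      [global]
          # must be routable from every mon/mgr/OSD and from every PVE host using RBD/CephFS
          public_network  = 10.10.10.0/24
          # optional; carries only OSD-to-OSD replication and backfill traffic
          cluster_network = 10.10.20.0/24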
  • readyspace
Great — now for VLANs: enable the “VLAN aware” option on vmbr0, don’t put your host IP on vmbr0 itself but on a VLAN sub-interface (e.g. vmbr0.10), and make sure your uplink switch is trunking those VLAN tags. See Proxmox docs & examples here...
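    As a hedged example, that layout in /etc/network/interfaces would be along these lines (VLAN 10 and the addresses are assumptions):
      auto vmbr0
      iface vmbr0 inet manual
          bridge-ports eno1
          bridge-stp off
          bridge-fd 0
          bridge-vlan-aware yes
          bridge-vids 2-4094

      # host management IP on a VLAN sub-interface, not on vmbr0 itself
      auto vmbr0.10
      iface vmbr0.10 inet static
          address 192.168.10.5/24
          gateway 192.168.10.1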
  • readyspace
    From the screenshots we can see a few key points: All of your A16 GPU VFs (Virtual Functions) are still bound to the nvidia driver, not vfio-pci. That’s why your VM start command shows: kvm: error getting device from group 221: No such device...
  • readyspace
Thank you. It’s all clear now.
  • readyspace
    readyspace replied to the thread IP over USB?.
I don’t think it will work. The error message you saw, “UDC core: g_ether: couldn’t find an available UDC”, is telling: it means the USB Device Controller (UDC) subsystem did not find any eligible hardware endpoint in “device / gadget” mode to...
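    If you want to confirm it on the device, two quick checks (assuming you have a shell on it):
      # empty output = no USB device-mode (gadget) controller registered, so g_ether has nothing to bind to
      ls /sys/class/udc
      # see whether a UDC driver is loaded at all (dwc2/dwc3/ci_hdrc or similar, depending on the SoC)
      lsmod | grep -iE 'dwc2|dwc3|ci_hdrc|udc'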
  • readyspace
Hi, I usually don’t do this. I just back up all the VMs, then format the node and install the latest PVE. After that, I import the VMs back into PVE. This way is cleaner.
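    A minimal sketch of that round trip (VM ID 100 and the storage names are placeholders):
      # before the reinstall: dump each VM to external/NFS backup storage
      vzdump 100 --storage backup-nfs --mode snapshot --compress zstd
      # after the fresh PVE install: restore the dump onto the new node
      qmrestore /mnt/pve/backup-nfs/dump/vzdump-qemu-100-*.vma.zst 100 --storage local-lvm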
  • readyspace
    For full-node rollback, a disk image (Clonezilla, etc.) is the quickest path.
  • readyspace
    Hi, it looks like your Virtual Function (VF) for vGPU passthrough is not properly bound to VFIO at the time the VM starts. Your VF (0000:8c:02.2) belongs to IOMMU group 221, but that group is not attached to VFIO yet. Run: lspci -nnk | grep -A...
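    If it is still held by the nvidia driver, one way to hand it over before the VM starts (a sketch using the 0000:8c:02.2 address from your output):
      # check which driver currently owns the VF
      lspci -nnk -s 0000:8c:02.2
      # release it from nvidia and bind it to vfio-pci
      modprobe vfio-pci
      echo 0000:8c:02.2 > /sys/bus/pci/devices/0000:8c:02.2/driver/unbind
      echo vfio-pci > /sys/bus/pci/devices/0000:8c:02.2/driver_override
      echo 0000:8c:02.2 > /sys/bus/pci/drivers/vfio-pci/bind
    Every endpoint in IOMMU group 221 has to be released the same way (or left unbound) before QEMU can claim the group.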
  • readyspace
Hi, looks like something happened during your upgrade from 8 to 9. Usually I move the VMs out, then do a totally fresh install of PVE for a major upgrade, because of the kernel changes.
  • readyspace
If I’m not wrong, it is possible to install the Veeam agent in PVE and back up the whole host. Has anyone tried it?
  • readyspace
    Hi, in numa0/numa1, the cpus= list refers to guest vCPU indexes (0…vCPUs-1), not host CPU IDs. Affinity is the host cpuset for the whole VM process (all vCPU threads), not per-vCPU. Proxmox VE (QEMU) doesn’t expose per-vCPU pinning in the VM...
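    A hedged example of how the pieces fit together in the VM’s .conf (an 8-vCPU, 16 GB guest spread over two host NUMA nodes; the host core ranges in affinity are made up):
      memory: 16384
      sockets: 2
      cores: 4
      numa: 1
      # cpus= lists GUEST vCPU indexes (0-7 here), not host CPU IDs
      numa0: cpus=0-3,hostnodes=0,memory=8192,policy=bind
      numa1: cpus=4-7,hostnodes=1,memory=8192,policy=bind
      # affinity is a HOST cpuset for the whole QEMU process (all vCPU threads), not per-vCPU pinning
      affinity: 0-7,64-71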
  • readyspace
    Yes—this is expected with the PG autoscaler. With a mostly empty pool, Ceph starts low (often 32 PGs) and grows PGs as data increases unless you guide it with bulk, target_size_ratio/bytes, or a pg_num_min.
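    If you would rather size it up front, those knobs look like this (pool name and values are assumptions):
      # mark the pool as one that will eventually hold a lot of data
      ceph osd pool set vm-pool bulk true
      # or hint its expected share of raw cluster capacity
      ceph osd pool set vm-pool target_size_ratio 0.3
      # or set a floor below which the autoscaler will not shrink it
      ceph osd pool set vm-pool pg_num_min 128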
  • readyspace
Hi @Chirath Perera, you might want to report this to Veeam. See what they say.
  • readyspace
Are you using PVE 9? If so, Veeam’s restore paths do not yet fully support Proxmox 9.
  • readyspace
    Good find — that explains it. If you still get a response with the fiber unplugged, the link was never actually active. Check that your SFP+ modules or DAC cables are compatible with the HP 546SFP+ (Mellanox ConnectX-3 Pro), and run ethtool...
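    For those checks, something along these lines (enp3s0f0 is a placeholder for your 546SFP+ port):
      # link state, negotiated speed and detected port type
      ethtool enp3s0f0
      # SFP+/DAC module EEPROM: vendor, part number, cable type and length
      ethtool -m enp3s0f0
      # driver (mlx4_en on ConnectX-3 Pro) and firmware version in use
      ethtool -i enp3s0f0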
  • readyspace
Hi, please share the following information if possible: Host network config: post the relevant parts of /etc/network/interfaces (or whatever file you use) showing eno1, the bridge (vmbr), and how they’re connected. VM NIC info: inside the...
  • readyspace
Hi @logan893, yes, you are right. The older card doesn’t have the necessary drivers for what you are looking for.
  • readyspace
    Thanks for sharing your config — you’re almost there. A few quick points: Keep only one bridge (vmbr0) — you already made it VLAN-aware, so no need to create vmbr0.110 unless the Proxmox host itself needs an IP inside that VLAN. For your VMs...
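    For the VM side, a quick sketch (the VM ID is a placeholder; VLAN 110 is taken from your config):
      # attach the NIC to the VLAN-aware vmbr0 and let Proxmox tag VLAN 110 on that port
      qm set 101 --net0 virtio,bridge=vmbr0,tag=110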