Search results

  1. VictorSTS

    ZFS rpool isn't imported automatically after update from PVE9.0.10 to latest PVE9.1 and kernel 6.17 with viommu=virtio

    I thought it would be related to nested virtualization / virtio vIOMMU. Haven't seen any issue on bare metal yet. Can you manually import the pool and continue the boot once the disks are detected (zpool import rpool)?
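
    A minimal sketch of that manual recovery, assuming boot drops you into the initramfs emergency shell (the pool name rpool comes from the thread title):

      # import the root pool without mounting its datasets, then resume boot
      zpool import -N rpool
      exit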
  2. VictorSTS

    Any news on lxc online migration?

    For reference, https://forum.proxmox.com/threads/proxmox-ve-8-4-released.164821/page-2#post-762577 CRIU doesn't seem to be powerful / mature enough to be used as an option, and IMHO it seems that Proxmox would have to develop a tool for live-migrating an LXC, something that no one has done yet and...
  3. VictorSTS

    iSCSI multipath issue

    As mentioned previously, PVE connects iSCSI disks later in the boot process than multipath expects them to be online, so multipath won't be able to use the disks. You can't use multipath with iSCSI disks managed/connected/configured by PVE. You must use iscsiadm and not connect them...
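
    A hedged sketch of connecting the disks with iscsiadm instead of letting PVE manage them (portal address and target IQN are placeholders):

      # discover targets on the portal
      iscsiadm -m discovery -t sendtargets -p 192.168.10.50
      # log in to the target, then make the session start at boot,
      # early enough for multipath to pick up the paths
      iscsiadm -m node -T iqn.2005-10.org.example:target0 -p 192.168.10.50 --login
      iscsiadm -m node -T iqn.2005-10.org.example:target0 -p 192.168.10.50 -o update -n node.startup -v automatic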
  4. VictorSTS

    PSA: PVE 9.X iSCSI/iscsiadm upgrade incompatibility

    IIUC, this may/will affect iSCSI deployments configured on PVE8.x when updating to PVE9.x, am I right? New deployments with PVE9.x should work correctly? Thanks!
  5. VictorSTS

    [PROXMOX CLUSTER] Add NFS resource to Proxmox from a NAS for backup

    Can't really recommend anything specific without infrastructure details, but I would definitely use some VPN and tunnel NFS traffic inside it, both for obvious security reasons and ease of management on the WAN side (you'll only need to expose the VPN service port to the internet). Now that you...
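
    As a rough sketch, once the tunnel is up the NFS storage would point at the NAS's VPN address (storage name, address and export path are made up):

      # add the NFS export, reached through the tunnel, as a backup storage
      pvesm add nfs nas-backup --server 10.8.0.2 --export /volume1/pve-backup --content backup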
  6. VictorSTS

    ZFS rpool isn't imported automatically after update from PVE9.0.10 to latest PVE9.1 and kernel 6.17 with viommu=virtio

    In one of my training labs, I have a series of training VMs running PVE with nested virtualization. These VMs have two disks in a ZFS mirror for the OS, UEFI, secure boot disabled, and use systemd-boot (no GRUB). The VMs use machine: q35,viommu=virtio for the PCI passthrough explanation and webUI walkthrough...
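
    For reference, a hedged sketch of the relevant VM setting (VMID 100 is a placeholder):

      # enable the virtio vIOMMU on a q35 machine for nested passthrough demos
      qm set 100 --machine q35,viommu=virtio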
  7. VictorSTS

    [PROXMOX CLUSTER] Add NFS resource to Proxmox from a NAS for backup

    It's unclear if you are using some VPN or a direct connection over public IPs (hope not, NFS has no encryption), but maybe there's some firewall and/or NAT rule that doesn't allow RPC traffic properly? Maybe your Synology uses some port range for RPC and those aren't the same on the FW?
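
    A quick, hedged way to check which ports RPC actually advertises (the NAS address is a placeholder); those are the ports the firewall/NAT rules must allow:

      # list the RPC programs and ports the NAS announces
      rpcinfo -p 203.0.113.10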
  8. VictorSTS

    Abysmally slow restore from backup

    IIUC, that patch seems to be applied to the verify tasks to improve their performance. If that's the case, words can't express how eager I am to test it once the patch gets packaged!
  9. VictorSTS

    redundant qdevice network

    This is wrong by design: your infrastructure devices must be 100% independent from your VMs. If your PVE hosts need to reach a remote QDevice, they must reach it on their own (e.g. run a WireGuard tunnel on each PVE host). From the point of view of corosync/PVE, a QDevice is "just an IP". How you...
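
    A minimal sketch of such a per-host tunnel, assuming WireGuard with made-up keys and addresses (/etc/wireguard/wg0.conf on each PVE host):

      [Interface]
      Address = 10.99.0.11/24
      PrivateKey = <host-private-key>

      # the remote QDevice
      [Peer]
      PublicKey = <qdevice-public-key>
      Endpoint = qdevice.example.net:51820
      AllowedIPs = 10.99.0.1/32
      PersistentKeepalive = 25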
  10. VictorSTS

    Snapshots as Volume-Chain Creates Large Snapshot Volumes

    IMHO, a final delete/discard should be done, too. If no discard is sent, it delegates to the SAN what to do with those zeroes, and depending on SAN capabilities (mainly thin provisioning, but also compression and deduplication) it may not free the space from the SAN's perspective. And yes, some SANs...
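
    As a hedged illustration, the kind of discard that tells the SAN the blocks are free (the device path is a placeholder, and this is destructive):

      # send a discard/TRIM for the whole volume before removing it
      blkdiscard /dev/mapper/example-vol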
  11. VictorSTS

    PVE8to9 Prompts to Remove systemd-boot on ZFS & UEFI

    It's explicitly explained in the PVE 8 to 9 documentation [1]: that package has been split in two on Trixie, hence systemd-boot isn't needed. I've already upgraded some clusters and removed that package when pve8to9 suggested it, and they boot up correctly from any of the ZFS boot disks. [1]...
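
    A hedged sketch of the removal plus a sanity check of the boot setup afterwards:

      apt remove systemd-boot
      # confirm the ESPs are still configured and in sync
      proxmox-boot-tool status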
  12. VictorSTS

    Bug in wipe volume operation

    I reported this already [1] and it is claimed to be fixed in PVE9.1, released today, although I haven't tested it yet. [1] https://bugzilla.proxmox.com/show_bug.cgi?id=6941
  13. VictorSTS

    Proxmox with 48 nodes

    You need two corosync links. For 12 nodes on gigabit I would use dedicated links for both, just in case, even if a dedicated link just for Link0 would be enough. The most I've run in production with gigabit corosync is 8 hosts, with no problems at all.
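
    A sketch of what that looks like at cluster creation time (addresses are placeholders; links can also be added later):

      # dedicated NICs/subnets for corosync Link0 and Link1
      pvecm create mycluster --link0 10.10.10.1 --link1 10.10.20.1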
  14. VictorSTS

    Random freezes due to host CPU type

    Given the logs you posted, I would start by removing Docker from that host (it's not officially supported) and not exposing critical services like SSH to the internet. You also mention "VNC", which makes me think maybe you installed PVE on top of Debian and might be using a desktop environment...
  15. VictorSTS

    Suggestions for low cost HA production setup in small company

    There is RSTP [1]. Maybe, but it does allow using both links simultaneously, while with RSTP only one is in use and the other is fallback only. Which you should have anyway, connected to two switches with MLAG/stacking to avoid the network being an SPOF. But yes, you would need 4 NICs per host...
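
    A rough /etc/network/interfaces sketch of such a 2-NIC LACP bond across two MLAG/stacked switches (interface names and address are placeholders):

      auto bond0
      iface bond0 inet manual
          bond-slaves eno1 eno2
          bond-mode 802.3ad
          bond-miimon 100

      auto vmbr0
      iface vmbr0 inet static
          address 192.0.2.10/24
          bridge-ports bond0
          bridge-stp off
          bridge-fd 0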
  16. VictorSTS

    Suggestions for low cost HA production setup in small company

    If Ceph doesn't let you write, it is because some PG(s) don't have enough OSDs to fulfill the size/min_size set on a pool. In a 3-host Ceph cluster, for that to happen you either have to: lose 2 hosts: you won't have quorum on either Ceph or PVE and your VMs won't work until at least one host...
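
    To check those values on a pool (the pool name is a placeholder):

      ceph osd pool get vm-pool size
      ceph osd pool get vm-pool min_size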
  17. VictorSTS

    Proxmox Ceph Performance

    That data means little if you don't post the exact fio test you ran. AFAIR, the benchmark that Ceph does is a 4k write bench to find out the IOPS capacity of the drive. You should bench that with fio. Also, I would run the same bench on a host/disk that seems to provide proper performance and...
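
    A hedged example of such a 4k sync-write bench (destructive: run it only against a spare disk; the device path is a placeholder):

      fio --name=4k-sync-write --filename=/dev/sdX --rw=randwrite \
          --bs=4k --iodepth=1 --numjobs=1 --direct=1 --sync=1 \
          --runtime=60 --time_based --group_reporting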
  18. VictorSTS

    Proxmox Ceph Performance

    Tell Ceph to benchmark those drives again on OSD start and restart the service when appropriate: ceph config set osd osd_mclock_force_run_benchmark_on_init true There's also another ceph tell-like command to run a benchmark right now, but I don't remember it, and it may also be affected by real...
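
    The on-the-spot command alluded to is presumably the OSD bench (the OSD id 0 is a placeholder):

      # benchmark a single OSD right now
      ceph tell osd.0 bench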
  19. VictorSTS

    PVE is killing my WinServer2025 VMs

    Those PVE logs only show that PVE is removing the network interfaces related to VMIDs 101 and 106. Check the event log inside the VM. I have some Win2025 test VMs running 24x7 both on PVE8.4 and PVE9 without such an issue. Although it doesn't seem to be your case, triple-check there are no OOM events with...
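
    A quick, hedged check for OOM kills on the host side:

      # look for out-of-memory kills in the kernel log
      journalctl -k | grep -i -E "oom|out of memory"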