Search results

  1. VictorSTS

    Snapshot as volume chain for file level storage, use cases?

    I'm fully aware of the usefulness of snapshots as volume chains for LVM, and aware that they aren't needed on any file-based storage. That's not what I'm asking. My question is what the use case and motivation are for using snapshots as volume chains on file-based storage when there are proven...
  2. VictorSTS

    change proxmox host ip post install cli

    That script doesn't change the IP in any of the needed files, just in the network configuration, and doesn't really add anything to what you can do by hand or via the webUI. Don't use it.
  3. VictorSTS

    change proxmox host ip post install cli

    Change the entry in /etc/hosts too and restart the pveproxy and pve-cluster services (or reboot the host). Details here [1]. Remember this only works if the host isn't in a cluster, which it probably isn't, as it's a single host. [1] https://pve.proxmox.com/wiki/Renaming_a_PVE_node
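
    A minimal sketch of those steps, assuming a standalone host whose new address and hostname (192.0.2.10, pve1) are placeholders:

      nano /etc/network/interfaces           # update the address on the management bridge, e.g. vmbr0
      nano /etc/hosts                        # update the line mapping 192.0.2.10 to pve1
      systemctl restart pveproxy pve-cluster # or simply reboot the host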
  4. VictorSTS

    [SOLVED] Hard Disk + Network missing after Upgrade of Machine Version

    Just stumbled on this. PVE9 with QEMU 10.1 deprecates VM machine versions older than 6 years [1]. You will have to change the machine version in the VM's hardware settings to >= 6, for both i440fx and Q35. This implies that a new virtual motherboard will be used and the guest OS will require...
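
    As an illustration, the machine version can also be bumped from the CLI; VMID 100 and the pinned 8.1 version below are just examples:

      qm set 100 --machine q35             # track the latest Q35 machine version
      qm set 100 --machine pc-q35-8.1      # or pin a specific version >= 6
      qm set 100 --machine pc-i440fx-8.1   # i440fx equivalent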
  5. VictorSTS

    ZFS rpool isn't imported automatically after update from PVE9.0.10 to latest PVE9.1 and kernel 6.17 with viommu=virtio

    Definitely not the same issue, even if the symptom is the same. I remember having a somewhat similar issue on some Dell long ago (AFAIR it was when PVE7.0 came out) and enabling all of x2APIC, IOMMU and SR-IOV in the BIOS, plus a BIOS update, solved it at the time.
  6. VictorSTS

    ZFS rpool isn't imported automatically after update from PVE9.0.10 to latest PVE9.1 and kernel 6.17 with viommu=virtio

    I thought it would be related to nested virtualization / virtio vIOMMU. Haven't seen any issue on bare metal yet. Can you manually import the pool and continue the boot once the disks are detected (zpool import rpool)?
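
    A rough sketch of that manual recovery from the initramfs prompt, assuming the default rpool pool name:

      zpool import -N rpool   # import without mounting once the disks show up
      exit                    # resume the boot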
  7. VictorSTS

    Any news on lxc online migration?

    For reference, https://forum.proxmox.com/threads/proxmox-ve-8-4-released.164821/page-2#post-762577 CRIU doesn't seem to be powerful / mature enough to be used as an option and IMHO seems that Proxmox would have to devel a tool for live migrating an LXC, something that no one has done yet and...
  8. VictorSTS

    iSCSI multipath issue

    As mentioned previously, PVE will try to connect iSCSI disks later in the boot process than multipath expects them to be online, so multipath won't be able to use the disks. You can't use multipath with iSCSI disks managed/connected/configured by PVE. You must use iscsiadm and not connect them...
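
    A sketch of the manual iscsiadm setup this refers to; the portal address and IQN below are placeholders for your SAN:

      iscsiadm -m discovery -t sendtargets -p 192.0.2.50
      iscsiadm -m node -T iqn.2001-05.com.example:target0 -p 192.0.2.50 --login
      # log in automatically at boot, before multipath needs the disks
      iscsiadm -m node -T iqn.2001-05.com.example:target0 -p 192.0.2.50 -o update -n node.startup -v automatic
      multipath -ll   # check that the paths are grouped as expected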
  9. VictorSTS

    PSA: PVE 9.X iSCSI/iscsiadm upgrade incompatibility

    IIUC, this may/will affect iSCSI deployments configured on PVE8.x when updating to PVE9.x, am I right? New deployments with PVE9.x should work correctly? Thanks!
  10. VictorSTS

    [PROXMOX CLUSTER] Add NFS resource to Proxmox from a NAS for backup

    Can't really recommend anything specific without infrastructure details, but I would definitely use some VPN and tunnel NFS traffic inside it, both for obvious security reasons and ease of management on the WAN side (you'll only need to expose the VPN service port to the internet). Now that you...
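
    A minimal WireGuard sketch of that idea; all keys, addresses and the endpoint are placeholders:

      # /etc/wireguard/wg0.conf on the PVE side
      [Interface]
      PrivateKey = <pve-private-key>
      Address = 10.10.10.1/24
      ListenPort = 51820

      [Peer]
      PublicKey = <nas-public-key>
      AllowedIPs = 10.10.10.2/32
      Endpoint = nas.example.org:51820

      # bring the tunnel up, then point the NFS storage at 10.10.10.2 instead of a public IP
      wg-quick up wg0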
  11. VictorSTS

    ZFS rpool isn't imported automatically after update from PVE9.0.10 to latest PVE9.1 and kernel 6.17 with viommu=virtio

    On one of my training labs, I have a series of training VMs running PVE with nested virtualization. These VMs have two disks in a ZFS mirror for the OS, UEFI with secure boot disabled, and use systemd-boot (no GRUB). The VM uses machine: q35,viommu=virtio for the PCI passthrough explanation and webUI walkthrough...
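
    For reference, a sketch of how that machine setting can be applied; VMID 9001 is hypothetical:

      qm set 9001 --machine q35,viommu=virtio
      # equivalent to this line in /etc/pve/qemu-server/9001.conf:
      #   machine: q35,viommu=virtio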
  12. VictorSTS

    [PROXMOX CLUSTER] Add NFS resource to Proxmox from a NAS for backup

    It's unclear if you are using some VPN or a direct connection using public IPs (hope not, NFS has no encryption), but maybe there's some firewall and/or NAT rule that doesn't allow RPC traffic properly? Maybe your Synology uses some port range for RPC and those ports aren't the ones allowed on the firewall?
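
    A quick way to check what the NAS actually exposes, using a placeholder address for the Synology:

      rpcinfo -p 192.0.2.20     # list the RPC programs and the ports they use
      showmount -e 192.0.2.20   # list the exported NFS shares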
  13. VictorSTS

    Abysmally slow restore from backup

    IIUC that patch seems to apply to the verify tasks to improve their performance. If that's the case, words can't express how eager I am to test it once the patch gets packaged!
  14. VictorSTS

    redundant qdevice network

    This is wrong by design: your infrastructure devices must be 100% independent from your VMs. If your PVE hosts need to reach a remote QDevice, they must reach it on their own (i.e. run a WireGuard tunnel on each PVE host). From the point of view of corosync/PVE, a QDevice is "just an IP". How you...
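
    Once each host can reach the QDevice address on its own (placeholder 10.10.10.5 below, e.g. through its own tunnel), the setup itself is just:

      pvecm qdevice setup 10.10.10.5
      pvecm status    # the QDevice should now appear with a vote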
  15. VictorSTS

    Snapshots as Volume-Chain Creates Large Snapshot Volumes

    IMHO, a final delete/discard should be done, too. If no discard is sent, it delegates to the SAN what to do with those zeroes and, depending on SAN capabilities (mainly thin provisioning, but also compression and deduplication), the space may not be freed from the SAN's perspective. And yes, some SANs...
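
    As a rough illustration of the kind of discard meant here, assuming a hypothetical thin LV backing the snapshot volume that is about to be removed:

      blkdiscard /dev/vg_san/vm-100-disk-0   # tell the SAN those blocks are free instead of leaving zeroes behind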
  16. VictorSTS

    PVE8to9 Prompts to Remove systemd-boot on ZFS & UEFI

    It's explicitly explained in the PVE 8 to 9 documentation [1]: that package has been split in two on Trixie, hence the systemd-boot package isn't needed. I've upgraded some clusters already, removed that package when pve8to9 suggested it, and they boot up correctly from any of the ZFS boot disks. [1]...
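
    Roughly what that looks like in practice; the second command is just a sanity check that the ESPs on the ZFS boot disks are still in sync:

      apt remove systemd-boot    # only when pve8to9 suggests it, after the upgrade to Trixie/PVE9
      proxmox-boot-tool status   # confirm the boot disks' ESPs are still configured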
  17. VictorSTS

    Bug in wipe volume operation

    I reported this already [1] and it is claimed to be fixed in PVE9.1, released today, although I haven't tested it yet. [1] https://bugzilla.proxmox.com/show_bug.cgi?id=6941
  18. VictorSTS

    Proxmox with 48 nodes

    You need two corosync links. For 12 nodes on gigabit I would use dedicated links for both, just in case, even if having a dedicated link just for Link0 would be enough. The max I've got in production with gigabit corosync is 8 hosts, with no problems at all.
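
    A sketch of creating a cluster with two dedicated corosync links; the cluster name and addresses are placeholders:

      pvecm create mycluster --link0 10.20.0.1 --link1 10.21.0.1
      # on each additional node, using its own addresses on both corosync networks:
      pvecm add 10.20.0.1 --link0 10.20.0.2 --link1 10.21.0.2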
  19. VictorSTS

    Random freezes due to host CPU type

    Given the logs you posted, I would start by removing Docker from that host (it's not officially supported) and not exposing critical services like SSH to the internet. You also mention "VNC", which makes me think maybe you installed PVE on top of Debian and might be using a desktop environment...