Recent content by VictorSTS

  1. VictorSTS

    Ceph: High Latency on NVMe SSD

    16 consumer NVMe drives. Every write Ceph issues is synchronous, and any drive without PLP will show high latency and, once its cache fills, poor sequential performance. Keep in mind that each write goes to 3 disks, and besides the data itself Ceph has to write to the RocksDB of each OSD to keep track of the...
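    A common way to check whether a drive copes with Ceph-style synchronous writes is a single-threaded fio sync write test; drives with PLP typically stay well under a millisecond, consumer drives often don't. A sketch (the device path is a placeholder, and writing to it destroys any data on that drive):

    ```shell
    # Measure 4k synchronous write latency, one write in flight,
    # bypassing the page cache -- roughly what an OSD journal write looks like.
    fio --name=sync-write-latency \
        --filename=/dev/nvme0n1 \
        --rw=write --bs=4k --iodepth=1 --numjobs=1 \
        --direct=1 --sync=1 \
        --runtime=60 --time_based
    ```

    Watch the reported completion latency percentiles rather than the bandwidth figure.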
  2. VictorSTS

    Previous chunks don't seem to be used for new backups

    After a local sync from the namespace where PBS02 stored the synced snapshots to the namespace where PVE stores its backups, PVE now only transfers changed data and in most cases even uses the existing dirty-map (I suspect the dirty-map can be reused if a backup to the "new" encrypted namespace was...
  3. VictorSTS

    Previous chunks don't seem to be used for new backups

    This is what I was missing here. Yes, they are in a new, empty namespace. Now that I think about it again, it makes sense, as there is no "list of snapshots with their list of chunks" to compare against and help PBS decide beforehand whether a chunk should be transferred or not. Will do a local sync to the new...
  4. VictorSTS

    Previous chunks don't seem to be used for new backups

    I've had some bad luck this time with a broken PBS server. This is the sequence of events: - Cluster PVE does its backups to PBS01 (v3.4.4) for nearly two years. - PBS02 (v4.1.1) in a remote location syncs backups from PBS01. This has been working for like a year. It's on version 4.1.1 for at...
  5. VictorSTS

    Snapshot as volume chain for file level storage, use cases?

    I'm fully aware of the usefulness of snapshots as volume chains for LVM, and of not needing them on any file-based storage. That's not what I'm asking. My question is: what is the use case and motivation for using snapshots as volume chains on file-based storage when there are proven...
  6. VictorSTS

    change proxmox host ip post install cli

    That script doesn't change the IP in any of the needed files, just in the network configuration, and doesn't really add anything over what you can do by hand or via the webUI. Don't use it.
  7. VictorSTS

    change proxmox host ip post install cli

    Change the entry in /etc/hosts too and restart the pveproxy and pve-cluster services (or reboot the host). Details here [1]. Remember this only works if the host isn't in a cluster, which it probably isn't, as it's a single host. [1] https://pve.proxmox.com/wiki/Renaming_a_PVE_node
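    For a standalone host, the steps above boil down to something like the following sketch (the old and new IPs are placeholders; adjust paths to your network setup):

    ```shell
    # Update the host's own entry in /etc/hosts and the network config,
    # then restart the PVE services that cache the old address.
    sed -i 's/192\.0\.2\.10/192.0.2.20/' /etc/hosts
    sed -i 's/192\.0\.2\.10/192.0.2.20/' /etc/network/interfaces
    systemctl restart pveproxy pve-cluster
    ```

    A reboot achieves the same as the final restart if you prefer to be safe.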
  8. VictorSTS

    [SOLVED] Hard Disk + Network missing after Upgrade of Machine Version

    Just stumbled on this. PVE 9 with QEMU 10.1 deprecates VM machine versions older than 6 years [1]. You will have to change the machine version in the VM's hardware settings to >=6, for both i440fx and Q35. This implies that a new virtual motherboard will be used and the guest OS will require...
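    The machine version can also be changed from the CLI with qm; the VMIDs and the exact version strings below are illustrative, pick a version your QEMU still supports:

    ```shell
    # Pin an i440fx VM and a Q35 VM to a newer machine version.
    qm set 100 --machine pc-i440fx-6.2
    qm set 101 --machine pc-q35-6.2

    # Verify what the VM will actually boot with.
    qm config 100 | grep machine
    ```

    As noted above, the guest sees a new virtual motherboard afterwards, so expect the guest OS to re-detect devices.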
  9. VictorSTS

    ZFS rpool isn't imported automatically after update from PVE9.0.10 to latest PVE9.1 and kernel 6.17 with viommu=virtio

    Definitely not the same issue, even if the symptom is the same. I remember having a somewhat similar issue on some Dell long ago (AFAIR it was when PVE 7.0 came out), and enabling all of X2APIC, IOMMU and SR-IOV in the BIOS + a BIOS update solved it at the time.
  10. VictorSTS

    ZFS rpool isn't imported automatically after update from PVE9.0.10 to latest PVE9.1 and kernel 6.17 with viommu=virtio

    I thought it would be related to nested virtualization / virtio vIOMMU. Haven't seen any issue on bare metal yet. Can you manually import the pool and continue the boot once the disks are detected (zpool import rpool)?
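    From the initramfs emergency shell that drops you when the import fails, the manual recovery sketched above looks roughly like this:

    ```shell
    # Import the root pool without mounting its datasets
    # (-N avoids mount conflicts; the boot scripts mount it themselves),
    # then leave the emergency shell to resume the interrupted boot.
    zpool import -N rpool
    exit
    ```

    If this works reliably once the disks have appeared, it points at a device-detection timing problem rather than pool damage.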
  11. VictorSTS

    Any news on lxc online migration?

    For reference, https://forum.proxmox.com/threads/proxmox-ve-8-4-released.164821/page-2#post-762577 CRIU doesn't seem to be powerful / mature enough to be used as an option, and IMHO it seems that Proxmox would have to develop a tool for live-migrating an LXC, something that no one has done yet and...
  12. VictorSTS

    iSCSI multipath issue

    As mentioned previously, PVE connects iSCSI disks later in the boot process than multipath expects them to be online, so multipath won't be able to use the disks. You can't use multipath with iSCSI disks managed/connected/configured by PVE. You must use iscsiadm and not connect them...
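    Setting up the sessions manually with iscsiadm, outside of PVE storage management, looks roughly like this (portal IP and target IQN are placeholders for your SAN's values):

    ```shell
    # Discover the targets offered by the portal.
    iscsiadm -m discovery -t sendtargets -p 192.0.2.50:3260

    # Log in to the target on this path (repeat per portal for multipath).
    iscsiadm -m node -T iqn.2001-05.com.example:target0 -p 192.0.2.50:3260 --login

    # Make the session come up automatically at boot, early enough for multipathd.
    iscsiadm -m node -T iqn.2001-05.com.example:target0 -p 192.0.2.50:3260 \
        --op update -n node.startup -v automatic
    ```

    The multipath device on top is then defined in /etc/multipath.conf as usual, and the resulting mapper device, not the raw iSCSI disks, is what you hand to PVE.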
  13. VictorSTS

    PSA: PVE 9.X iSCSI/iscsiadm upgrade incompatibility

    IIUC, this may/will affect iSCSI deployments configured on PVE8.x when updating to PVE9.x, am I right? New deployments with PVE9.x should work correctly? Thanks!
  14. VictorSTS

    [PROXMOX CLUSTER] Add NFS resource to Proxmox from a NAS for backup

    Can't really recommend anything specific without infrastructure details, but I would definitely use some VPN and tunnel NFS traffic inside it, both for obvious security reasons and ease of management on the WAN side (you'll only need to expose the VPN service port to the internet). Now that you...
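    Once the VPN tunnel is up, attaching the NAS export as backup storage can be done with pvesm; a sketch, where the storage ID, the NAS address inside the VPN, and the export path are all placeholders:

    ```shell
    # Add the NFS export as a PVE storage restricted to backups,
    # reaching the NAS through its VPN-side address.
    pvesm add nfs nas-backup \
        --server 10.8.0.2 \
        --export /volume1/pve-backup \
        --content backup \
        --options vers=4.1

    # Check that the storage is active and mounted.
    pvesm status
    ```

    Keeping the content type limited to backup avoids accidentally running VM disks over the WAN link.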
  15. VictorSTS

    ZFS rpool isn't imported automatically after update from PVE9.0.10 to latest PVE9.1 and kernel 6.17 with viommu=virtio

    On one of my training labs, I have a series of training VMs running PVE with nested virtualization. These VMs have two disks in a ZFS mirror for the OS, UEFI with secure boot disabled, and use systemd-boot (no GRUB). The VM uses machine: q35,viommu=virtio for the PCI passthrough explanation and webUI walkthrough...