Recent content by VictorSTS

  1. VictorSTS

    backup stop mode doesn't stop the VM

You need to understand how backup works, specifically in stop mode while the VM is running: PVE tries to shut down the VM. If the QEMU Guest Agent is configured and running, the shutdown signal is issued through it; otherwise, PVE sends an ACPI signal, which the guest OS may or may not honor. It waits up to 10 minutes for the...
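A quick way to check which shutdown path a backup in stop mode will take is to see whether the guest agent is enabled and responding. A minimal sketch, assuming a VM with ID 100 (hypothetical):

```shell
# "agent: 1" in the config means PVE will try the QEMU Guest Agent first
qm config 100 | grep '^agent'
# Succeeds only if the agent is actually running inside the guest;
# if this fails, PVE falls back to the ACPI shutdown signal.
qm agent 100 ping
```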
  2. VictorSTS

    PBS Verify duration on large HDD-based datastore — how to tune settings?

    I mean a "RAID0 of two RAIDz2 vdevs". Something like this: zpool create tank \ raidz2 disk1 disk2 disk3 disk4 disk5 disk6 disk7 \ raidz2 disk8 disk9 disk10 disk11 disk12 disk13 disk14 That's true, as I mentioned above: that happens because ZFS can only issue IOPS at the vdev level...
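The layout described above, laid out as a runnable command: one pool striped across two 7-disk RAIDz2 vdevs. Pool and disk names are placeholders; adapt them to your system.

```shell
# Two raidz2 vdevs in one pool; ZFS stripes writes across both vdevs,
# roughly doubling the IOPS of a single raidz2 vdev.
zpool create tank \
    raidz2 disk1 disk2 disk3 disk4 disk5 disk6 disk7 \
    raidz2 disk8 disk9 disk10 disk11 disk12 disk13 disk14

# Should show two raidz2-N groups side by side at the top level of the pool
zpool status tank
```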
  3. VictorSTS

    PBS Verify duration on large HDD-based datastore — how to tune settings?

    Good point!! I did assume OP had at least a mirror of NVMe drives for the special device. It should be a 3-way mirror to give it the same redundancy as the HDD part. None of those settings will really help increase verify performance on your big zpool; maybe on sync, if a bigger ARC allows some...
  4. VictorSTS

    PBS Verify duration on large HDD-based datastore — how to tune settings?

    Find out yourself with your hardware: apt install sysstat. Start two terminals: iostat -dx 2 and top -H. Increase readers until the load shown by iostat for every disk is ~90% (or some lower limit, to leave headroom for other activities on the same zpool). Increase workers by one if you see all default 4...
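The tuning loop above, as concrete commands. Run each monitor in its own terminal while a verify job is running; the ~90% utilization ceiling is the heuristic from the post, not a hard limit.

```shell
# One-time: install the iostat tool
apt install sysstat

# Terminal 1: per-disk extended stats every 2 seconds.
# Watch the %util column for each member disk of the zpool.
iostat -dx 2

# Terminal 2: per-thread view, to see how busy the verify workers are
top -H
```

Raise the reader count while every disk's %util stays under your target, then nudge workers up one at a time and re-check.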
  5. VictorSTS

    Proxmox update from 8.4 to 9

    You must first upgrade to the latest 8.4 (I think it's 8.14.17 at the time of writing) and Ceph 19.2.3 (Ceph Reef isn't supported in PVE 9). This is clearly stated in the upgrade docs [1]; the Ceph upgrade docs are here [2]. Once you are on the latest 8.4.x, and before upgrading to PVE 9, I suggest you pin the...
  6. VictorSTS

    Ceph rbd du shows usage 2-4x higher than inside VM

    Yes, it can be run on a live VM in the sense that the command runs, i.e. it doesn't check whether the given RBD image has an owner/lock, so we can suppose it's safe. As you mention, the Ceph docs don't explicitly say whether it can be run live or not. That said, I use it from time to time on labs and training...
  7. VictorSTS

    Ceph rbd du shows usage 2-4x higher than inside VM

    The problem could be that your "trim" stopped working at some point for some reason (i.e. discard wasn't ticked in the VM's disk configuration), and even if fstrim tries to discard the whole free space, the underlying storage stack will only discard "new free space since the last discard"...
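To restore the trim path end to end, discard has to be enabled on the virtual disk and a trim run inside the guest. A sketch, assuming VM ID 100 with its disk scsi0 on storage local-lvm (all hypothetical names):

```shell
# On the PVE host: re-add the discard=on option to the disk definition
qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on

# Inside the (Linux) guest, after a reboot or disk re-plug:
# trim all mounted filesystems that support it, verbosely
fstrim -av
```

On Windows guests the equivalent of fstrim is the "Optimize Drives" retrim.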
  8. VictorSTS

    PVE 9.1.1 Memory bug ? + Windows Server 2025 installation experience sharing

    For Windows to properly report used memory to PVE you need: - the VirtIO Balloon driver and service installed and running (installed and enabled automatically by the VirtIO ISO installer). Don't confuse this with the QEMU Guest Agent, which does different things, like VSS integration. - Enable balloon on the VM...
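The host-side part of the checklist above can be done from the CLI. A sketch, assuming VM ID 100 (hypothetical):

```shell
# Enable ballooning with a 2048 MiB minimum; a value of 0 disables it
qm set 100 --balloon 2048

# Confirm the option is now present in the VM configuration
qm config 100 | grep balloon
```

The guest-side part (Balloon driver and service) still has to be verified inside Windows.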
  9. VictorSTS

    Hook script not executed on source host of migrating VM

    IIRC, unfortunately pre/post-migration hook scripts aren't implemented yet [1]. The source PVE will have no clue that the VM is no longer running on it. [1] https://bugzilla.proxmox.com/show_bug.cgi?id=1996
  10. VictorSTS

    Ceph squid OSD crash related to RocksDB ceph_assert(cut_off == p->length)

    Thanks for the heads up. Pretty sure most were created with Ceph Reef, except a few that got recreated recently with Squid 19.2.3. I'm aware of that bug, but given that I don't use EC pools (the Ceph bug report mentions it seems to only happen on OSDs that hold EC pools), I never really paid attention...
  11. VictorSTS

    Ceph squid OSD crash related to RocksDB ceph_assert(cut_off == p->length)

    @fstrankowski I'm fully aware of the risks of an OSD being full and know how to deal with that, but in no case should an OSD break because of that ;) Fragmentation definitely has an impact on this, and I will watch it more closely from now on. Anyway, I'm expecting new servers for this cluster...
  12. VictorSTS

    Ceph squid OSD crash related to RocksDB ceph_assert(cut_off == p->length)

    In short: if Ceph warns you about something, do something about it. I read the full bug report and found this comment [1]: "This issue seems to mostly affect disks which were heavily fragmented." Mine are, and in fact I have some warnings related to this, although the webUI doesn't show them...
  13. VictorSTS

    Ceph squid OSD crash related to RocksDB ceph_assert(cut_off == p->length)

    PVE 8.4.14 + Ceph 19.2.3, 3-node cluster. All disks are PCIe NVMe. Different pools, some with zstd compression enabled. I've been seeing OSDs crash lately with the same failure. The journal shows that it is unable to properly run RocksDB, with an assert message. There are a few entries like these every...
  14. VictorSTS

    I want nothing more than encrypted push sync

    @Zappes please link the Bugzilla you mention here [1]. I would love to be able to pull (not push) encrypted syncs where the source is unencrypted for any reason and the destination PBS must store encrypted backups. [1]...
  15. VictorSTS

    Questions about PBS local sync

    Say I have a namespace called "PVE" where the PVE cluster stores its backups. In the same datastore, I have another namespace called "DELETED". When a VM is deleted from the PVE cluster, I move its backups from namespace "PVE" to "DELETED" in order to keep them there for some time. To make that...
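One way to do such a move is directly on-disk, since in a PBS datastore namespaces are plain "ns/&lt;name&gt;" subdirectories and VM backup groups live in "vm/&lt;vmid&gt;" directories inside them. A minimal sketch, assuming the datastore is mounted at /mnt/datastore/store1 and the deleted VM had ID 100 (both hypothetical); do this only while no backup, sync, or GC job is touching these groups:

```shell
# Make sure the target namespace's group directory exists
mkdir -p /mnt/datastore/store1/ns/DELETED/vm

# Move the whole backup group of VM 100 from "PVE" to "DELETED"
mv /mnt/datastore/store1/ns/PVE/vm/100 \
   /mnt/datastore/store1/ns/DELETED/vm/
```

Chunks are shared datastore-wide in the .chunks directory, so this only moves metadata; no backup data is copied or lost.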