Recent content by VictorSTS

  1. VictorSTS

    Subscriptions and new hardware

    Can't see where I've been hostile in any way. I've been trying to propose methods and alternatives that you didn't like, and you insist on doing things "the VMware way". That's simply not how PVE works. PVE is not a 1:1 replacement, but an alternative with its pros and cons. Accepting its...
  2. VictorSTS

    Subscriptions and new hardware

    You can get that very same behavior on PVE, either with or without additional subscriptions for the new hardware. I've already shown you how to do it without any extra cost. In fact, as the Enterprise repo has slightly older packages than no-subscription, you could install the very same versions that...
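    As a sketch of the repository setup behind this (repo lines are the standard Proxmox ones for PVE 8 on Debian Bookworm; the exact package version below is a made-up example):

```shell
# Enterprise repo (needs a valid subscription key on the node):
#   /etc/apt/sources.list.d/pve-enterprise.list
#   deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise
#
# No-subscription repo (no key needed, slightly newer packages):
#   /etc/apt/sources.list.d/pve-no-subscription.list
#   deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription

apt update
apt policy pve-manager          # list candidate versions per repo
apt install pve-manager=8.4.1   # pin an explicit version (placeholder number)
```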
  3. VictorSTS

    Subscriptions and new hardware

    The way I do it:
    - Src cluster has subscription.
    - Update the nodes to the latest version.
    - Install new servers, configure network.
    - Move subscription to new nodes.
    - Install latest packages on new nodes.
    - Setup Ceph, storages, backups, users, etc. (if on the same cluster most of this gets...
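    The "move subscription" step can be sketched with PVE's subscription CLI (the key below is a placeholder):

```shell
# On the new node, apply the subscription key moved from the old one:
pvesubscription set pve2c-xxxxxxxxxx
pvesubscription get              # verify the key is registered and active
apt update && apt dist-upgrade   # then pull the latest enterprise packages
```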
  4. VictorSTS

    Subscriptions and new hardware

    @niteshadow While technically you could, you are breaking two golden rules:
    - Good practice dictates having the same package versions on every node.
    - Each server in your cluster needs its own subscription based on its specific socket count. All nodes within a cluster must be subscribed at the same...
  5. VictorSTS

    Proxmox/Ceph - Disable OSD rebalancing

    Seems you are mixing concepts here: the balancer MGR module doesn't do the recovery/backfill when an OSD goes IN/OUT; that is a core feature of Ceph managed by MONs and OSDs, not by a MGR module. The balancer's function is to spread PGs among all available OSDs and try to assign a similar amount of PGs...
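    The distinction can be seen on the CLI: the balancer is an optional module you can switch off, while recovery happens regardless (a sketch; adjust to your cluster):

```shell
ceph balancer status   # show the balancer module's mode and whether it is active
ceph balancer off      # stop the balancer from shuffling PGs for evenness
# Recovery/backfill after an OSD is marked OUT still runs either way;
# to keep OSDs from being auto-marked OUT during maintenance instead:
ceph osd set noout
```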
  6. VictorSTS

    Proxmox/Ceph - Disable OSD rebalancing

    Correct. The PVE webUI and ceph status will report an OSD as DOWN, and the cluster status will switch to "WARN" as soon as any OSD goes DOWN. There will be no email alerts, though. You'll have to monitor Ceph somehow or at least configure the MGR alerts module [1]. [1] https://docs.ceph.com/en/quincy/mgr/alerts/
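    Enabling the MGR alerts module follows the pattern from the linked docs; hostnames and addresses below are placeholders:

```shell
ceph mgr module enable alerts
ceph config set mgr mgr/alerts/smtp_host smtp.example.com
ceph config set mgr mgr/alerts/smtp_destination admin@example.com
ceph config set mgr mgr/alerts/smtp_sender ceph@example.com
# Defaults assume SSL on port 465; for plain SMTP on port 25:
ceph config set mgr mgr/alerts/smtp_port 25
ceph config set mgr mgr/alerts/smtp_ssl false
# Seconds between health re-checks/notifications:
ceph config set mgr mgr/alerts/interval 300
```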
  7. VictorSTS

    Proxmox/Ceph - Disable OSD rebalancing

    Because that's the minimum ratio of OSDs that will be kept IN no matter what, not the ratio that decides when to trigger a rebalance, which happens as soon as an OSD is marked OUT.
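    A quick sketch of how to inspect that option, and of the flag normally used when you actually want to postpone rebalancing:

```shell
# Floor on the fraction of OSDs that may remain IN after auto-OUT
# (i.e. how many OSDs the MONs will refuse to mark OUT automatically):
ceph config get mon mon_osd_min_in_ratio
# To postpone rebalancing during planned maintenance, use noout instead:
ceph osd set noout      # before the work
ceph osd unset noout    # once the OSDs are back UP
```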
  8. VictorSTS

    Proxmox/Ceph - Disable OSD rebalancing

    In a 3-node cluster with size=3, min_size=2 and the default replica rule, Ceph won't rebalance anything if one node fails. To comply with the default rule ("three replicas on three OSDs located in three different servers") you need 3 servers; if there are only 2, your PGs will stay undersized until the...
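    This can be checked per pool (the pool name below is an example):

```shell
ceph osd pool get mypool size       # replicas wanted, e.g. 3
ceph osd pool get mypool min_size   # replicas needed to keep serving I/O, e.g. 2
# With size=3 and a host-level failure domain on 3 nodes, losing one node
# leaves PGs undersized: I/O continues (min_size=2 is still met) but the
# third replica cannot be recreated until a third host is available again.
ceph health detail                  # shows the undersized/degraded PGs
```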
  9. VictorSTS

    File restore and symlinks

    Well, I didn't say it was solved, but checking bug report 4995, "Improve file restore .zip format to include symlinks", it seems to be solved with v4.0.19 of PBS [1]. That means it should work when restoring from PBS. It doesn't mention whether it works when restoring from PVE. In fact...
  10. VictorSTS

    backup stop mode doesn't stop the VM

    You need to understand how backup works. Specifically in stop mode:
    - VM is running.
    - PVE tries to shut down the VM:
      - If the QEMU Agent is configured and running, the shutdown signal is issued through it.
      - Otherwise, it sends an ACPI signal, which the guest OS may or may not honor.
    - It waits up to 10 minutes for the...
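    The mode in question is selected on the vzdump command line; a minimal sketch (VM ID and storage name are examples):

```shell
# Stop-mode backup of VM 100 to the storage named "backupstore":
vzdump 100 --mode stop --storage backupstore
# PVE attempts a clean guest shutdown first (agent if available, else ACPI),
# takes the backup with the guest OS down, then starts the VM again.
```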
  11. VictorSTS

    PBS Verify duration on large HDD-based datastore — how to tune settings?

    I mean a "RAID0 of two RAIDZ2 vdevs". Something like this:

    zpool create tank \
      raidz2 disk1 disk2 disk3 disk4 disk5 disk6 disk7 \
      raidz2 disk8 disk9 disk10 disk11 disk12 disk13 disk14

    That's true, as I mentioned above: that happens because ZFS can only issue IOPS at vdev level...
  12. VictorSTS

    PBS Verify duration on large HDD-based datastore — how to tune settings?

    Good point!! I did assume OP had at least a mirror of NVMe drives for the special device. It should be a 3-way mirror to give it the same redundancy as the HDD part. None of those settings will really help increase verify performance on your big zpool; maybe on sync, if a bigger ARC allows some...
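    Adding such a special device could look like this (pool and device names are examples):

```shell
# Add a 3-way mirrored special vdev for metadata/small blocks:
zpool add tank special mirror nvme0n1 nvme1n1 nvme2n1
# Caveat: top-level vdevs cannot be removed from a pool that contains
# raidz vdevs, and losing the special vdev loses the whole pool, hence
# mirroring it at least as redundantly as the data vdevs.
```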
  13. VictorSTS

    PBS Verify duration on large HDD-based datastore — how to tune settings?

    Find out yourself with your hardware: apt install sysstat. Then start two terminals:

    iostat -dx 2
    top -H

    Increase readers until the load shown by iostat for every disk is ~90% (or some lower limit to leave headroom for other activities on the same zpool). Increase workers by one if you see all default 4...
  14. VictorSTS

    Proxmox update from 8.4 to 9

    You must first upgrade to the latest 8.4 (I think it's 8.14.17 at the time of writing) and Ceph 19.2.3 (Ceph Reef isn't supported in PVE9). It's clearly stated in the upgrade docs [1]. The Ceph upgrade docs are here [2]. Once you are on the latest 8.4.x, and before upgrading to PVE9, I suggest you pin the...
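    A rough sketch of the pre-upgrade checks (version numbers will differ on your system; pve8to9 is Proxmox's checklist tool for the 8-to-9 jump):

```shell
apt update && apt dist-upgrade   # bring the node to the latest 8.4.x first
pveversion                       # confirm the running PVE version
ceph versions                    # confirm all daemons run the required Ceph release
pve8to9 --full                   # run the full 8-to-9 upgrade checklist
```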
  15. VictorSTS

    Ceph rbd du shows usage 2-4x higher than inside VM

    Yes, it can be run on a live VM in the sense that the command runs, i.e. it doesn't check whether the given rbd image has an owner/lock, so we can suppose it's safe. As you mention, the Ceph docs don't explicitly say whether it can be run live. That said, I use it from time to time on labs and training...
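    For reference, the command under discussion (pool and image names are examples):

```shell
# Provisioned vs actually allocated space for one image:
rbd du mypool/vm-100-disk-0
# "used" counts allocated RADOS objects; blocks freed inside the guest are
# only returned after the guest issues discard/TRIM on a disk with discard
# enabled, which is why usage can look 2-4x larger than df inside the VM.
```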