VictorSTS's latest activity

  • VictorSTS
    VictorSTS replied to the thread Subscriptions and new hardware.
    Can't see where I've been hostile in any way. I've been trying to propose methods and alternatives that you didn't like, and you insist on doing things "the VMware way". That's simply not how PVE works. PVE isn't a 1:1 replacement, but an...
  • VictorSTS
    VictorSTS replied to the thread Subscriptions and new hardware.
    You can get that very same behavior on PVE, either with or without additional subscriptions for the new hardware. I've already shown you how to do it without any extra cost. In fact, as the Enterprise repo has slightly older packages than...
  • VictorSTS
    VictorSTS replied to the thread Subscriptions and new hardware.
    The way I do it:
    - Src cluster has subscription.
    - Update the nodes to the latest version.
    - Install new servers, configure network.
    - Move subscription to new nodes.
    - Install latest packages on new nodes.
    - Setup Ceph, storages, backups, users...
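The steps above could be sketched roughly like this. This is a hedged sketch, not the poster's exact procedure: node names, storage IDs, VM IDs, and the backup file name are all placeholders.

```shell
# 1. Bring the source nodes fully up to date first:
apt update && apt dist-upgrade

# 2. After installing the new nodes and moving the subscription keys,
#    back up a guest to storage reachable from both old and new nodes
#    (VM ID 100 and storage name are placeholders):
vzdump 100 --storage shared-backups --mode snapshot

# 3. On a new node, restore the guest from that backup
#    (archive path is illustrative):
qmrestore /mnt/pve/shared-backups/dump/vzdump-qemu-100.vma.zst 100 --storage local-zfs
```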
  • VictorSTS
    VictorSTS replied to the thread Subscriptions and new hardware.
    @niteshadow While technically you could, you are breaking two golden rules: Good practice dictates having the same package versions on every node. Each server in your cluster needs its own subscription based on its specific socket count. All...
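One quick way to verify that package versions actually match across nodes is to compare `pveversion -v` output. A sketch, assuming root SSH access between nodes; hostnames are placeholders:

```shell
# Print the first few package versions from each node for comparison;
# "pve1 pve2 pve3" are placeholder hostnames.
for node in pve1 pve2 pve3; do
    echo "== $node =="
    ssh "root@$node" pveversion -v | head -n 5
done
```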
  • VictorSTS
    Seems you are mixing concepts here: the balancer MGR module doesn't do the recovery/backfill when an OSD goes IN/OUT; that is a core feature of Ceph managed by MONs and OSDs, not a MGR module. The balancer's function is to spread PGs among all...
  • VictorSTS
    Correct. The PVE webUI and ceph status will report that an OSD is DOWN, and the Ceph status will be "WARN" as soon as any OSD is DOWN. There will be no email alerts, though. You'll have to monitor Ceph somehow, or at least configure the MGR alerts module...
  • VictorSTS
    Because that's the minimum ratio of OSDs that will be kept IN no matter what, not the ratio that decides when to trigger a rebalance, which happens as soon as an OSD is marked OUT.
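To make the distinction concrete, a small arithmetic sketch of what a minimum-IN ratio implies. The OSD count is invented, and 0.75 is Ceph's documented default for `mon_osd_min_in_ratio`; check your cluster's actual value.

```shell
# Illustrative only: 12 OSDs and a min-in ratio of 0.75.
total_osds=12
min_in_ratio=0.75
# The MONs will refuse to automatically mark OSDs OUT below this floor:
min_in=$(awk -v t="$total_osds" -v r="$min_in_ratio" 'BEGIN { printf "%d", t * r }')
echo "MONs will keep at least $min_in of $total_osds OSDs IN"
```

With these numbers, at least 9 OSDs stay IN regardless of failures; the ratio says nothing about when rebalancing starts.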
  • VictorSTS
    In a 3 node cluster with size=3, min_size=2 and the default replica rule, Ceph won't rebalance anything if one node fails. To comply with the default rule "three replicas on three OSDs located in three different servers" you need 3 servers, if...
  • VictorSTS
    VictorSTS replied to the thread File restore and symlinks.
    Well, I didn't say it was solved, but checking bug report 4995, "Improve file restore .zip format to include symlinks", it seems to be solved with v4.0.19 of PBS [1]. That means it should work when restoring from PBS. It doesn't mention it...
  • VictorSTS
    You need to understand how backup works. Specifically in stop mode:
    - The VM is running.
    - PVE tries to shut down the VM:
      - If the QEMU Guest Agent is configured and running, the shutdown signal is issued through it.
      - Else, it sends an ACPI signal, which the guest OS may or...
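The stop-mode flow described above is what a command like the following triggers; the VM ID and storage name are placeholders.

```shell
# Stop-mode backup: PVE shuts the guest down cleanly, starts the backup
# task, then boots the guest again while the backup continues.
vzdump 100 --mode stop --storage pbs-datastore
```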
  • VictorSTS
    I mean a "RAID0 of two RAIDZ2 vdevs". Something like this:
    zpool create tank \
        raidz2 disk1 disk2 disk3 disk4 disk5 disk6 disk7 \
        raidz2 disk8 disk9 disk10 disk11 disk12 disk13 disk14
    That's true, as I mentioned above: That happens...
  • VictorSTS
    Good point!! I did assume OP had at least a mirror of NVMe for the special device. It should be a 3-way mirror to give it the same redundancy as the HDD part. None of those settings will really help increase performance on your big zpool on verify...
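Adding a special device as a 3-way mirror could look like the sketch below. Pool and device names are placeholders, and note that a special vdev generally cannot be removed from a pool with RAIDZ top-level vdevs, so double-check before running anything like this.

```shell
# Placeholder pool and NVMe device names; a 3-way mirror tolerates two
# device failures, matching RAIDZ2's fault tolerance.
zpool add tank special mirror nvme0n1 nvme1n1 nvme2n1
```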
  • VictorSTS
    Find out yourself with your hardware:
    apt install sysstat
    Start two terminals:
    iostat -dx 2
    top -H
    Increase readers until the load on iostat for every disk is ~90% (or some lower limit to leave headroom for other activities on the same zpool)...
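The ~90% threshold can be eyeballed in the iostat output, or filtered automatically. A toy example on fabricated data (the column layout is simplified; real `iostat -dx` output has more columns, with %util last):

```shell
# Fabricated iostat-like lines: device name first, %util last.
printf 'sda 4.0 120.0 91.3\nsdb 3.5 110.0 45.0\n' |
    awk '$NF > 90 { print $1 " is near saturation (" $NF "% util)" }'
```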
  • VictorSTS
    VictorSTS replied to the thread Proxmox update from 8.4 to 9.
    You must first upgrade to the latest 8.4.x and Ceph 19.2.3 (Ceph Reef isn't supported in PVE 9). It's clearly stated in the upgrade docs [1]. The Ceph upgrade docs are here [2]. Once you are on the latest 8.4.x, and...
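The usual sequence looks roughly like this sketch; always follow the linked upgrade docs rather than these commands alone.

```shell
# On each node, while still on PVE 8.x:
apt update && apt dist-upgrade   # reach the latest 8.4.x packages
pve8to9 --full                   # upgrade checklist script shipped with PVE 8.4
# Then switch the APT repositories to the PVE 9 / Debian Trixie lines as per
# the upgrade guide, and run the actual dist-upgrade.
```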
  • VictorSTS
    Yes, it can be run on a live VM in the sense that the command runs, i.e. it doesn't check whether the given rbd image has an owner/lock, so we can suppose it's safe. As you mention, the Ceph docs don't explicitly say whether it can be run live or not. That said, I...
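For reference, the command in question takes the image spec directly; the pool and image names below are placeholders.

```shell
# Reclaim unused/zeroed extents of an RBD image (placeholder pool/image).
rbd sparsify mypool/vm-100-disk-0
```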
  • VictorSTS
    The problem could be that your "trim" stopped working at some point for some reason (e.g. discard wasn't ticked in the VM's disk configuration) and, even if fstrim tries to discard the whole free space, the underlying storage stack will only act on...
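A quick way to re-check both ends of that stack; the VM ID is a placeholder.

```shell
# On the PVE host: confirm discard is enabled on the virtual disk(s).
qm config 100 | grep discard

# Inside the guest: trim all mounted filesystems, verbosely, to see
# how much space is actually reported as discarded.
fstrim -av
```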
  • VictorSTS
    For Windows to properly report used memory to PVE you need:
    - The VirtIO Balloon driver and service installed and running (installed and enabled automatically by the VirtIO ISO installer). Don't confuse this with the QEMU Guest Agent, which does different...
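On the PVE side, ballooning is configured per VM; the VM ID and memory sizes below are placeholders.

```shell
# Placeholder VM ID; values in MiB. With balloon < memory, the guest can
# be shrunk down to the balloon target under host memory pressure.
qm set 100 --memory 4096 --balloon 2048
```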
  • VictorSTS
    IIRC, unfortunately pre/post migration hook scripts aren't implemented yet [1]. The source PVE will have no clue the VM is no longer running in it. [1] https://bugzilla.proxmox.com/show_bug.cgi?id=1996
  • VictorSTS
    Hi @RoxyProxy, Since questions around pricing, company future, and so on came up, we felt it was appropriate to chime in. First, we'd like to thank the Blockbridge customers earlier in the thread for sharing their thoughts. To be honest...
  • VictorSTS
    At https://packages.debian.org/forky/amd64/zfsutils-linux/filelist I can see there are /usr/bin/zarcstat and /usr/bin/zarcsummary. Maybe those were renamed in the newer version? What about man zarcstat? P.S. Indeed...