Recent content by gurubert

  1. gurubert

    [SOLVED] Wie kann es sein, dass

    The files under /var/cache may be partly volatile. The directory structure certainly is not. apt clean would have cleaned up without any mishap.
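
    A minimal sketch of that cleanup on a Debian/Proxmox node (run as root):

      # empty apt's package cache; the directory structure under /var/cache stays intact
      apt clean
      # verify: the archives directory still exists but no longer holds .deb files
      ls /var/cache/apt/archives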
  2. gurubert

    Ceph NVMe-oF gateways with PVE

    NVMe-oF in Ceph works from version 20 onwards. Do not try it with reef or squid. And AFAIK you need the cephadm orchestrator to configure it.
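
    Roughly how the cephadm orchestrator is asked to deploy it; the pool name and placement are placeholders, and the exact arguments (e.g. an additional gateway group) may differ between releases:

      # create an RBD pool for the gateway and let cephadm deploy the nvmeof daemons (sketch)
      ceph osd pool create nvmeof-pool
      ceph osd pool application enable nvmeof-pool rbd
      ceph orch apply nvmeof nvmeof-pool --placement="node1 node2"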
  3. gurubert

    Ceph Squid (19.2.3) Cluster Hangs on Node Reboot - 56 NVMe OSDs - PVE 9.1.1

    No, as soon as the MONs lose quorum, i.e. the majority of MONs can no longer see each other, the cluster will stop working. And regarding the number of MONs: the cephadm orchestrator deploys 5 by default for the reasons I outlined: https://docs.ceph.com/en/latest/cephadm/services/mon/
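
    For reference, setting the MON count with the cephadm orchestrator looks roughly like this (host names are placeholders):

      # let the orchestrator run 5 MONs
      ceph orch apply mon 5
      # or pin them to specific hosts
      ceph orch apply mon --placement="host1 host2 host3 host4 host5"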
  4. gurubert

    Ceph Squid (19.2.3) Cluster Hangs on Node Reboot - 56 NVMe OSDs - PVE 9.1.1

    The current recommendation from the Ceph project is to run 5 MONs. With only three MONs you are in a high-risk situation after losing just one MON: lose another and your cluster stops. With five MONs you can lose two and the cluster will still work.
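
    The quorum arithmetic behind that (majority = floor(N/2) + 1), as a quick shell check:

      # print quorum size and tolerated MON failures for 3 and 5 monitors
      for n in 3 5; do
          echo "MONs=$n quorum=$(( n/2 + 1 )) tolerated failures=$(( n - n/2 - 1 ))"
      done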
  5. gurubert

    Ceph Squid (19.2.3) Cluster Hangs on Node Reboot - 56 NVMe OSDs - PVE 9.1.1

    Is a specific pool affected by shutting down one host? Your CRUSH rules are a wild mix.
  6. gurubert

    Ceph Squid (19.2.3) Cluster Hangs on Node Reboot - 56 NVMe OSDs - PVE 9.1.1

    Please post the output of ceph status, ceph mon dump, ceph config dump, ceph osd df tree, ceph osd crush rule dump and ceph osd pool ls detail, plus ip addr show from each node.
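
    A hypothetical one-liner to collect all of that into a single file per node (the file name is just an example):

      # gather the requested diagnostics into one file named after the node
      { ceph status; ceph mon dump; ceph config dump; ceph osd df tree; \
        ceph osd crush rule dump; ceph osd pool ls detail; ip addr show; } > "ceph-report-$(hostname).txt"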
  7. gurubert

    Ceph Performance Problem trotz guter Ceph Benchmarks

    Does the Windows VM use virtio-scsi-single?
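
    How to check on the PVE host; the VM ID 100 is a placeholder:

      # show the SCSI controller type configured for the VM
      qm config 100 | grep scsihw
      # the question above is whether this prints: scsihw: virtio-scsi-single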
  8. gurubert

    Recommendation for software-defined storage for Proxmox on OVH (with Veeam snapshot integration)

    You may have a look at the DRBD integration for Proxmox: https://linbit.com/blog/linstor-setup-proxmox-ve-volumes/
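
    For orientation only, a LINSTOR-backed storage entry in /etc/pve/storage.cfg looks roughly like this; the storage name, controller address and resource group are placeholders, see the linked LINBIT guide for the authoritative format:

      drbd: linstor-storage
          content images,rootdir
          controller 192.168.0.10
          resourcegroup pve-rg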
  9. gurubert

    Ceph OSD woes after NVMe hotplug

    Before re-inserting the NVMe you should remove the remains of the LV, which will still be in the kernel after you pulled it. Only after that will the LV be re-attached cleanly. This is clearly an edge case, as normally you will pull a defective drive and replace it with a new empty one where a new...
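
    A hedged sketch of that cleanup; the device-mapper name is an example, yours will carry the VG/LV UUIDs of the pulled OSD:

      # list the stale Ceph LV mappings the kernel still knows about
      dmsetup ls | grep ceph
      # remove the stale mapping that belongs to the pulled drive
      dmsetup remove ceph--<vg-uuid>-osd--block--<lv-uuid>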
  10. gurubert

    Ceph HEALTH_ERR

    ceph crash ls-new, ceph crash info and ceph crash archive are the commands you need.
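
    Typical usage, with the crash ID taken from the list output:

      # list crashes that have not been acknowledged yet
      ceph crash ls-new
      # show details of one crash
      ceph crash info <crash-id>
      # acknowledge it so the health warning clears
      ceph crash archive <crash-id>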
  11. gurubert

    Ceph will not mount after apt update

    Updates should be done node by node and include a node reboot if necessary (new kernel, systemd etc.).
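
    A rough sketch of that procedure on one node at a time (setting noout is an assumption about a hyperconverged Ceph node, adjust to your setup):

      # optional on Ceph nodes: avoid rebalancing during the planned reboot
      ceph osd set noout
      apt update && apt full-upgrade
      # reboot if a new kernel, systemd etc. came in, then wait until the cluster is healthy
      reboot
      # afterwards: ceph osd unset noout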
  12. gurubert

    [SOLVED] Ethernet (ip a) how to figure out to which host they belong

    The VMID or the CTID is the first number in the interface name. veth129i0 is the first interface of container 129.
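
    A hypothetical one-liner that decodes the naming convention for all veth interfaces on a host:

      # list veth interfaces and extract container ID and interface index from the name
      ip -o link show type veth | awk -F': ' '{print $2}' \
          | sed -E 's/^veth([0-9]+)i([0-9]+).*/CT \1, net\2/'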
  13. gurubert

    Ceph - Reduced data availability: 3 pgs inactive

    These PGs are lost if they stay inactive even after all of your OSDs are online again. You will have to remove them. Look in the Ceph documentation under PG Troubleshooting.
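
    A hedged sketch of what that boils down to for PGs whose data is gone; the PG ID 2.1f is a placeholder, and force-create-pg recreates the PG empty, so anything that was only stored in it is lost for good:

      # identify the inactive PGs and where they were last seen
      ceph health detail
      ceph pg 2.1f query
      # if no copy of the PG exists any more, recreate it as an empty PG (data loss!)
      ceph osd force-create-pg 2.1f --yes-i-really-mean-it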