Recent content by gurubert

  1. gurubert

    Adding a Ceph worker on a RaspberryPi to the VE

    Those are already too few nodes for Ceph on their own.
  2. gurubert

    Adding a Ceph worker on a RaspberryPi to the VE

    What does the rest of the cluster look like? I would assume that a Pi has too little power for an OSD.
  3. gurubert

    Adding a Ceph worker on a RaspberryPi to the VE

    What is supposed to run on the Pi?
  4. gurubert

    Proxmox QDevice with a 16 node cluster

    You can assign 2 or more votes to the QDevice, which could help by providing more votes. E.g.: each PVE node has one vote and the QDevice has 3. Total votes are 19, so the majority is 10. 9 PVE hosts can fail and there is still a majority of votes available. But if the QDevice fails only 6 other PVE...
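    The vote arithmetic in this post can be sketched as a few lines of Python; the 16-node cluster and the 3-vote QDevice are the example's own assumptions, not fixed Proxmox defaults.

    ```python
    # Minimal sketch of quorum arithmetic for the example above:
    # 16 PVE nodes with one vote each, plus a QDevice holding 3 votes.
    def majority(total_votes: int) -> int:
        # Quorum requires a strict majority of all configured votes.
        return total_votes // 2 + 1

    pve_nodes = 16                      # one vote per PVE node
    qdevice_votes = 3                   # assumption from the post
    total = pve_nodes + qdevice_votes   # 19 votes in total
    need = majority(total)              # 10 votes needed for quorum

    # Surviving votes must stay >= need, so:
    fail_with_qdevice = (pve_nodes + qdevice_votes) - need   # 9 PVE nodes may fail
    fail_without_qdevice = pve_nodes - need                  # only 6 if the QDevice is down

    print(need, fail_with_qdevice, fail_without_qdevice)  # 10 9 6
    ```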
  5. gurubert

    Proxmox vulnerabilities according to Wazuh

    CVE-2026-32746 affects telnetd. Do you have telnetd installed or even enabled on any of your machines?
  6. gurubert

    OCFS2 (unsupported): question about space usage

    Why would multipathing not pass on the discard command? Besides, the discard should make the qcow2 file on OCFS2 smaller. At that level this has nothing to do with multipathing to the SAN yet. If the SAN storage itself thin-provisions the LUN, a discard from the VM...
  7. gurubert

    Ceph - VM with high IO wait

    By default /etc/ceph/ceph.conf is a symlink to /etc/pve/ceph.conf which makes it the same on each cluster node. AFAIK it is easier to use ceph config set to set the values in the config db for each Proxmox node. ceph config set client.HOSTNAME crush_location az=az1
  8. gurubert

    OCFS2 (unsupported): question about space usage

    I would assume so too.
  9. gurubert

    Ceph - VM with high IO wait

    What is the network latency between the nodes, especially between the different AZs?
  10. gurubert

    tips for shared storage that 'has it all' :-)

    AFAIK NetApp has one of the fastest and most stable NFS server implementations in the industry. If you already have that, I would definitely run some benchmarks on it.
  11. gurubert

    Establishing Proxmox VE as a Cross-Platform Hypervisor with Full RHEL/EL9 Ecosystem Support

    And Red Hat already has a virtualization offering for their enterprise customers. I doubt that anybody really entrenched in the "Enterprise" RHEL ecosystem is looking elsewhere.
  12. gurubert

    Hostname cloud-init

    We also use cloud-init and have no issues. The hostname was always set to the name of the VM. But we use an FQDN as the VM name.
  13. gurubert

    [SOLVED] Erasure Code Pool WAL and RocksDB usage

    But it will only be one Proxmox node? And you need to mount the same filesystem in multiple VMs? Then use ZFS with NFS/SMB shares on Proxmox.
  14. gurubert

    [SOLVED] Erasure Code Pool WAL and RocksDB usage

    Then I would suggest using ZFS locally. Ceph is a distributed storage system that works best with five nodes or more.
  15. gurubert

    [SOLVED] Erasure Code Pool WAL and RocksDB usage

    Just out of curiosity: is this a test setup?