Search results

  1. gurubert

    strange ceph osd issue

    Buy a non-defective enclosure.
  2. gurubert

    Ceph Storage Unknown Status Error

    You need to configure the old IP addresses from the Ceph public_network on the interfaces before you can do anything with the Ceph cluster.
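
    For example, re-adding one of the old addresses temporarily so the MONs are reachable again (address and NIC name are placeholders, not from the thread):

    ```
    ip addr add 192.168.1.11/24 dev eno1
    ```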
  3. gurubert

    Ceph Storage Unknown Status Error

    You cannot just change IPs in ceph.conf. The first step is to add the new network to the Ceph public_network setting, then add new MONs with the new IPs to the cluster, and after that remove the old MONs. Only once that has succeeded can the old network be removed from public_network and the...
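
    One way that order of operations can look with Proxmox's pveceph tooling, replacing the MONs one node at a time while the others keep quorum (subnets and node name are placeholders):

    ```
    # allow old and new subnet side by side during the migration
    ceph config set global public_network 192.168.1.0/24,10.10.10.0/24
    # per node: recreate its MON so the new one binds to the new network
    pveceph mon destroy pve1
    pveceph mon create
    # once all MONs use new addresses, drop the old subnet again
    ceph config set global public_network 10.10.10.0/24
    ```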
  4. gurubert

    Custom Rules - Ceph cluster

    You can change the crush_rule for a pool. This will not cause issues for the VMs, apart from possibly slower performance while the cluster reorganizes the data.
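
    For reference, such a change is a single pool setting; the pool and rule names below are placeholders:

    ```
    ceph osd crush rule ls                            # list the available rules
    ceph osd pool set vm-pool crush_rule replicated_nvme
    ceph -s                                           # watch the backfill progress
    ```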
  5. gurubert

    Ceph Storage question

    You will only lose the affected PGs and their objects. This will lead to corrupted files (when the data pool is affected) or a corrupted filesystem (if the metadata pool is affected). Depending on which directory is corrupted you may not be able to access a large part of the CephFS any more...
  6. gurubert

    Ceph rebuild on a cluster that borked after ip change?

    You may be able to extract the cluster map from the OSDs following this procedure: https://docs.ceph.com/en/squid/rados/troubleshooting/troubleshooting-mon/#recovery-using-osds But as you also changed the IP addresses you will have to change them manually in the MON map before being able to...
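
    As a sketch of the address fix after that recovery, the MON entries can be edited with monmaptool before the monitor is started; names and IPs here are examples, not from the thread:

    ```
    monmaptool --print monmap                      # inspect the recovered map
    monmaptool --rm pve1 monmap                    # drop the entry with the old IP
    monmaptool --add pve1 10.10.10.1:6789 monmap   # re-add it with the new IP
    ceph-mon -i pve1 --inject-monmap monmap        # inject before starting the MON
    ```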
  7. gurubert

    External Ceph Pool size limit and best practice

    IMHO you do not need pool separation between VMs for security reasons. You may want to configure multiple pools for quota or multiple Proxmox clusters. Or if you want to set different permissions for users in Proxmox. AFAIK Proxmox does not show the quota max_size value.
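
    If a per-pool quota is wanted, it is set on the Ceph side; pool name and size are examples:

    ```
    ceph osd pool set-quota vm-pool-a max_bytes 2199023255552   # 2 TiB
    ceph osd pool get-quota vm-pool-a
    ```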
  8. gurubert

    Why is there actually no configuration import/export?

    Build the systems with Ansible or similar and the config is already stored "externally" anyway.
  9. gurubert

    Support for clustering using wireguard

    Keep in mind that clustering only works with latencies below 5 milliseconds.
  10. gurubert

    Proxmox vlan on trunk switch interface

    If vmbr0 is VLAN-aware.
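
    A minimal VLAN-aware bridge in /etc/network/interfaces looks roughly like this (the NIC name is a placeholder); the guest NICs then simply carry their VLAN tag in the VM config:

    ```
    auto vmbr0
    iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
    ```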
  11. gurubert

    ceph osd failure alert

    It's hard to tell from afar. Try it. And maybe you should move the conversation to the Checkmk forum. https://forum.checkmk.com/
  12. gurubert

    ceph osd failure alert

    Does the host mepprox01 have the Checkmk agent installed? Is it configured to query the agent?
  13. gurubert

    ceph osd failure alert

    What version of Checkmk are you running? Starting with 2.4 my extension was incorporated upstream and does not need to be installed separately any more. The mk_ceph.py agent plugin (for Python 3) needs to be deployed to /usr/lib/check_mk_agent/plugins on all Ceph nodes, not on the monitoring...
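
    Deploying the plugin by hand could look like this (the Agent Bakery can do the same); run it on every Ceph node:

    ```
    install -m 755 mk_ceph.py /usr/lib/check_mk_agent/plugins/
    check_mk_agent | grep '<<<'    # the plugin's sections should now show up
    ```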
  14. gurubert

    Ceph on HPE DL380 Gen10+ not working

    How should the second node decide that the first one is really down with only two replicas of the data? This setup will never work.
  15. gurubert

    Ceph : number of placement groups for 5+ pools on 3hosts x 1osd

    You know that Proxmox has storage live migration?
  16. gurubert

    Ceph : number of placement groups for 5+ pools on 3hosts x 1osd

    Why would you need so many pools for such a small cluster?
  17. gurubert

    FSID CLUSTER CEPH

    This is not possible as the CephX keys for the OSDs and all the other internal config will be missing. What happened to your cluster? If you still have all the OSDs you could try to restore the cluster map from the copies stored there and start a new MON with that. Read the Ceph documentation...
  18. gurubert

    Moving Ceph's WAL and RocksDB to a separate device

    The WAL usually lives on the same device as the RocksDB. It is very uncommon to configure a separate WAL device. Ceph always uses block devices and sets them up itself.
  19. gurubert

    Moving Ceph's WAL and RocksDB to a separate device

    That is a very old recommendation from the early days of BlueStore. The RocksDB has become more flexible since then. For RBD and CephFS usage, 40-70 GB are sufficient; if S3 is added on top, it can be up to 300 GB. The RadosGW uses a lot of OMAP data.
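
    As a sketch, a DB device in that size range is attached when the OSD is created, here with ceph-volume (device names are placeholders; the NVMe partition would be sized at roughly 70 GB for RBD/CephFS use). Proxmox's pveceph osd create offers a corresponding DB-device option.

    ```
    ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1
    ```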
  20. gurubert

    How to install Windows 10 OS and Proxmox VE in a separate drive?

    The Proxmox installer is not built for that use case. You could install Debian alongside Windows, make it dual-boot and then install Proxmox on Debian.
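
    The Debian-to-Proxmox step then roughly follows the Proxmox VE wiki (repository shown for Debian 12 "Bookworm"; see the wiki for the full procedure):

    ```
    echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
        > /etc/apt/sources.list.d/pve-install-repo.list
    wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg \
        -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg
    apt update && apt full-upgrade
    apt install proxmox-ve postfix open-iscsi
    ```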