Search results

  1. gurubert

    CEPH cache disk

    The total capacity of the cluster is defined as the sum of the capacities of all OSDs. This number only changes when you add or remove disks. Do not confuse that with the maximum available space for pools, which depends on the replication factor or erasure-code settings and the currently used capacity.
  2. gurubert

    Ceph DB/WAL on SSD

    So you will lose 25% of the capacity if a node dies. Make sure to set the nearfull ratio to 0.75 so that you get a warning when OSDs have less than 25% free space (see the sketch after this list). https://bennetgallein.de/tools/ceph-calculator
  3. gurubert

    Proxmox Ceph Cluster problems after Node crash

    Do not run pools with size=2 and min_size=1. You will lose data. (See the sketch after this list for how to raise these settings.)
  4. gurubert

    Datacenter and/or cluster with local storage only

    I only know this problem from OCFS2.
  5. gurubert

    Ceph DB/WAL on SSD

    This is correct. But depending on how many nodes you have this is not critical.
  6. gurubert

    Empty Ceph pool still uses storage

    The rbd_data.* objects seem to be leftovers. As long as there are no rbd_id.* and rbd_header.* objects, there are no RBD images in the pool any more. The easiest way (and only if you are really sure) would be to just delete the whole pool (see the sketch after this list).
  7. gurubert

    Datacenter and/or cluster with local storage only

    Since 6.14, aio=threads is no longer necessary.
  8. gurubert

    strange ceph osd issue

    Buy a non-defective enclosure.
  9. gurubert

    Ceph Storage Unknown Status Error

    You need to configure the old IP addresses from the Ceph public_network on the interfaces before you can do anything with the Ceph cluster.
  10. gurubert

    Ceph Storage Unknown Status Error

    You cannot just change IPs in ceph.conf. The first step is to add the new network to the Ceph public_network setting, then add new MONs with the new IPs to the cluster, and after that remove the old MONs. Only after that has been successful can the old network be removed from public_network and the...
  11. gurubert

    Custom Rules - Ceph cluster

    You can change the crush_rule for a pool. This will not cause issues for the VMs, except possibly slower performance while the cluster reorganizes the data (see the sketch after this list).
  12. gurubert

    Ceph Storage question

    You will only lose the affected PGs and their objects. This will lead to corrupted files (if the data pool is affected) or a corrupted filesystem (if the metadata pool is affected). Depending on which directory is corrupted, you may not be able to access a large part of the CephFS any more...
  13. gurubert

    Ceph rebuild on a cluster that borked after ip change?

    You may be able to extract the cluster map from the OSDs following this procedure: https://docs.ceph.com/en/squid/rados/troubleshooting/troubleshooting-mon/#recovery-using-osds But as you also changed the IP addresses you will have to change them manually in the MON map before being able to...
  14. gurubert

    External Ceph Pool size limit and best practice

    IMHO you do not need pool separation between VMs for security reasons. You may want to configure multiple pools for quotas, for multiple Proxmox clusters, or if you want to set different permissions for users in Proxmox. AFAIK Proxmox does not show the quota max_size value.
  15. gurubert

    Why is there actually no configuration import/export?

    Build the systems with Ansible or similar, and the config is already stored "externally" anyway.
  16. gurubert

    Support for clustering using wireguard

    Keep in mind that clustering only works with latencies below 5 milliseconds.
  17. gurubert

    Proxmox vlan on trunk switch interface

    If vmbr0 is VLAN-aware (see the sketch after this list).
  18. gurubert

    ceph osd failure alert

    It's hard to tell from afar. Try it. And maybe you should move the conversation to the Checkmk forum. https://forum.checkmk.com/
  19. gurubert

    ceph osd failure alert

    Does the host mepprox01 have the Checkmk agent installed? Is it configured to query the agent?
  20. gurubert

    ceph osd failure alert

    What version of Checkmk are you running? Starting with 2.4 my extension was incorporated upstream and does not need to be installed separately any more. The mk_ceph.py agent plugin (for Python 3) needs to be deployed to /usr/lib/check_mk_agent/plugins on all Ceph nodes, not on the monitoring...
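
For result 2, a minimal sketch of setting the nearfull warning threshold to 0.75; the value comes from the post, everything else (running this from a node with an admin keyring) is an assumption:

    ceph osd set-nearfull-ratio 0.75                      # warn once an OSD is more than 75% full
    ceph osd dump | grep -E 'full_ratio|nearfull_ratio'   # verify the ratios now in effect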
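
For result 3, a sketch of moving a pool to the safer size=3/min_size=2; the pool name 'vm-pool' is a placeholder, not taken from the thread:

    ceph osd pool get vm-pool size          # check the current replication settings
    ceph osd pool set vm-pool size 3        # keep three copies of every object
    ceph osd pool set vm-pool min_size 2    # stop serving I/O when fewer than two copies are available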
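
For result 6, a sketch of checking that only leftover data objects remain before deleting the pool; 'old-pool' is a hypothetical name, and deletion also requires mon_allow_pool_delete to be enabled:

    rados -p old-pool ls | grep -E 'rbd_(id|header)\.'                      # no output means no RBD images are left
    ceph osd pool delete old-pool old-pool --yes-i-really-really-mean-it    # only when you are really sure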
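
For result 10, a rough sketch of the order of operations on a Proxmox-managed Ceph cluster; the networks and MON name are made-up placeholders, and the pveceph commands assume the MONs are managed through Proxmox:

    # 1. add the new network to public_network in /etc/pve/ceph.conf, keeping the old one:
    #    public_network = 192.0.2.0/24,198.51.100.0/24
    # 2. create additional MONs that bind to the new network (on nodes without a MON yet):
    pveceph mon create
    # 3. once the new MONs have joined the quorum, remove the MONs with old addresses:
    pveceph mon destroy pve1
    ceph mon dump    # check quorum membership and addresses after every step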
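
For result 11, a sketch of switching a pool to another CRUSH rule; 'vm-pool' and 'replicated_ssd' are hypothetical names:

    ceph osd crush rule ls                                 # list the available CRUSH rules
    ceph osd pool set vm-pool crush_rule replicated_ssd    # move the pool to the new rule
    ceph -s                                                # watch the cluster reorganize the data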
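
For result 17, a sketch of a VLAN-aware vmbr0 in /etc/network/interfaces on the Proxmox node; the NIC name, addresses and VLAN range are placeholders:

    auto vmbr0
    iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports eno1           # physical NIC on the switch trunk port
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes       # allow tagged guest traffic on this bridge
        bridge-vids 2-4094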
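
For result 20, a sketch of deploying the agent plugin by hand to the path named in the post; with the Checkmk agent bakery the plugin would instead be distributed via an agent rule:

    # on every Ceph node, not on the Checkmk server:
    install -m 0755 mk_ceph.py /usr/lib/check_mk_agent/plugins/mk_ceph.py
    check_mk_agent | grep '<<<ceph'    # verify that the agent output now contains ceph sections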