Search results

  1. gurubert

    Proxmox with 48 nodes

    If you need more than 32 nodes for compute, maybe you have outgrown Proxmox VE and should look at larger systems like OpenStack.
  2. gurubert

    Mehrere RBD Devices mit unterschiedlicher Größe in Proxmox CEPH einrichten

    That is indeed a use case for multiple pools with quotas. In a pool that is added as storage in Proxmox, each institute can then store several VMs of its own. Access permissions for this can also be managed in Proxmox.
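
    A minimal sketch of that setup; the pool name institute-a and the 10 TiB quota are placeholders, not values from the thread:

    ```
    # create a replicated pool for one institute and cap it with a quota
    ceph osd pool create institute-a 64
    ceph osd pool application enable institute-a rbd
    ceph osd pool set-quota institute-a max_bytes 10995116277760   # 10 TiB

    # add the pool as an RBD storage in Proxmox VE
    pvesm add rbd institute-a --pool institute-a --content images
    ```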
  3. gurubert

    Mehrere RBD Devices mit unterschiedlicher Größe in Proxmox CEPH einrichten

    The RBD is the virtual disk for a VM. It always has a fixed size. A Ceph pool can hold multiple RBDs and is in principle as large as the entire cluster, unless it is given a quota. Multiple pools share the total capacity of the cluster. I...
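
    To illustrate the difference, a quick sketch with placeholder names: the RBD image gets a fixed size, while the pool is only limited by the cluster (or by a quota as above):

    ```
    # the image has a fixed size ...
    rbd create institute-a/vm-100-disk-0 --size 100G
    rbd info institute-a/vm-100-disk-0

    # ... while the pool's remaining space is shown by ceph df (MAX AVAIL)
    ceph df
    ```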
  4. gurubert

    Ceph Migration from Nautilus to Reef: Incompatibility Issues

    You can only skip one Ceph release at a time when upgrading. So from 14 you can upgrade to 16, and only after that to 18. The upgrade procedures are documented on https://pve.proxmox.com/
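
    Before each step it is worth checking which release the daemons are actually running, for example:

    ```
    # shows the Ceph release of every running MON, MGR, OSD and MDS
    ceph versions
    ```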
  5. gurubert

    Proxmox Network Load Balancer?

    Proxmox is a virtualization platform, not a cloud platform.
  6. gurubert

    SDN / Ceph Private Network

    Why would you want to tunnel storage traffic through an overlay like VXLAN?
  7. gurubert

    [SOLVED] Ceph not working / showing HEALTH_WARN

    Usually when building a Ceph cluster one starts with the MONs and not the OSDs.
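
    On Proxmox VE the order roughly looks like this (a sketch; the network and device names are placeholders):

    ```
    # initialize Ceph and create the monitors first ...
    pveceph init --network 10.10.10.0/24
    pveceph mon create

    # ... and only then create the OSDs on the data disks
    pveceph osd create /dev/sdb
    ```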
  8. gurubert

    Ceph DB/WAL on SSD

    With m=1 you have the same redundancy as with size=2 and min_size=1, or, in other words, a RAID5. You will lose data in this setup. You could run with k=2 and m=2, but you will still have to cope with the EC overhead (more CPU and more network communication).
  9. gurubert

    Ceph DB/WAL on SSD

    With 5 nodes you can have k=2 and m=2 which gives you 200% raw usage instead of 300% with size=3 replicated pools. But this is still a very small cluster for erasure coding.
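
    A sketch of such a profile and pool, with placeholder names; for RBD the EC pool is normally used as a data pool next to a replicated metadata pool:

    ```
    ceph osd erasure-code-profile set ec-2-2 k=2 m=2 crush-failure-domain=host
    ceph osd pool create ec-data 64 64 erasure ec-2-2
    ceph osd pool set ec-data allow_ec_overwrites true   # needed for RBD on EC
    ceph osd pool application enable ec-data rbd
    ```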
  10. gurubert

    Ceph DB/WAL on SSD

    EC with only 4 nodes is not useful. You need at least 8 or 10 nodes to get useful k and m values for erasure coding.
  11. gurubert

    CEPH cache disk

    The total capacity of the cluster is defined as the sum of all OSDs. This number only changes when you add or remove disks. Do not confuse that with the maximum available space for pools which depends on replication factor or erasure code settings and currently used capacity.
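
    Both numbers are visible in ceph df: the raw cluster size under RAW STORAGE, and the per-pool MAX AVAIL, which already accounts for replication/EC settings and current usage:

    ```
    # raw cluster capacity and per-pool MAX AVAIL
    ceph df

    # per-OSD breakdown
    ceph osd df tree
    ```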
  12. gurubert

    Ceph DB/WAL on SSD

    So you will lose 25% capacity in case of a dead node. Make sure to set the nearfull ratio to 0.75 so that you get a warning when OSDs have less than 25% free space. https://bennetgallein.de/tools/ceph-calculator
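
    A sketch of how to set and verify that ratio:

    ```
    # warn when OSDs pass 75% usage (i.e. have less than 25% free)
    ceph osd set-nearfull-ratio 0.75

    # verify the full/backfillfull/nearfull ratios
    ceph osd dump | grep ratio
    ```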
  13. gurubert

    Proxmox Ceph Cluster problems after Node crash

    Do not run pools with size=2 and min_size=1. You will lose data.
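
    A sketch of checking and fixing the pool settings, with a placeholder pool name:

    ```
    # show size and min_size of all pools
    ceph osd pool ls detail

    # move a pool back to the safe defaults
    ceph osd pool set mypool size 3
    ceph osd pool set mypool min_size 2
    ```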
  14. gurubert

    Datacenter und/oder Cluster mit local storage only

    I only know this problem from OCFS2.
  15. gurubert

    Ceph DB/WAL on SSD

    This is correct. But depending on how many nodes you have this is not critical.
  16. gurubert

    Empty Ceph pool still uses storage

    The rbd_data.* objects seem to be leftovers. As long as there are no rbd_id.* and rbd_header.* objects, there are no RBD images in the pool any more. The easiest way (if you are really sure) would be to just delete the whole pool.
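
    A sketch of the checks and the deletion, with a placeholder pool name; deleting a pool is irreversible:

    ```
    # confirm there are no RBD images left in the pool
    rbd ls -p mypool
    rados -p mypool ls | grep -E 'rbd_(id|header)'

    # if you are really sure: allow pool deletion and remove it
    ceph config set mon mon_allow_pool_delete true
    ceph osd pool delete mypool mypool --yes-i-really-really-mean-it
    ```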
  17. gurubert

    Datacenter und/oder Cluster mit local storage only

    Since 6.14, aio=threads is no longer necessary.
  18. gurubert

    strange ceph osd issue

    Buy a non-defective enclosure.
  19. gurubert

    Ceph Storage Unknown Status Error

    You need to configure the old IP addresses from the Ceph public_network on the interfaces before you can do anything with the Ceph cluster.
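
    A sketch of what that looks like; the network, address and interface name are placeholders:

    ```
    # look up the Ceph public_network
    grep public_network /etc/ceph/ceph.conf

    # temporarily bring an address from that network up on the node
    ip addr add 192.168.10.11/24 dev eno1
    ```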