Search results

  1. gurubert

    [TUTORIAL] FabU: can I use Ceph in a _very_ small cluster?

    There is a neat calculator at https://florian.ca/ceph-calculator/ that will show you how to set the nearfull ratio for a specific number of disks and nodes.
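
    For reference, the rough arithmetic behind such a calculator (my own sketch, not the tool's actual code) looks like this, assuming equal-sized nodes and the default full ratio of 0.95:

    ```python
    # Sketch: after one node fails, its data must be recovered onto the
    # remaining nodes, so each node may only be filled to (nodes-1)/nodes
    # of the full ratio before the failure.
    def suggested_nearfull_ratio(nodes: int, full_ratio: float = 0.95) -> float:
        if nodes < 2:
            raise ValueError("need at least 2 nodes")
        return full_ratio * (nodes - 1) / nodes

    for n in (3, 4, 5, 7):
        print(f"{n} nodes -> nearfull ratio around {suggested_nearfull_ratio(n):.2f}")
    ```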
  2. gurubert

    Ceph Ruined My Christmas

    Check all interfaces to see whether the MTU has been reset to 1500. Do not mix MTUs in the same Ethernet segment.
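
    A quick way to spot a mismatch is to list every interface's MTU; a minimal sketch reading the standard Linux sysfs files:

    ```python
    # List the MTU of every network interface so a node that fell back to
    # 1500 stands out from the rest of the segment (e.g. the others at 9000).
    from pathlib import Path

    for iface in sorted(Path("/sys/class/net").iterdir()):
        mtu = (iface / "mtu").read_text().strip()
        print(f"{iface.name}: MTU {mtu}")
    ```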
  3. gurubert

    Ceph min_size 1 for Elasticsearch / MySQL Clusters

    All these applications already replicate their data at the application level. They do not need a storage system that does this. Let these VMs run on local storage and you will get way better performance than Ceph with size=1.
  4. gurubert

    Using Ceph as Storage for K8S

    https://rook.io/ can also be used to integrate an external Ceph cluster into Kubernetes.
  5. gurubert

    Procedure for cycling CEPH keyrings cluster-wide

    IMHO this question should be asked on the Ceph mailing list.
  6. gurubert

    Migrating Ceph from 3 small OSDs to 1 large OSD per host?

    If you keep the smaller disks, the larger ones will get 8 times the IOPS, because size is one of the factors in Ceph's data-distribution algorithm. The NVMe disks may be able to handle that.
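
    Back-of-the-envelope illustration (my own sketch): CRUSH weights default to the disk capacity, so data and therefore IOPS land on the OSDs roughly in proportion to their size.

    ```python
    # Example host: three old 1 TB OSDs plus one new 8 TB NVMe OSD.
    # Each OSD receives data (and thus IOPS) roughly in proportion to its
    # CRUSH weight, which defaults to the capacity.
    osd_sizes_tb = {"osd.0": 1, "osd.1": 1, "osd.2": 1, "osd.3": 8}

    total = sum(osd_sizes_tb.values())
    for osd, size in osd_sizes_tb.items():
        print(f"{osd}: ~{size / total:.0%} of the data and IOPS")
    # The 8 TB OSD ends up with ~73% of the traffic, about 8x each small OSD.
    ```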
  7. gurubert

    Migrating Ceph from 3 small OSDs to 1 large OSD per host?

    Usually Ceph will move the data for you. Before you bring down a host, drain and remove its OSDs. After swapping the disks, create a new OSD on the new NVMe and Ceph will happily use it. After doing this on all nodes, all your data will be on the new disks.
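
    As a sketch, the drain-and-replace flow per host could be scripted roughly like this (hypothetical helper; the OSD IDs are placeholders, and on Proxmox VE the same steps can be done via the GUI or pveceph):

    ```python
    # Hedged sketch of draining a host's OSDs with the plain ceph CLI.
    import subprocess, time

    osds_on_host = ["0", "1", "2"]  # assumption: the OSD IDs on the host being rebuilt

    for osd in osds_on_host:
        subprocess.check_call(["ceph", "osd", "out", osd])  # stop placing new data on it

    # wait until all PGs have been recovered elsewhere
    while subprocess.call(["ceph", "osd", "safe-to-destroy"] + osds_on_host) != 0:
        time.sleep(60)

    # after stopping the OSD services on the host, remove the OSDs from the cluster
    for osd in osds_on_host:
        subprocess.check_call(["ceph", "osd", "purge", osd, "--yes-i-really-mean-it"])
    # now swap the disks, create the OSD on the new NVMe and let Ceph rebalance
    ```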
  8. gurubert

    Cluster suggestion

    This is why I recommend a minimum of 5 nodes for Ceph.
  9. gurubert

    Ceph Crush map question

    The rule by itself does not matter if there is no pool using it. You could re-configure all your pools to use the first rule "replicated_rule" if you only have the same type of SSDs in the cluster, but it may cause some backfilling traffic. After that you could remove both rules that...
  10. gurubert

    Cluster suggestion

    You would need to have 5 nodes with Ceph OSDs to sustain 2 failed nodes. And even then they should not fail at the same time.
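
    The reasoning in numbers (my own sketch, assuming the default size=3 replicated pools and one monitor per node):

    ```python
    # After the failed nodes are gone you still need three hosts for the
    # three replicas and a majority of the monitors for quorum.
    def survives(nodes: int, failed: int, size: int = 3) -> bool:
        surviving = nodes - failed
        return surviving >= size and surviving > nodes // 2

    for n in (3, 4, 5):
        print(f"{n} nodes, 2 failed:", "ok" if survives(n, 2) else "degraded or down")
    # Only the 5-node cluster keeps 3 hosts for the replicas and 3 of 5 monitors.
    ```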
  11. gurubert

    Ceph Autoscaler

    That sounds like a bug, though, doesn't it? Why would the field not show the current value?
  12. gurubert

    Enable NVMEoF Ceph Pool With Proxmox VE managed Ceph

    Even with a cephadm orchestrated cluster the NVMe-oF feature is currently not production ready. The documentation is ahead of the released code here.
  13. gurubert

    Proxmox 3 Node Cluster with CEPH - proper network configuration for VM failover?

    You cannot fail over in a WAN setup with VPN. The latencies are too high to set up an HA cluster.
  14. gurubert

    Cannot find the Ceph bottleneck

    You only have two OSDs in the device class "large_ssd". How is that supposed to work?
  15. gurubert

    Ceph Performance Question

    Because that's a single client. As you can see, performance scales up well with multiple clients.
  16. gurubert

    Ceph Stretch cluster - Crush rules assistance

    Sorry, sometimes the Ceph documentation describes features that have not been released yet.
  17. gurubert

    Ceph Stretch cluster - Crush rules assistance

    I would replace "firstn 3" with "firstn 0" to make the rules more generic. I do not know if it's possible to remove the global stretch mode from a cluster. Maybe you should ask on the Ceph mailing list.
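
    For illustration, a generic two-datacenter rule along the lines of the stretch-mode documentation would look roughly like this (a sketch based on the docs, not tested against this particular cluster):

    ```
    rule stretch_rule {
        id 2
        type replicated
        step take default
        step choose firstn 0 type datacenter
        step chooseleaf firstn 2 type host
        step emit
    }
    ```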
  18. gurubert

    Ceph Stretch cluster - Crush rules assistance

    You have enabled stretch mode for the whole cluster. This means that all PGs from all pools need to be distributed across both DCs before they become active. Quoting the docs https://docs.ceph.com/en/squid/rados/operations/stretch-mode/ : What you want can be achieved by enabling stretch...
  19. gurubert

    True Mesh Protocol

    The whole point of routing protocols is to always find the shortest or "cheapest" path through the network. As far as I know, what you are trying to achieve is simply not possible.
  20. gurubert

    Proxmox VE 8.3 released, support for Squid but not recommended?

    Ceph Squid is still a point-zero release (19.2.0). That may be why it's not recommended.