Search results

  1. gurubert

    Ceph + Cloud-Init Troubleshooting

    cloud-init configures an account and the network settings inside a newly cloned VM. What is the connection to CephFS or RBD that is not working for you?
  2. gurubert

    [TUTORIAL] FabU: can I use Ceph in a _very_ small cluster?

    Yes, because the assumed failure zone is the host. If just an OSD fails, it should be replaced. In small clusters the time to replace a failed disk is more critical than in larger clusters, where the data is more easily re-replicated to the other OSDs in the remaining nodes.
  3. gurubert

    [TUTORIAL] FabU: can I use Ceph in a _very_ small cluster?

    There is a neat calculator at https://florian.ca/ceph-calculator/ that will show you how to set the nearfull ratio for a specific number of disks and nodes.
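
    Once you have a value from the calculator, it can be applied cluster-wide with the standard ratio commands; the 0.66 below is only an example value, not a recommendation.

      ceph osd set-nearfull-ratio 0.66    # warn earlier than the default 0.85
      ceph osd dump | grep ratio          # verify full/backfillfull/nearfull ratios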
  4. gurubert

    Ceph Ruined My Christmas

    Check all interfaces to see whether the MTU has been reset to 1500. Do not mix MTUs in the same Ethernet segment.
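
    A quick way to spot a mismatch (sketch; the interface name and the jumbo MTU of 9000 are just example values):

      ip -o link | awk '{print $2, $4, $5}'   # list every interface with its MTU
      ip link set dev eno1 mtu 9000           # temporary fix until made persistent in the network config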
  5. gurubert

    Ceph min_size 1 for Elasticsearch / MySQL Clusters

    All these applications already replicate their data at the application level. They do not need a storage system that does this as well. Run these VMs on local storage and you will get far better performance than Ceph with size=1.
  6. gurubert

    Using Ceph as Storage for K8S

    https://rook.io/ can also be used to integrate an external Ceph cluster into Kubernetes.
  7. gurubert

    Procedure for cycling CEPH keyrings cluster-wide

    IMHO this question should be asked on the Ceph mailing list.
  8. gurubert

    Migrating Ceph from 3 small OSDs to 1 large OSD per host?

    If you keep the smaller disks, the larger ones will receive 8 times the IOPS, because disk size is one of the factors in Ceph's data distribution algorithm. The NVMe disks may be able to handle that.
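
    A rough sketch of the reasoning, assuming 1 TB existing disks and an 8 TB NVMe (the sizes are just examples):

      # CRUSH weight defaults to the capacity in TiB, so an 8 TB OSD gets
      # roughly 8x the data of a 1 TB OSD and therefore roughly 8x the I/O.
      ceph osd df tree    # compare the WEIGHT and %USE columns per OSD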
  9. gurubert

    Migrating Ceph from 3 small OSDs to 1 large OSD per host?

    Usually Ceph will move the data for you. Before you bring down a host, drain and remove its OSDs. After swapping the disks, create a new OSD on the new NVMe and Ceph will happily use it. Once you have done this on all nodes, all your data will be on the new disks.
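
    A possible sequence per host (sketch; the OSD IDs 3-5 and the device name are placeholders):

      ceph osd out 3 4 5                        # start draining the old OSDs
      ceph osd safe-to-destroy 3 4 5            # repeat until it reports the OSDs are safe to remove
      systemctl stop ceph-osd@3                 # on the host itself; likewise for 4 and 5
      ceph osd purge 3 --yes-i-really-mean-it   # removes the OSD, its CRUSH entry and auth key
      pveceph osd create /dev/nvme0n1           # create the new OSD on the NVMe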
  10. gurubert

    Cluster suggestion

    This is why I recommend a minimum of 5 nodes for Ceph.
  11. gurubert

    Ceph Crush map question

    A rule by itself does not matter if no pool is using it. You could reconfigure all your pools to use the first rule, "replicated_rule", if you only have the same type of SSDs in the cluster, but this may cause some backfill traffic. After that you could remove both rules that...
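
    Reassigning a pool's rule is a single command per pool (sketch; "mypool" is a placeholder name):

      ceph osd pool set mypool crush_rule replicated_rule
      ceph osd pool ls detail | grep crush_rule    # verify which rule each pool uses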
  12. gurubert

    Cluster suggestion

    You would need 5 nodes with Ceph OSDs to sustain 2 failed nodes. And even then they should not fail at the same time.
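
    The rough arithmetic behind that, assuming size=3 pools and one monitor per node (five in total):

      # Quorum needs floor(5/2)+1 = 3 monitors, so 2 node failures still leave a quorum,
      # and the 3 surviving OSD hosts can still hold all 3 replicas of every PG.
      ceph mon stat    # shows which monitors are currently in quorum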
  13. gurubert

    Ceph Autoscaler

    That does sound like a bug, though, doesn't it? Why else would the field not display the current value?
  14. gurubert

    Enable NVMEoF Ceph Pool With Proxmox VE managed Ceph

    Even with a cephadm-orchestrated cluster, the NVMe-oF feature is currently not production ready. The documentation is ahead of the released code here.
  15. gurubert

    Proxmox 3 Node Cluster with CEPH - proper network configuration for VM failover?

    You cannot fail over in a WAN setup with a VPN. The latencies are too high to set up an HA cluster.
  16. gurubert

    Kann Ceph Flaschenhals nicht finden

    You also only have two OSDs in the device class "large_ssd". How is that supposed to work?
  17. gurubert

    Ceph Performance Question

    Because that's a single client. As you can see, performance scales up well with multiple clients.
  18. gurubert

    Ceph Stretch cluster - Crush rules assistance

    Sorry, sometimes the Ceph documentation describes features that have not been released yet.
  19. gurubert

    Ceph Stretch cluster - Crush rules assistance

    I would replace "firstn 3" with "firstn 0" to make the rules more generic. I do not know if it's possible to remove the global stretch mode from a cluster. Maybe you should ask on the Ceph mailing list.
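
    The change itself would be done by round-tripping the CRUSH map (sketch; file names are arbitrary, and the exact rule step depends on your map):

      ceph osd getcrushmap -o crush.bin
      crushtool -d crush.bin -o crush.txt
      # edit crush.txt: replace "firstn 3" with "firstn 0" in the affected rule steps
      crushtool -c crush.txt -o crush-new.bin
      ceph osd setcrushmap -i crush-new.bin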
  20. gurubert

    Ceph Stretch cluster - Crush rules assistance

    You have enabled stretch mode for the whole cluster. This means that all PGs from all pools need to be distributed across both DCs before they become active. Quoting the docs https://docs.ceph.com/en/squid/rados/operations/stretch-mode/ : What you want can be achieved with enabling stretch...