Search results

  1. gurubert

    Cluster suggestion

    This is why I recommend a minimum of 5 nodes for Ceph.
  2. gurubert

    Ceph Crush map question

    The rule by itself does not matter as long as no pool is using it. You could reconfigure all your pools to use the first rule, "replicated_rule", if you only have the same type of SSDs in the cluster, though that may cause some backfill traffic (a sketch follows after these results). After that you could remove both rules that...
  3. gurubert

    Cluster suggestion

    You would need to have 5 nodes with Ceph OSDs to sustain 2 failed nodes. And even then they should not fail at the same time.
  4. gurubert

    Ceph Autoscaler

    That does sound like a bug, though, doesn't it? Why else would the field not show the current value?
  5. gurubert

    Enable NVMEoF Ceph Pool With Proxmox VE managed Ceph

    Even with a cephadm-orchestrated cluster, the NVMe-oF feature is currently not production-ready. The documentation is ahead of the released code here.
  6. gurubert

    Proxmox 3 Node Cluster with CEPH - proper network configuration for VM failover?

    You cannot fail over in a WAN setup with VPN. The latencies are too high to set up an HA cluster.
  7. gurubert

    Kann Ceph Flaschenhals nicht finden

    You also have only two OSDs in the device class "large_ssd". How is that supposed to work? (A sketch for checking the OSD count per device class follows after these results.)
  8. gurubert

    Ceph Performance Question

    Because that's a single client. As you can see, performance scales up well with multiple clients.
  9. gurubert

    Ceph Stretch cluster - Crush rules assistance

    Sorry, sometimes the Ceph documentation describes features that have not been released yet.
  10. gurubert

    Ceph Stretch cluster - Crush rules assistance

    I would replace "firstn 3" with "firstn 0" to make the rules more generic (a sketch follows after these results). I do not know if it's possible to remove the global stretch mode from a cluster. Maybe you should ask on the Ceph mailing list.
  11. gurubert

    Ceph Stretch cluster - Crush rules assistance

    You have enabled stretch mode for the whole cluster. This means that all PGs from all pools need to be distributed across both DCs before they become active. Quoting the docs https://docs.ceph.com/en/squid/rados/operations/stretch-mode/ : What you want can be achieved with enabling stretch...
  12. gurubert

    Echtes Mesh Protocol

    The whole purpose of routing protocols is to always find the shortest or "cheapest" path through the network. As far as I know, what you want to achieve is simply not possible.
  13. gurubert

    Proxmox VE 8.3 released, support for Squid but not recommanded?

    Ceph Squid is still a point-zero release (19.2.0). That may be why it's not recommended.
  14. gurubert

    Proxmox Repository List of CDN Hosts

    IMHO this would be easier in that situation than trying to play catch-up with changing CDN IPs.
  15. gurubert

    VM poor storage performance

    Single-thread performance like you get with one disk inside one VM will always be disappointing, especially with small block sizes. You can just throw faster hardware at the problem. BTW: your rados bench uses 16 parallel threads and a block size of 4 MB (a comparison sketch follows after these results). This will show you nearly the maximum...
  16. gurubert

    Import OVA: working storage 'cephfs' does not support 'images' content type or is not file based.

    Have you added 'images' to the storage config for CephFS (a sketch follows after these results)? After importing the image you can still migrate it to the RBD pool, or select the target storage in the import dialog.
  17. gurubert

    VM poor storage performance

    Get multiple OSDs per node and a 25G network. You are running the bare minimum for a working system.
  18. gurubert

    Proxmox\Ceph Multipath strategy?

    Usually bonding is used with LACP and a corresponding switch stack to provide network redundancy. As Ceph uses multiple TCP connections, both physical links are utilized.
  19. gurubert

    Proxmox 8.2 welsche Kernel möglich?

    No elaborate shared storage needs to be attached in order to reproduce the error. I have documented the necessary steps here: http://gurubert.de/ocfs2_io_uring.html
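
For the "Ceph Crush map question" answer: a minimal sketch of what reassigning every pool to "replicated_rule" and then dropping an unused rule could look like, driving the ceph CLI from Python. The loop over all pools and the rule name "old_ssd_rule" are assumptions for illustration, not details from the thread.

```python
import subprocess

def ceph(*args):
    """Run a ceph CLI command and return its stdout."""
    return subprocess.run(["ceph", *args], check=True,
                          capture_output=True, text=True).stdout

# Point every pool at the default "replicated_rule"
# (expect some backfill traffic while data moves).
for pool in ceph("osd", "pool", "ls").split():
    ceph("osd", "pool", "set", pool, "crush_rule", "replicated_rule")

# Once no pool references it, a now-unused rule can be removed.
# "old_ssd_rule" is a placeholder name, not taken from the thread.
ceph("osd", "crush", "rule", "rm", "old_ssd_rule")
```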
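
For the "Kann Ceph Flaschenhals nicht finden" answer: a small sketch, assuming the ceph CLI is available, that lists every device class in the CRUSH map and counts the OSDs assigned to it. This is how the two "large_ssd" OSDs mentioned above would show up.

```python
import json
import subprocess

def ceph_json(*args):
    """Run a ceph CLI command with JSON output and parse the result."""
    out = subprocess.run(["ceph", *args, "--format", "json"], check=True,
                         capture_output=True, text=True).stdout
    return json.loads(out)

# List every device class in the CRUSH map and count its OSDs.
for devclass in ceph_json("osd", "crush", "class", "ls"):
    osds = ceph_json("osd", "crush", "class", "ls-osd", devclass)
    print(f"{devclass}: {len(osds)} OSD(s) -> {osds}")
```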
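
For the "Ceph Stretch cluster - Crush rules assistance" answer: one possible way to change "firstn 3" to "firstn 0", sketched as a decompile/edit/recompile of the CRUSH map. The file names are placeholders and the blind text substitution is only an illustration; review the decompiled map before injecting it back.

```python
import subprocess

# Dump the binary CRUSH map and decompile it to editable text.
subprocess.run(["ceph", "osd", "getcrushmap", "-o", "crush.bin"], check=True)
subprocess.run(["crushtool", "-d", "crush.bin", "-o", "crush.txt"], check=True)

# "firstn 0" means "as many replicas as the pool's size", which keeps the
# rule generic.  This substitution assumes "firstn 3" only occurs in the
# rules you actually want to change -- check crush.txt first.
with open("crush.txt") as f:
    text = f.read()
with open("crush.txt", "w") as f:
    f.write(text.replace("firstn 3", "firstn 0"))

# Recompile the map and inject it back into the cluster.
subprocess.run(["crushtool", "-c", "crush.txt", "-o", "crush.new"], check=True)
subprocess.run(["ceph", "osd", "setcrushmap", "-i", "crush.new"], check=True)
```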
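
For the "VM poor storage performance" answer: a sketch that contrasts the default rados bench settings (16 parallel operations, 4 MB objects) with a single-threaded 4 KB run, assuming a placeholder pool named "testpool".

```python
import subprocess

POOL = "testpool"  # placeholder pool name

def bench(threads, block_bytes):
    """Run a 60-second rados write benchmark with the given concurrency
    and object size, then remove the benchmark objects again."""
    subprocess.run(["rados", "bench", "-p", POOL, "60", "write",
                    "-t", str(threads), "-b", str(block_bytes),
                    "--no-cleanup"], check=True)
    subprocess.run(["rados", "-p", POOL, "cleanup"], check=True)

bench(threads=16, block_bytes=4 * 1024 * 1024)  # the defaults: near-maximum throughput
bench(threads=1,  block_bytes=4 * 1024)         # single thread, small blocks
```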
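
For the "Import OVA" answer: a sketch of how the 'images' content type could be enabled on a CephFS storage in Proxmox VE. The storage name "cephfs" and the content list are assumptions; pvesm set --content replaces the whole list, so repeat whatever is already configured.

```python
import subprocess

# Enable disk images on the CephFS storage.  "cephfs" and the content list
# are assumptions -- check /etc/pve/storage.cfg for the existing types first.
subprocess.run(["pvesm", "set", "cephfs",
                "--content", "images,iso,vztmpl,backup"], check=True)

# Show the resulting storage overview.
subprocess.run(["pvesm", "status"], check=True)
```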