Search results

  1. Ceph Ruined My Christmas

    You have separate public and cluster Ceph networks; are they both available? Do you actually need a cluster network? If not, use just public_network (see the ceph.conf sketch after this list). Also, there are a lot of OSDs down; I would check whether the disks are working as expected (hddsentinel?). Moreover, why so many pools? Usually one...
  2. Ceph Ruined My Christmas

    Cluster nodes are different from the Ceph cluster; can you give us images from Node/Ceph - if you can, from one node, but all views?
  3. cluster performance degradation

    2 nodes and a quorum device (QDevice) if you don't want a full third node; see the setup command after this list.
  4. cluster performance degradation

    Usually now (in the West), you use 3.x TB and 7.x TB SSDs or NVMe drives, and then you can get good density and performance.
  5. cluster performance degradation

    Three is okay for a start, but those HDDs with this use case are an engineering error.
  6. cluster performance degradation

    This is not true; if you have 4 nodes, you can only lose one, since 3 votes (a majority of 4) are needed for minimum quorum.
  7. cluster performance degradation

    1. Yes, 1 min is the replication time, but you need to spin up the machine on a different node when the first one dies, right? 2. For Ceph, 3 nodes is the minimum and it works great if you have fast disks. With more nodes it is even better :)
  8. cluster performance degradation

    To offload Ceph writes to the DB/WAL.
  9. cluster performance degradation

    Yeah, something like a 960 GB-1.2 TB SSD for each node; if it fails, you lose all the OSDs in that node. So yes, you could RAID them up in a mirror.
  10. pve-zsync: still supported?

    Yes, it works okay if you need something for remote sync.
  11. cluster performance degradation

    Use an enterprise disk per node and store the DB/WAL of each HDD on it; this is the best you could get (see the OSD creation sketch after this list).
  12. cluster performance degradation

    He wanted to make this easier for you, but there is no easy way out, so I will sum up his recommendations: 1) lower the number of replicas - this just writes to 2 nodes instead of 3, but it makes your cluster fragile to failures; 2) use mirrored SSDs for DB/WAL - in his case buy consumer grade to offload...
  13. Proxmox kernel vs Debian 12 Stock - ramifications of staying with Stock

    Yeah, the CDDL is compatible with the kernel, but the kernel isn't compatible with it (GPLv2).
  14. cluster performance degradation

    The Ceph/OSD view shows everything you need, so everything is okay. The HDDs have low random IOPS, and this is why, with 70 TB written to them, you get low speed and high latency. There are two options: move the DB/WAL onto an SSD/NVMe (see the OSD creation sketch after this list), or replace the HDDs with enterprise SSDs.
  15. Proxmox kernel vs Debian 12 Stock - ramifications of staying with Stock

    Not necessarily; if you use ext4 local disks, or let's say hardware RAID, you don't need it.
  16. Ceph min_size 1 for Elasticsearch / MySQL Clusters

    For Elasticsearch, replication doesn't only get you failover; you also get higher read speed because you are getting results back from two nodes. But in your case, maybe it makes sense to set replication to 0 (see the index settings sketch after this list)? And of course take snapshots of the indices. As always, it depends on what you are storing and...
  17. cluster performance degradation

    Optimal is 10G for the Ceph network; for Corosync and management, 1G is enough.
  18. Proxmox with Second Hand Enterprise SSDs

    For Proxmox I would use Samsung, Micron, Intel, and probably Toshiba/Kioxia, of the enterprise flavour. That's it. Of course, have backups, have redundancy, and that's it.
  19. cluster performance degradation

    That is also okay; maybe you didn't study Proxmox and Ceph in enough depth to see your limits.
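
For result 1: a minimal ceph.conf sketch of a public-network-only setup, assuming a hypothetical 10.0.0.0/24 subnet; when cluster_network is not set, OSD replication and heartbeat traffic simply uses the public network.

    [global]
        public_network = 10.0.0.0/24
        # no cluster_network entry: replication/heartbeat traffic stays on the public network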
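
For result 3: the "quorum" for a two-node setup is typically a QDevice on a small third machine; a sketch assuming corosync-qnetd is already installed on an external host at the hypothetical address 10.0.0.5 and the corosync-qdevice package is installed on both cluster nodes.

    # run on one of the two cluster nodes
    pvecm qdevice setup 10.0.0.5
    pvecm status    # should now list an additional QDevice vote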
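
For results 11 and 14: a sketch of putting the DB/WAL of an HDD-backed OSD on a faster device at creation time, using the db_dev option of pveceph and the hypothetical device names /dev/sdb (HDD) and /dev/nvme0n1 (SSD/NVMe).

    # create the OSD on the HDD, with its RocksDB (and, by default, the WAL) on the NVMe device
    pveceph osd create /dev/sdb --db_dev /dev/nvme0n1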
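
For result 16: dropping index replication in Elasticsearch is a per-index setting; a sketch against a hypothetical index named my-index.

    PUT /my-index/_settings
    {
      "index": { "number_of_replicas": 0 }
    }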