Search results

  1. cluster performance degradation

    Yeah, something like a 960 GB to 1.2 TB SSD for each node; if it fails, you lose all OSDs in that node. So yes, you could set them up in a RAID mirror.
  2. pve-zsync: still supported?

    Yes, works okay if you need something for a remote sync.
  3. cluster performance degradation

    Use an enterprise disk per node and store the DB/WAL of each HDD on it (see the sketch after these results). This is the best you could get.
  4. cluster performance degradation

    He wanted to make this easier for you, but there is no easy way out. To sum up his recommendations: 1) lower the number of replicas (see the sketch after these results) - this just writes to 2 nodes instead of 3, but it makes your cluster more fragile to a node dying; 2) use mirrored SSDs for DB/WAL - in his case buy consumer grade to offload...
  5. Proxmox kernel vs Debian 12 Stock - ramifications of staying with Stock

    Yeah, the CDDL is compatible with the kernel, but the kernel isn't compatible with it (GPL2).
  6. cluster performance degradation

    The ceph/osd output shows everything you need, so the setup itself is okay. The HDDs have low random IOPS, and that is why, with 70 TB written to them, you get low speed and high latency. There are two options: add the DB/WAL onto SSD/NVMe, or replace the HDDs with enterprise SSDs.
  7. Proxmox kernel vs Debian 12 Stock - ramifications of staying with Stock

    Not necessarily; if you use ext4 local disks, or let's say hardware RAID, you don't need it.
  8. Ceph min_size 1 for Elasticsearch / MySQL Clusters

    For Elastic, replication doesn't just get you failover; you also get higher read speed because you are getting results back from two nodes. But in your case maybe it makes sense to set replication to 0 (see the sketch after these results)? And of course take snapshots of the indices. As always, it depends on what you are storing and...
  9. cluster performance degradation

    Optimal is 10G for the Ceph network; for corosync and management, 1G is enough.
  10. Proxmox with Second Hand Enterprise SSDs

    For Proxmox I would use Samsung, Micron, Intel, and probably Toshiba/Kioxia, of the enterprise flavour. Of course, have backups, have redundancy, and that's it.
  11. cluster performance degradation

    That is also okay; maybe you didn't study Proxmox and Ceph in depth enough to see your limits.
  12. cluster performance degradation

    As the disks fill up and keep syncing, performance degrades over time.
  13. A proxmox install script for Vaultwarden

    Yeah, the script works okay, never mind the date. It pulls from dani and builds .deb files. But yeah, you would need to install nginx as a reverse proxy.
  14. cluster performance degradation

    Use DB/WAL SSDs to lower the HDD latency a bit. But that can only fix it so much. Usually I would say use enterprise SSDs.
  15. A proxmox install script for Vaultwarden

    Why don't you just install the .deb file for Vaultwarden? https://github.com/greizgh/vaultwarden-debian
  16. cluster performance degradation

    Corosync has a failover-link option itself, so there is no need for active-backup: https://forum.proxmox.com/threads/configuring-dual-corosync-networks.104991/ But if management and something else is on this network, then you should separate it.
  17. cluster performance degradation

    First question: why are you using active-backup on the corosync link? As for the DB/WAL on faster disks, read here: https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster
  18. cluster performance degradation

    The latency is really high; I would add one SSD per node and move the WAL/DB onto them.
  19. cluster performance degradation

    Tell us more about the Ceph configuration, network, disks, etc.
  20. Proxmox Datacenter Manager - First Alpha Release

    For now it looks good with 3 clusters, testing migration :)
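
Several of the results above (3, 6, 14, 17, 18) boil down to the same change: put each HDD OSD's DB/WAL on a faster SSD/NVMe. As a minimal sketch of what that looks like when (re)creating an OSD on a Proxmox node, with placeholder device paths (/dev/sdb for the HDD, /dev/nvme0n1 for the SSD):

```python
# Sketch only: create an HDD-backed OSD whose RocksDB/WAL lives on a faster
# device, using Proxmox's pveceph wrapper. Device paths are placeholders.
import subprocess

HDD = "/dev/sdb"         # slow data disk
DB_SSD = "/dev/nvme0n1"  # enterprise SSD/NVMe that will hold the DB (and WAL)

# --db_dev puts the OSD's RocksDB on the faster device; unless --wal_dev is
# given separately, the WAL follows the DB onto the same device.
subprocess.run(["pveceph", "osd", "create", HDD, "--db_dev", DB_SSD], check=True)
```

Keep result 1 in mind: if that shared DB/WAL SSD dies, every OSD using it is lost with it, which is why result 4 suggests mirroring those SSDs.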
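
Result 4's first recommendation (and the min_size question behind result 8's thread) is a per-pool setting. A rough sketch with the stock ceph CLI, assuming a hypothetical pool called vm-pool:

```python
# Sketch only: tune the replica counts of a Ceph pool. "vm-pool" is a
# placeholder name; 2/2 reflects the trade-off described in result 4.
import subprocess

POOL = "vm-pool"  # hypothetical pool name

def set_pool_option(pool: str, var: str, val: str) -> None:
    """Run `ceph osd pool set <pool> <var> <val>`, raising if it fails."""
    subprocess.run(["ceph", "osd", "pool", "set", pool, var, val], check=True)

# 2 replicas: writes go to 2 nodes instead of 3 (faster, but more fragile).
set_pool_option(POOL, "size", "2")
# Keep min_size at 2 so the pool pauses I/O rather than serving from a single
# remaining copy; dropping min_size to 1 is what result 8's thread debates.
set_pool_option(POOL, "min_size", "2")
```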
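
Result 8's suggestion for Elasticsearch (replication 0 plus index snapshots) maps to two REST calls against the documented _settings and _snapshot endpoints. In this sketch the host, index name, and snapshot repository are placeholders, and the repository is assumed to be registered already:

```python
# Sketch only: drop index replication to 0 and take a snapshot instead,
# per result 8. Endpoint, index, and repository names are placeholders.
import requests

ES = "http://localhost:9200"  # Elasticsearch endpoint (placeholder)
INDEX = "app-logs"            # index name (placeholder)
REPO = "nightly"              # snapshot repository, assumed to already exist

# number_of_replicas = 0: one copy per shard, so no failover and no extra
# read fan-out, but also no replica write amplification on the storage below.
requests.put(
    f"{ES}/{INDEX}/_settings",
    json={"index": {"number_of_replicas": 0}},
).raise_for_status()

# Compensate with a snapshot of the index.
requests.put(
    f"{ES}/_snapshot/{REPO}/snap-1?wait_for_completion=true",
    json={"indices": INDEX},
).raise_for_status()
```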