Search results

  1. N

    cluster performance degradation

    The filling and syncing of the disks gets slower over time.
  2. N

    A proxmox install script for Vaultwarden

    Yeah, the script works okay, never mind the date. It pulls from dani, and builds .deb files. But yeah, you would need to install nginx as a reverse proxy (see the sketch after this list).
  3. N

    cluster performance degradation

    Use DB/WAL SSDs to lower the HDD latency, but that can only fix it a bit. Usually I would say use enterprise SSDs (see the sketch after this list).
  4. N

    A proxmox install script for Vaultwarden

    Why don't you just install the .deb file from vaultwarden? https://github.com/greizgh/vaultwarden-debian
  5. N

    cluster performance degradation

    Corosync has an option for a failover link itself, no need for active-backup: https://forum.proxmox.com/threads/configuring-dual-corosync-networks.104991/ (see the sketch after this list). But if management and something else are on this network, then you should separate them.
  6. N

    cluster performance degradation

    First question: why are you using active-backup on the corosync link? As for the DB/WAL on faster disks, read here: https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster
  7. N

    cluster performance degradation

    The latency is really high; I would add one SSD per node and move the WAL/DB onto them.
  8. N

    cluster performance degradation

    Tell us more about the Ceph configuration, network, disks, etc.
  9. N

    Proxmox Datacenter Manager - First Alpha Release

    For now it looks good with 3 clusters, testing migration :)
  10. N

    CEPH advise

    I would choose the one with better cooling; everything else is pretty much the same. On the other hand, I have one customer with Samsung SSDs and a 4-node CEPH, and it works okay.
  11. N

    CEPH advise

    Any enterprise SSD, except Kingston; they lie a lot about their cache or PLP. For a homelab, you could go with whatever, since this is not important.
  12. N

    CEPH advise

    The approach is okay for an initial cluster. You will need at least 2.5G, and these SSDs are okay for the OS, not for CEPH.
  13. N

    Proxmox with Second Hand Enterprise SSDs

    I have a few customers who are using only used SSDs in a CEPH cluster. Just buy a few spares, and when drives get kicked out of the CEPH cluster, just replace them.
  14. N

    Storage best practices for large setups

    Big deployments in my case are Ceph storages > 100TB, I think. We usually start with 10G, but are now moving to 40G, and probably next year to 100G, because the equipment is finally affordable. For corosync, 1G is enough, and everything is redundant through bonding. Stability is okay, I don't...
  15. N

    Storage best practices for large setups

    For big deployments in my case, I only use CEPH, with redundant switches. Works flawlessly. But if you bought VMware-only storage, then this is maybe not feasible for you.
  16. N

    Network Traffic Monitor - What the best?

    Some of those things you get from NetFlow Analyzer, some of them from an NMS. I work for NetVizura, but you can install pretty much any NetFlow appliance and you will get those results. There are two options for export: something like a probe, e.g. softflowd, or a different way if you use OvS (see the sketch after this list). Some guides I...
  17. N

    Network Traffic Monitor - What the best?

    What kind of observability are you trying to get? Netflow or something else?
  18. N

    Creating RAIDZ2 with different hard drive sizes

    You add -f to the whole zpool create command (see the sketch after this list).
  19. N

    proxmox self service vm deployment

    There is also an OSS version: https://github.com/The-Network-Crew/Proxmox-VE-for-WHMCS
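
For result 2, a minimal nginx reverse-proxy sketch for Vaultwarden. The upstream 127.0.0.1:8000 and the hostname vault.example.com are assumptions, not taken from the post; adjust them and the certificate paths to your setup.

    server {
        listen 443 ssl;
        server_name vault.example.com;                 # hypothetical hostname

        ssl_certificate     /etc/ssl/certs/vault.pem;  # your certificate
        ssl_certificate_key /etc/ssl/private/vault.key;

        location / {
            proxy_pass http://127.0.0.1:8000;          # assumed Vaultwarden (Rocket) port
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            # keep WebSocket notifications working
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }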
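
For results 3, 6 and 7, a sketch of placing the OSD DB/WAL on a faster device when creating the OSD, assuming Proxmox's pveceph tooling; /dev/sdb and /dev/nvme0n1 are placeholder devices.

    # create an OSD on the HDD with its RocksDB/WAL on the NVMe/SSD
    pveceph osd create /dev/sdb --db_dev /dev/nvme0n1
    # one SSD/NVMe is usually shared as the DB/WAL device by several HDD OSDs on the same node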
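
For result 5, a sketch of giving corosync its own redundant links with pvecm instead of an active-backup bond; the cluster name and addresses are placeholders.

    # first node: dedicated corosync network as link0, a second network as fallback link1
    pvecm create demo-cluster --link0 10.10.10.1 --link1 10.10.20.1
    # joining nodes point at an existing member and pass their own link addresses
    pvecm add 10.10.10.1 --link0 10.10.10.2 --link1 10.10.20.2

Kronosnet then fails over between the links on its own, which is why the bond is not needed.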
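
For result 16, a hedged example of the two export options mentioned there; the collector address 192.0.2.10:2055 and the bridge name vmbr0 are placeholders.

    # probe on a Linux bridge: softflowd exports NetFlow v9 to the collector
    apt install softflowd
    softflowd -i vmbr0 -n 192.0.2.10:2055 -v 9

    # with Open vSwitch, the bridge can export NetFlow itself
    ovs-vsctl -- set Bridge vmbr0 netflow=@nf -- --id=@nf create NetFlow targets=\"192.0.2.10:2055\"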
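
For result 18, what that looks like written out; the pool name and devices are placeholders. With mixed drive sizes, every member of the RAIDZ2 vdev is only used up to the capacity of the smallest disk, which is why zpool refuses to create it without -f.

    # -f forces creation even though the devices differ in size
    zpool create -f tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde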