Search results

  1. CEPH advise

    I would choose the one with better cooling; everything else is pretty much the same. On the other hand, I have one customer with Samsung SSDs and a 4-node CEPH, and it works okay.
  2. CEPH advise

    Any enterprise SSD, except Kingston; they lie a lot about their cache or PLP. For a homelab, you could go with whatever, since this is not important.
  3. CEPH advise

    The approach is okay for an initial cluster; you will need at least 2.5G, and these SSDs are okay for the OS, not for CEPH.
  4. Proxmox with Second Hand Enterprise SSDs

    I have a few customers who are using only used SSDs in a CEPH cluster. Just buy a few spares, and when a drive gets kicked out of the CEPH cluster, replace it.
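
    A rough sketch of the swap (the OSD id and device path here are just placeholders):

      ceph osd out osd.12                        # let data rebalance away from the dead drive
      systemctl stop ceph-osd@12                 # stop the OSD daemon on that node
      ceph osd purge 12 --yes-i-really-mean-it   # remove it from the CRUSH map and auth
      pveceph osd create /dev/nvme1n1            # add the replacement disk as a new OSD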
  5. Storage best practices for large setups

    Big deployments in my case are CEPH storage > 100 TB, I think. We usually start with 10G, but are now moving to 40G, and probably next year to 100G, because the equipment is finally affordable. For corosync 1G is enough, and everything is redundant through bonding. Stability is okay, I don't...
  6. Storage best practices for large setups

    For big deployments in my case, I only use CEPH, with redundant switches. Works flawlessly. But if you bought VMware-only storage, then this is maybe not feasible for you.
  7. Network Traffic Monitor - What the best?

    Some of those things you get from Netflow Analyzer, some of them from an NMS. I work for NetVizura, but you can install pretty much any NetFlow appliance and you will get those results. There are two options for export: something like a probe, e.g. softflowd, or a different way if you use OvS. Some guides I...
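
    Rough examples of both export options (interface name and collector address are placeholders):

      # probe-style export with softflowd
      softflowd -i vmbr0 -n 192.0.2.10:2055 -v 9

      # NetFlow export straight from an Open vSwitch bridge
      ovs-vsctl -- set Bridge vmbr0 netflow=@nf \
        -- --id=@nf create NetFlow targets=\"192.0.2.10:2055\"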
  8. Network Traffic Monitor - What the best?

    What kind of observability are you trying to get? Netflow or something else?
  9. Creating RAIDZ2 with different hard drive sizes

    You add -f to the whole zpool create sausage.
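
    Something like this, as a rough sketch (pool and device names are placeholders); the vdev capacity ends up limited by the smallest drive:

      # -f overrides the refusal to mix devices of different sizes
      zpool create -f tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd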
  10. proxmox self service vm deployment

    There is also an OSS version: https://github.com/The-Network-Crew/Proxmox-VE-for-WHMCS
  11. I/O Performance issues with Ceph

    Crucial P3s are okay for Windows office machines, but nothing else.
  12. Ceph 19.2 Squid Stable Release and Ceph 17.2 Quincy soon to be EOL

    I've upgraded one 4-node cluster, no problems so far.
  13. Is there any tool to migrate VMs from VMWARE to Proxmox?

    https://forum.proxmox.com/threads/new-import-wizard-available-for-migrating-vmware-esxi-based-virtual-machines.144023/page-20#post-708443
  14. Automatic Updates

    You could update them via Ansible, one by one.
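
    A minimal ad-hoc sketch (the "pve" inventory group and inventory path are assumptions):

      # --forks 1 makes Ansible walk the nodes one at a time
      ansible pve -i inventory.ini -b --forks 1 \
        -m apt -a "update_cache=yes upgrade=dist"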
  15. Promox Replication DC to DRC

    If your SAN is ZFS-based, then yes, you could use ZFS replication (storage replication in Proxmox).
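
    A minimal sketch, assuming local ZFS storage on both nodes, VM 100, and a target node called pve-drc:

      # sync every 15 minutes
      pvesr create-local-job 100-0 pve-drc --schedule '*/15'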
  16. problems with KINGSTON_SFYRD4000G disks in ceph cluster

    The problem is that these Kingstons are office or gaming drives; they are not intended for serious work. Buy better drives.
  17. Size of 3 node cluster Proxmox VE with NVMe

    Available = 19.2 TB; because of max 3 / min 2 you lose 30% by default, so around 12 TB, and if you don't overfill it, 10 TB. As for disks, 6, but if all 6 disks die at the same time, it will be hard for Ceph to be green; it would probably need around 30% more space (than used). Keep that in mind.
  18. Size of 3 node cluster Proxmox VE with NVMe

    Available = 19.2 TB, usable = 12.8 TB, and this is the max. As always, around 80%, not more, should be filled, so let's say 10 TB. Excluding a node, probably the same number of disks can die.