Threads tagged with 100gbit

  1. CEPH traffic on Omnipath or Infiniband NICs and switches?

    Hi all, we are looking into deploying a new refurbished NVMe HCI Ceph Proxmox cluster. At this point we are looking at 7 nodes, each with 2 NVMe OSD drives and room to expand to 2 more NVMe OSDs. As we would quickly saturate a 25GbE link, we should be looking into 40/50/100 GbE links and switches... (a rough bandwidth estimate is sketched after this list)
  2. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    Hi everybody, we are currently in the process of replacing our VMware ESXi NFS NetApp setup with a Proxmox Ceph configuration. We purchased 8 nodes with the following configuration: - ThomasKrenn 1HE AMD Single-CPU RA1112 - AMD EPYC 7742 (2,25 GHz, 64-Core, 256 MB) - 512 GB RAM - 2x 240GB SATA... (a benchmark command sketch follows below)
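
As a rough sanity check on the first thread's claim that 25GbE would be saturated, here is a minimal back-of-the-envelope sketch. The ~3 GB/s per-drive figure is an assumption (typical for enterprise NVMe sequential throughput), not a number from the post, and real Ceph traffic also depends on replication and pool settings, so treat the output as illustrative only.

```python
# Rough, illustrative estimate of why 2 NVMe OSDs per node can outrun a 25GbE link.
# The ~3 GB/s per-drive figure is an assumption, not a number from the thread.

def peak_node_gbps(osds_per_node: int, gbytes_per_s_per_drive: float = 3.0) -> float:
    """Disk-side throughput ceiling of one node, expressed in Gbit/s."""
    return osds_per_node * gbytes_per_s_per_drive * 8  # 1 GB/s ~ 8 Gbit/s

if __name__ == "__main__":
    for osds in (2, 4):  # current layout and the planned expansion
        print(f"{osds} NVMe OSDs/node -> ~{peak_node_gbps(osds):.0f} Gbit/s peak")
    # ~48 Gbit/s with 2 drives and ~96 Gbit/s with 4, before counting Ceph
    # replication traffic, so a single 25GbE link is a clear bottleneck.
```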
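
For the benchmark thread, one common way to measure raw Ceph throughput on a cluster like this is `rados bench`; the sketch below just wraps that CLI in Python for illustration. The pool name `bench`, the 60-second duration, the 4M block size, and the thread count are assumptions, not values taken from the post.

```python
# Hypothetical wrapper around "rados bench" to collect write throughput numbers.
# Assumes the Ceph CLI tools are installed and a test pool named "bench" exists.
import subprocess

def rados_write_bench(pool: str = "bench", seconds: int = 60,
                      block_size: str = "4M", threads: int = 16) -> str:
    """Run a rados write benchmark and return its raw text output."""
    cmd = [
        "rados", "bench", "-p", pool, str(seconds), "write",
        "-b", block_size, "-t", str(threads), "--no-cleanup",
    ]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

if __name__ == "__main__":
    print(rados_write_bench())
    # Follow up with "rados bench -p bench 60 seq" for sequential reads and
    # "rados -p bench cleanup" to remove the benchmark objects afterwards.
```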
