100gbit

  1. Best way to separate traffic and configure the PVE network

    Hi, we're building a 4 node PVE cluster with NVMe Ceph storage. Available NICs: Nic1: 2 x 10G + 2 x 1G, Nic2: 2 x 10G, Nic3: 2 x 100G. Traffic/Networks: we think we need the following traffic separations: PVE Management, PVE Cluster & Corosync, Ceph (public)... (one possible interface layout for this kind of separation is sketched after this list)
  2. CEPH traffic on Omnipath or Infiniband NICs and switches?

    Hi all, we are looking into deploying a new refurbished NVMe HCI Ceph Proxmox cluster. At this point we are looking at 7 nodes, each with 2 NVMe OSD drives and room for 2 more NVMe OSDs. As we would quickly saturate a 25GbE link, we should be looking into 40/50/100 GbE links and switches... (a rough bandwidth estimate illustrating this is worked through after the list)
  3. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    Hi everybody, we are currently in the process of replacing our VMware ESXi NFS NetApp setup with a Proxmox Ceph configuration. We purchased 8 nodes with the following configuration: - ThomasKrenn 1HE AMD Single-CPU RA1112 - AMD EPYC 7742 (2,25 GHz, 64-Core, 256 MB) - 512 GB RAM - 2x 240GB SATA... (the usual commands for producing such Ceph benchmark figures are sketched after the list)
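
For the first thread, one possible way to map those NICs onto the required networks on each node; this is only a sketch, and all interface names, addresses and subnets below are assumptions for illustration, not taken from the thread:

    # /etc/network/interfaces on one node (ifupdown2) -- minimal sketch, names/subnets assumed
    auto lo
    iface lo inet loopback

    # 1G port -> PVE management (GUI/API) on the default bridge
    auto vmbr0
    iface vmbr0 inet static
        address 192.168.10.11/24
        gateway 192.168.10.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

    # 10G port -> dedicated Corosync link (latency-sensitive, keep it isolated)
    auto enp65s0f0
    iface enp65s0f0 inet static
        address 10.10.20.11/24

    # 100G ports -> Ceph public and Ceph cluster networks
    # (the same subnets go into ceph.conf as public_network / cluster_network)
    auto enp129s0f0
    iface enp129s0f0 inet static
        address 10.10.30.11/24

    auto enp129s0f1
    iface enp129s0f1 inet static
        address 10.10.40.11/24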
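
For the second thread, the 25GbE concern can be checked with a rough estimate; the per-drive throughput and replica count below are assumptions, not figures from the thread:

    2 NVMe OSDs/node x ~3 GB/s per drive  ≈ 6 GB/s
    6 GB/s x 8 bits/byte                  ≈ 48 Gbit/s of raw drive throughput per node
    48 Gbit/s (before 3x replication traffic) > 25 Gbit/s -> the 25GbE link, not the drives, becomes the bottleneck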
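
For the third thread, figures like those are usually produced with the standard Ceph and fio tools; the pool name and device path below are placeholders, not values from the thread:

    # 4M object write/read against a test pool, run from one of the nodes
    rados bench -p testpool 60 write -b 4M -t 16 --no-cleanup
    rados bench -p testpool 60 seq -t 16
    rados bench -p testpool 60 rand -t 16
    rados -p testpool cleanup

    # raw 4k random-write performance of a single NVMe before it becomes an OSD
    # (destructive -- only run this on an empty drive)
    fio --name=4krandwrite --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
        --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting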