100G

  1. J

    Confused... Ceph delivering the same performance on 100G as it did in the 1G test

    So... I've been using Proxmox for many years (almost 20) and have several existing server clusters. I'm now building a new server stack: 6x Dell R760, 768 GB RAM, dual 32-core CPUs. Each server has 11x 850 GB 24 Gbps SAS SSDs, 4x 1G NICs, plus 2x dual-port 100G NICs. The drives are capable of 3,400 MB/s. Built the cluster...
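Numbers like these can be sanity-checked with quick arithmetic before touching Ceph tuning. A minimal sketch, using the drive count and speeds from the post above (the 3x replication factor is an assumption; adjust for your pool):

```python
# Back-of-the-envelope check: is the network or the disks the likely bottleneck?
# Figures from the post: 11 drives per node at 3,400 MB/s; 100 Gbit and 1 Gbit links.
# Replication factor 3 is an assumption -- adjust for your pool.

drives_per_node = 11
drive_mbps = 3400                       # MB/s per SSD
disk_bw = drives_per_node * drive_mbps  # aggregate disk bandwidth per node, MB/s

link_100g = 100_000 / 8                 # 100 Gbit/s expressed in MB/s
link_1g = 1_000 / 8                     # 1 Gbit/s expressed in MB/s

replication = 3                         # each client write fans out over the network

print(f"disk bandwidth per node: {disk_bw} MB/s")
print(f"100G link: {link_100g:.0f} MB/s (~{link_100g / replication:.0f} MB/s of client writes)")
print(f"1G link:   {link_1g:.0f} MB/s")
```

If benchmark results land near the 1G figure (~125 MB/s) even on the new hardware, the suspicion would be that Ceph traffic is still traversing a 1G interface — worth verifying the Ceph public/cluster network settings and the routes actually in use.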
  2. A

    100G network card and interrupt handling (ksoftirqd process loads a single CPU core at 100%)

    I have a test server with a Mellanox ConnectX-5 (MT27800 Family, 100GbE, dual-port QSFP28) network card installed. The server runs Proxmox 8.4. For testing purposes, I created a virtual machine with Ubuntu 22.04 (VM 1) and allocated one port of the Mellanox network card to the virtual machine...
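A symptom like this usually means all of the NIC's receive queues (or their interrupts) land on one core. Before changing anything, it helps to see the actual distribution. A minimal sketch that tallies per-CPU interrupt counts from `/proc/interrupts` (the `mlx5` device name and the sample data are assumptions for illustration; match whatever your lines contain):

```python
# Summarise how a NIC's interrupts are spread across CPUs by parsing
# /proc/interrupts. If one CPU carries nearly all the counts, RSS/IRQ affinity
# is not spreading the load -- matching the ksoftirqd-on-one-core symptom.
from collections import Counter

def irq_distribution(interrupts_text: str, device: str) -> Counter:
    lines = interrupts_text.strip().splitlines()
    cpus = lines[0].split()                  # header row: CPU0 CPU1 ...
    per_cpu = Counter()
    for line in lines[1:]:
        if device not in line:
            continue
        fields = line.split()
        # fields[0] is the IRQ number; the next len(cpus) fields are per-CPU counts
        for cpu, count in zip(cpus, fields[1:1 + len(cpus)]):
            per_cpu[cpu] += int(count)
    return per_cpu

# Synthetic sample in /proc/interrupts format (values are hypothetical):
sample = """\
        CPU0    CPU1    CPU2    CPU3
 50:  900000       0       0       0   IR-PCI-MSI  mlx5_comp0
 51:  850000       0       0       0   IR-PCI-MSI  mlx5_comp1
"""
print(irq_distribution(sample, "mlx5"))  # all counts on CPU0 -> poor spread
```

On a real host you would read `open("/proc/interrupts").read()` instead of the sample string; a heavily skewed result suggests looking at the NIC's channel count (`ethtool -l`) and IRQ affinity settings.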
  3. J

    Mellanox ConnectX-6 (100GbE) Performance Issue – Only Reaching ~19Gbps Between Nodes

    Hello everyone, We’ve been encountering an issue during some pre-production testing, and I’d like to ask for your input — it seems likely to be a configuration-related problem, but we’re beginning to suspect there might be a hardware factor involved as well. The setup involves connecting two...
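Before suspecting hardware in a case like this, it is worth checking whether a single TCP stream can even reach 100 Gbit/s. The bandwidth-delay product puts a ceiling on one stream (throughput ≤ window / RTT), and on a LAN that ceiling is usually far above 19 Gbps — which points at per-core packet-processing limits instead, typically worked around with parallel streams (e.g. `iperf3 -P 8`). A quick sketch of the arithmetic (the 16 MB window and 0.1 ms RTT are assumptions, not measurements from the post):

```python
# Single TCP stream ceiling: throughput <= window_size / RTT.
# If that ceiling is far above the observed 19 Gbps, the window is not the
# bottleneck -- the limit is per-core packet processing, so parallel streams
# (and IRQ/RSS tuning) are the next things to test.

window_bytes = 16 * 1024 * 1024   # assumed max TCP window
rtt_s = 0.0001                    # assumed LAN round-trip time: 0.1 ms

ceiling_gbps = window_bytes * 8 / rtt_s / 1e9
print(f"window-limited ceiling: {ceiling_gbps:.0f} Gbps")

# RTT at which a 16 MB window would explain only 19 Gbps:
rtt_for_19g = window_bytes * 8 / 19e9
print(f"RTT needed to cap at 19 Gbps: {rtt_for_19g * 1000:.1f} ms")  # ~7 ms, not LAN-like
```

With these assumptions the window-limited ceiling is well over 1 Tbps, and a 16 MB window would only cap at 19 Gbps with an RTT of roughly 7 ms — far above anything seen between directly connected nodes. That makes a single-core/IRQ bottleneck the more plausible explanation than the link itself.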
  4. helojunkie

    New Servers w/100G Trunks, Should I still use a separate Corosync network?

    So, as the title says, I am deploying all new Proxmox servers to replace our aging fleet of 2U Dells. Currently, I have a 10G trunk for all of my normal VLANs and a separate 10G connection specific to only Corosync VLAN traffic. My new servers have 4 x 10G NICs and 2 x 100G NICs each. I was...
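For reference, Corosync 3 (knet) supports multiple links with priorities, so one common pattern is to keep a dedicated NIC as the preferred Corosync link and use the 100G trunk only as a fallback. A sketch of the relevant `corosync.conf` pieces — the addresses, names, and priority values are placeholders, not a drop-in config:

```
totem {
  version: 2
  cluster_name: example
  # link 0: dedicated corosync NIC (higher knet_link_priority is preferred)
  interface {
    linknumber: 0
    knet_link_priority: 10
  }
  # link 1: 100G trunk VLAN, fallback only
  interface {
    linknumber: 1
    knet_link_priority: 5
  }
}

nodelist {
  node {
    name: pve1
    nodeid: 1
    ring0_addr: 10.0.0.1    # dedicated corosync network
    ring1_addr: 10.1.0.1    # 100G trunk VLAN
  }
  # ... one node { } block per cluster member
}
```

This keeps the latency-sensitive Corosync traffic isolated from storage and VM traffic while still surviving the loss of the dedicated link.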