Hi Folks,
We have 100G Ethernet (Mellanox HP QSFP28 100G cards) on each of our Proxmox nodes, which run AMD EPYC processors, so PCIe bandwidth is not a problem.
Each node has 6.4TB SN630 enterprise NVMe SSDs in a Ceph cluster, so storage shouldn't be the bottleneck either, as the Ceph traffic also runs over the 100G link.
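(For what it's worth, this is the sort of check that can confirm the Ceph side is healthy; the pool name bench-pool below is just a placeholder:)

# 60-second write benchmark against a test pool (pool name is a placeholder)
rados bench -p bench-pool 60 write --no-cleanup
# sequential read benchmark against the objects left behind
rados bench -p bench-pool 60 seq
# remove the benchmark objects afterwards
rados -p bench-pool cleanup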
When we do a VM-to-VM copy (VirtIO network card with the latest drivers installed), the speed in an iptraf test is only 7-8 Gbit/s between a Windows and a Linux VM, and 13-14 Gbit/s between two Linux VMs.
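(Since a file copy mixes in storage I/O, a synthetic iperf3 run between the VMs is a cleaner way to measure the raw network path; a minimal sketch, where 10.0.0.10 is a placeholder for the receiving VM's IP:)

# on the receiving VM
iperf3 -s
# on the sending VM: 4 parallel streams for 30 seconds (IP is a placeholder)
iperf3 -c 10.0.0.10 -P 4 -t 30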
This is nowhere near 100G, or even 25G (a 100G link is essentially 4 x 25G lanes).
Any ideas on how we can get good speeds out of this kind of hardware?
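One knob we haven't ruled out yet is VirtIO multiqueue, so a single vCPU isn't handling all the interrupts; this is how we would try enabling it (a sketch only; VMID 101, bridge vmbr0, and the queue count are placeholders):

# give the VM's virtio NIC 8 queues (VMID 101 and vmbr0 are placeholders)
qm set 101 --net0 virtio,bridge=vmbr0,queues=8
# inside the Linux guest, spread the queues across the vCPUs
ethtool -L eth0 combined 8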
By the way, we have an Arista 7060CX 32-port 100GbE switch.