10 Gb/s switch latency (microseconds difference): does it matter?

lucaferr

Hi! I'm comparing two switches with 10 Gb/s ports, to build a small 3-node cluster with PVE+Ceph. Ceph will run on NVMe drives. One switch has a latency of 2.8 microseconds at 10 Gb/s, while the other (which is cheaper) has 4.8 microseconds at 10 Gb/s.
Does this difference matter at all for the performance of the Ceph cluster?
Thanks!
 
Hi, how do you test this? Just out of interest :D
 
I just downloaded the datasheets :)

Ok guys, I googled and thought about it a bit (maybe I should have done it sooner). As I told you, the two switches differ by 2 microseconds. Then I figured out that:
  • A SATA SSD has a read latency of roughly 100 microseconds (flash SSDs in general sit in the 30-100 microsecond range, see the TechReport link below). So if your Ceph is based on SATA SSDs, those 2 microseconds add a couple of percent at most
  • A good NVMe drive (like Samsung 970 Pro) has a read latency of not less than 30 microseconds. So in this case 2 microseconds would add 6% in the best case scenario (which becomes around 3% on average)
  • Latency seen by Ceph using only NVMe drives and 10 Gb/s network is still measured in milliseconds (so always > 1000 microseconds)
These facts make me think that the switching latency is never going to be significant (see the quick sanity check after the references). So I'll probably go for the cheaper 10 Gb/s switch. Please correct me if you think some of my assumptions are wrong.

REF: https://www.techpowerup.com/review/samsung-970-evo-ssd-500-gb/5.html
REF: https://techreport.com/blog/3467943... have latency in the 30-100 microsecond range
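
Just to sanity-check my reasoning, here is a rough back-of-the-envelope sketch in Python. The per-component numbers are my assumptions (not measurements), only meant to show how small the switch contribution is relative to a whole Ceph round trip:

# Rough, assumed latency budget for a single 4K read in a small PVE+Ceph
# cluster (all values in microseconds; illustrative, not measured).
budget = {
    "switch hop":        2.8,     # per datasheet (4.8 for the cheaper switch)
    "NIC + kernel net":  20.0,    # assumed
    "Ceph/OSD software": 500.0,   # assumed CPU-bound part of the IO path
    "NVMe read":         30.0,    # e.g. a Samsung 970-class drive
}

total = sum(budget.values())
extra = 4.8 - 2.8  # the difference between the two switches

for name, us in budget.items():
    print(f"{name:20s} {us:8.1f} us  ({us / total:6.1%})")
print(f"{'total':20s} {total:8.1f} us")
print(f"2 us slower switch adds {extra / total:.2%} to the round trip")

Even with fairly optimistic software overhead, the extra 2 microseconds comes out well under 1% of the total.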
 
Can you tell us which switch models you are considering?
For this particular cluster, which is very small (3 nodes), I was considering an 8-port managed 10 Gb/s switch (Netgear XS708T) vs a 5-port unmanaged 10 Gb/s switch (Netgear XS505M). The latter is cheaper and has 4.8 microseconds latency, while the former costs more and has 2.8 microseconds latency.
I've used an XS708T in a 5-node PVE+Ceph cluster for another customer for years and I'm really satisfied with it.
 
You not only have network latency, you also have the CPU time to process the I/O on both the server and the client side.
So use a high-frequency CPU first; that can gain you some milliseconds. You really don't care about a 2 microsecond difference on the switch side.
This mainly matters if you need to do a lot of small IOPS.
If you need big throughput (video streaming, for example), you don't care too much about latency.
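
Rough illustration (assumed numbers, just a sketch to show why small IO is latency-bound while streaming is not):

# Illustrative only: why latency dominates small random IO but not big streams.
# All latency figures below are assumptions, in microseconds.
per_request_latency_us = 2.8 + 20 + 500 + 30  # switch + net stack + Ceph + NVMe (assumed)

# Queue depth 1: one request in flight at a time, so IOPS = 1 / latency.
qd1_iops = 1_000_000 / per_request_latency_us
print(f"QD1 4K IOPS ~ {qd1_iops:.0f}")        # latency-bound

# Large sequential reads: limited by the 10 Gb/s link, not by latency.
link_bytes_per_s = 10e9 / 8
print(f"streaming throughput limit ~ {link_bytes_per_s / 1e6:.0f} MB/s")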


A good NVMe drive (like Samsung 970 Pro)
I don't know about NVMe, but the Samsung consumer Pro SSDs suck with Ceph. You really need to use datacenter SSDs with fast sync writes.
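
To give an idea of what "fast sync writes" means, here is a rough Python probe for Linux (the path is just a placeholder and fio is the usual tool for this, so take it only as an illustration). Consumer drives without power-loss protection are often much slower here than their normal benchmarks suggest:

# Rough sync-write latency probe (Linux only). Writes 4K blocks with O_DSYNC
# and reports the average latency. NOT a replacement for a proper fio run.
import os, time

path = "/mnt/testdrive/syncwrite.bin"   # placeholder path on the drive under test
block = os.urandom(4096)
count = 1000

fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o600)
try:
    start = time.perf_counter()
    for _ in range(count):
        os.write(fd, block)              # each write must hit stable storage
    elapsed = time.perf_counter() - start
finally:
    os.close(fd)

print(f"avg O_DSYNC 4K write latency: {elapsed / count * 1e6:.0f} us")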
 
Samsung 980 Pro + X570 + Ryzen 5600X = 0.03 ms

Tbh, I've never seen anywhere in the specs of a switch how fast it is.
Dunno if it makes any difference at all.

The only thing I know is that a 10 Gb/s port has much lower latency than a 1 Gb/s port (see the rough numbers below)...
So 10 Gb/s isn't only faster in transfer, it's much faster at switching too.

So that's the only thing you should factor into your decision. And don't compare switches... Just get the cheapest 10 Gb/s switch with LACP/VLAN/SNMP support and you are good.
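
For what it's worth, part of that difference is plain serialization delay. A quick sketch (assuming standard 1500-byte frames and a store-and-forward switch that must receive the whole frame before sending it on):

# Time to clock one full-size Ethernet frame onto the wire (serialization delay).
# A store-and-forward switch pays this once per hop, on top of its own latency.
frame_bits = 1500 * 8

for speed_gbps in (1, 10):
    delay_us = frame_bits / (speed_gbps * 1e9) * 1e6
    print(f"{speed_gbps:>2} Gb/s: {delay_us:.2f} us per frame")
# -> 12.00 us at 1 Gb/s vs 1.20 us at 10 Gb/s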

Cheers
 
Thank you for all your considerations. I'll go for the cheaper 10 Gb/s switch for this small cluster (the unmanaged 5-port one).
Regarding 960/970 Pro SSDs, I've built much bigger clusters (10 nodes, hundreds of VMs) with Ceph using only Samsung prosumer NVMe drives, and I'm really satisfied with their performance and reliability (that cluster has been running for years).
There is a difference between a pure 4K benchmark and real usage for web servers, for example. Also, that thread is about ZFS, which works differently.
I agree that if you have a very write-intensive, constant workload you should go for enterprise SSDs. But that is not the case with web servers. Our monitoring system logs disk performance every 30 seconds: the I/O% of individual NVMe drives rarely exceeds 5% ;)
 
Which Samsung drives are you using?
 