Yes, 10G Ethernet. I've got 3 nodes with 12 disks per node. Each node also has 2 SSDs used for the OS and mon.
You might be right that the 10G interface on the Proxmox side could be a bottleneck. Past that point, the IO Aggregator (which connects all the blades together) has a 40G trunk to the primary switch, which in turn has a 10G connection to each Ceph node, giving the Ceph cluster 30G of combined bandwidth.
I just ran another set of tests directly on one Proxmox host, against a 2x replication pool, so they should reflect real-world performance since the replication traffic between OSDs is included. Results below:
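(For anyone wanting to reproduce this kind of test: `rados bench` is the usual tool for benchmarking a pool directly from a client node. A sketch of the commands is below; the pool name is a placeholder, not the actual pool used here.)

```shell
# Hypothetical example: "testpool" is a placeholder pool name.
# Run on a node with client access to the Ceph cluster.

# 60-second sequential write test (4 MB objects by default);
# keep the objects so the read tests below have data to read.
rados bench -p testpool 60 write --no-cleanup

# Sequential and random read tests against the objects written above.
rados bench -p testpool 60 seq
rados bench -p testpool 60 rand

# Remove the benchmark objects when finished.
rados -p testpool cleanup
```

Since the pool has 2x replication, every client write generates a second copy over the cluster network, so the write numbers already account for replication overhead.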