I have a relatively small NVMe Ceph cluster running on a dedicated 10 Gb network. RBD and CephFS performance looks pretty good at around 500 MB/s in various synthetic benchmarks. Uploading a 16 GB test file to S3 (RadosGW) from a VM, however, is terrible at only about 25 MB/s, and uploading directly from the PVE host OS gets me around 70 MB/s. I realize throughput testing is subjective, but that seems pretty bad, so I was wondering what other folks are getting. In my S3 tests I upload directly to the RadosGW IP address, the gateway runs on one of the Ceph nodes, and RadosGW is not running in SSL mode. I'm planning to move Ceph over to some 40 Gb gear I've ended up with, but it doesn't look like I'm anywhere near the limit of a 10 Gb network.
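The S3 test is really just timing one big upload. A rough boto3 sketch of that kind of test is below; the endpoint IP, bucket name, and credentials are placeholders, and the test file is 16 GB of random data:

# Rough upload-throughput test: time one 16 GB object going to RadosGW.
# Endpoint IP, bucket name, and credentials are placeholders.
import os
import time

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://192.168.10.21:7480",  # RadosGW IP, plain HTTP (no SSL)
    aws_access_key_id="PLACEHOLDER_ACCESS_KEY",
    aws_secret_access_key="PLACEHOLDER_SECRET_KEY",
)

path = "test16g.bin"                        # 16 GB file of random data
size_mb = os.path.getsize(path) / 1024**2

start = time.monotonic()
s3.upload_file(path, "testbucket", "test16g.bin")  # boto3 defaults: 8 MB parts, 10 threads
elapsed = time.monotonic() - start

print(f"{size_mb:.0f} MB in {elapsed:.0f}s = {size_mb / elapsed:.1f} MB/s")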
Proxmox 7.3-4
3x nodes, dual E5-2670, 64 GB DDR3L-1333 ECC RDIMM
9x Samsung PM983 960 GB NVMe SSDs
VMs and PVE are on one dedicated 10 Gb network and Ceph is on another; iperf3 tested at 9.8 Gb/s all around
MTU 9000 on both the VM and Ceph networks
Tested with pve-firewall on and off
I used this wiki article for enabling RadosGW: https://pve.proxmox.com/wiki/User:Grin/Ceph_Object_Gateway
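If anyone wants to compare numbers, a variant of the same sketch that sweeps boto3's multipart concurrency is below (again, endpoint/bucket/keys are placeholders). I'd be curious whether more parallel part uploads change the picture for others:

# Same upload, but sweeping multipart concurrency to see whether a single
# HTTP stream is the limit. Endpoint, bucket, and credentials are placeholders.
import os
import time

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client(
    "s3",
    endpoint_url="http://192.168.10.21:7480",
    aws_access_key_id="PLACEHOLDER_ACCESS_KEY",
    aws_secret_access_key="PLACEHOLDER_SECRET_KEY",
)

path = "test16g.bin"
size_mb = os.path.getsize(path) / 1024**2

for threads in (1, 4, 8, 16):
    cfg = TransferConfig(
        multipart_chunksize=64 * 1024**2,  # 64 MB parts
        max_concurrency=threads,
    )
    start = time.monotonic()
    s3.upload_file(path, "testbucket", f"test16g-{threads}.bin", Config=cfg)
    elapsed = time.monotonic() - start
    print(f"{threads:>2} threads: {size_mb / elapsed:.1f} MB/s")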