Test, nested Ceph cluster SSD latency up to 2000ms

brucexx

Renowned Member
Mar 19, 2015
I configured a nested Proxmox Ceph cluster (3 nodes) for testing on my Proxmox server. The 3 node VMs have plenty of RAM, CPU power, etc. I used 3 good SAS SSDs, 1 per virtual machine.

Currently there is nothing else running on this Proxmox server. All networking works fine; I have 2 x 10Gbps ports and a capable 10 Gbps switch.

I passed an SSD through to each node VM using this instruction: https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM) and left the default settings.
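For reference, the passthrough I did follows the wiki's `qm set` approach. This is a sketch of the commands involved; the VM ID (101) and the disk-by-id path are placeholders for my actual values, and the iothread/SSD flags are optional tweaks, not something the wiki mandates:

```shell
# Find the stable by-id path of the physical SSD (example path, substitute your own)
ls -l /dev/disk/by-id/ | grep -i scsi

# Attach the whole disk to VM 101 as a SCSI device (path below is a placeholder)
qm set 101 -scsi1 /dev/disk/by-id/scsi-EXAMPLE_SERIAL

# Optional: with the VirtIO SCSI single controller, a dedicated iothread
# and the ssd/discard hints can reduce virtual-disk latency
qm set 101 -scsihw virtio-scsi-single
qm set 101 -scsi1 /dev/disk/by-id/scsi-EXAMPLE_SERIAL,iothread=1,ssd=1,discard=on

# Verify the disk shows up in the VM config
qm config 101 | grep scsi1
```

These need to run on the Proxmox host, so treat them as a template rather than copy-paste.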

The cluster works fine, but I am getting Ceph warnings that OSD latency goes as high as 2000ms. I checked the network latency between nodes and it is below 1ms (as expected).
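In case it helps with diagnosis, these are the commands I used (inside one of the nested nodes) to confirm where the latency shows up; all are standard Ceph CLI calls:

```shell
# Overall health, including the slow/latency warnings
ceph health detail

# Per-OSD commit and apply latency in ms; shows which OSD(s) are slow
ceph osd perf

# Watch cluster status live to see when latency spikes occur
ceph -s
```

`ceph osd perf` is what confirms the latency is on the OSD/disk side rather than the network, since node-to-node ping is under 1ms.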

Again, this is not a production environment, strictly for testing. Wondering if anybody can suggest what to adjust? Is this the only method of passing an SSD through? I noticed that under drive manufacturer it said QEMU and not the vendor of the SSD, so there might be some latency added there.
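To separate virtualization overhead from the disk itself, I could benchmark raw sync-write latency both on the host and inside a node VM. A possible fio test (assuming fio is installed; `/dev/sdb` is a placeholder for the passed-through SSD, and this write test destroys data on it):

```shell
# 4k synchronous random writes at queue depth 1: the worst case for an
# OSD journal, and roughly what Ceph latency warnings reflect.
# WARNING: writes directly to the device; only on a disposable test disk.
fio --name=osd-latency-test \
    --filename=/dev/sdb \
    --rw=randwrite --bs=4k \
    --iodepth=1 --numjobs=1 \
    --direct=1 --sync=1 \
    --runtime=30 --time_based \
    --ioengine=libaio \
    --group_reporting
```

If the in-VM latency numbers are far worse than on the host, the passthrough layer (cache mode, missing iothread, controller type) is the likely culprit rather than the SSD.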

Thank you