Poor performance Proxmox VE 7 + Ceph

Gotxo

Member
May 24, 2022
Hi,

I have a problem and I can't locate the cause.

My configuration is:
3 x Lenovo Tiny with 8th-gen i5
Each Tiny with 16 GB RAM (upgrade to 64 GB planned)
1 SSD for the PVE install
1 x 1 TB NVMe (planned for Ceph)
3 x 2.5 Gb USB LAN adapters (1 per Tiny)
Zyxel multi-gigabit L2 switch


My LAN config is:
1 Gb NIC for PVE admin
2.5 Gb USB NIC for VM/CT and cluster/Ceph

IP: 192.168.55.xx on each Tiny's 1 Gb admin NIC.
IP: 10.100.100.xx on each Tiny's 2.5 Gb NIC for cluster and Ceph.

I tested disk performance with rados bench and it doesn't go above 110 Mb!!
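I ran something like this (the pool name "testpool" and the 60-second runtime are just placeholders, not my exact values):

rados bench -p testpool 60 write --no-cleanup
rados bench -p testpool 60 seq
rados -p testpool cleanup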

What is the problem? All OSD disk traffic goes over the 2.5 Gb LAN, so that should be the minimum transfer rate.

(screenshot attached: msedge_5VYyqsPdAu.png)

Any idea??

Thx
 
The output is in MB, not Mb, so it's not that bad. It uses about 40% of your bandwidth, but your IOPS are VERY bad. What SSD and NVMe is this? Probably not suitable for Ceph.
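Rough math: 110 MB/s x 8 = 880 Mb/s, and 880 / 2500 ≈ 0.35, so roughly 35-40% of the 2.5 Gb/s link.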

The NVMe drives are basic ones; I'm thinking of replacing them with a 1 TB Samsung 980 Pro.

But I believe the Ceph problem is not down to the disks, it is something else.
 
If you want good sync IOPS, get an NVMe that is datacenter/enterprise grade with power-loss protection.
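You can check how your current NVMe handles sync writes with a quick fio run, roughly like this (just a sketch: it writes a test file under /mnt/test, adjust the path to a filesystem on the drive you want to test; pointing it at a raw device instead would destroy data):

fio --name=sync-write-test --filename=/mnt/test/fio-testfile --size=1G --rw=write --bs=4k --direct=1 --sync=1 --numjobs=1 --iodepth=1 --runtime=60 --time_based

Consumer drives without power-loss protection typically show only a few hundred to a few thousand sync-write IOPS here, while enterprise drives with PLP show tens of thousands.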