Hi,
we set up a new environment with 3 nodes running Debian Stretch, Proxmox 5.1 and Ceph Luminous.
Each node has 4 SSDs used as OSDs, so 12 OSDs in total.
Following pg-calc, we set pg_num to 512 on the Ceph pool.
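(For reference, that value follows from the usual pg-calc rule of thumb, assuming the default replica size of 3: 12 OSDs x 100 target PGs per OSD / 3 = 400, rounded up to the next power of two gives 512.)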
The Ceph network is connected via InfiniBand. If we install a VM on the Ceph storage and run dd inside it, we only get around 175-200 MB/s.
rados bench also reports around 170-200 MB/s.
iperf shows a bandwidth of around 6 Gbit/s.
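For reference, the tests were roughly along these lines (pool name and target IP are placeholders, the exact flags we used may have differed slightly):

  # inside the VM, direct I/O to avoid measuring the page cache
  dd if=/dev/zero of=/root/testfile bs=1M count=4096 oflag=direct

  # on a Proxmox node, 60 second write benchmark against the Ceph pool
  rados bench -p <pool> 60 write --no-cleanup

  # raw IPoIB throughput between two nodes
  iperf -c <other-node-ib-ip>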
Any links or hints on what we can do to get better performance with InfiniBand, Stretch and Proxmox 5.1?
Regards,
Volker