Hello,
We have separate Ceph and Proxmox clusters (separate server nodes). I want to know whether the performance we are getting is normal; my expectation was that it could be much better with the hardware we are using.
So is there any way we can improve with configuration changes?
The performance we get from inside virtual machines is roughly:
Sequential: read 642.6 MB/s, write 459.8 MB/s
4K single thread: read 4.342 MB/s, write 15.45 MB/s
At 4K I only get 406 write IOPS and 835 read IOPS.
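For context, the 4K single-thread numbers above correspond to a benchmark along these lines. This is a sketch of an equivalent fio job file; fio as the tool, the device path, and the runtime are assumptions, adjust them to the actual test disk inside the VM:

```ini
; Hypothetical fio job approximating the 4K queue-depth-1 test in a VM.
; /dev/vdb is a placeholder for the RBD-backed test disk.
[global]
ioengine=libaio
direct=1            ; bypass the guest page cache so Ceph/RBD latency is visible
filename=/dev/vdb
runtime=60
time_based=1

[4k-randread-qd1]
rw=randread
bs=4k
iodepth=1
numjobs=1

[4k-randwrite-qd1]
stonewall           ; start only after the read job finishes
rw=randwrite
bs=4k
iodepth=1
numjobs=1
```

With `iodepth=1` and `numjobs=1` the result is dominated by per-request round-trip latency, which is why the 4K figures look so much lower than the sequential throughput.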
The hardware we are using:
4 x OSD nodes, per node:
- 96 GB RAM
- 2 x 6-core CPU (with HT), 2.6 GHz
- 6 x SM863 960 GB (single BlueStore OSD per SSD)
- 2 x 10Gb SFP+ (1 x 10Gb for the storage network and 1 x 10Gb for replication)
3 x monitor nodes, per node:
- 4 GB RAM
- Dual-core CPU (with HT)
- Single 120 GB Intel enterprise SSD
- 2 x 1 Gb network (active/backup)
Replication/size: 2
Ceph Version: 12.2.8
Jumbo Frames enabled
Ceph logging options disabled in ceph.conf (this improved things a little)
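The logging changes mentioned above were along these lines. This is an illustrative ceph.conf excerpt; the exact subsystems and values we set may differ:

```ini
# ceph.conf excerpt: reduce debug logging overhead (illustrative values)
[global]
debug ms = 0/0
debug osd = 0/0
debug auth = 0/0
debug filestore = 0/0
debug bluestore = 0/0
debug rocksdb = 0/0
```

The `0/0` form sets both the normal log level and the in-memory log level to zero for that subsystem.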
All Proxmox nodes are connected with 1 x 10Gb SFP+.
Is there any configuration or setting we can change to improve performance, or is this the maximum we can get with this hardware? The 4K reads/writes in particular are slow.
I have also been wondering whether it would help to add 2 OSD nodes, each with a fast NVMe SSD, and use them as a cache pool in front of the normal SSD pool. Or would that make it even slower?
Thank you in advance,
Kind regards,
Sander