Hello,
My hyperconverged Proxmox cluster with Ceph (19.2.1) has 3 servers:
All have Threadripper Pro CPUs (Zen 3 / Zen 4, 16-32 cores), 256 GB RAM, and for now 1 NVMe OSD (Kioxia CM7-R) per server.
The frontend network has multiple redundant 10 Gbit NICs for VMs and clients.
The backend network is used only for Ceph: 100 Gbit DAC, directly attached without a switch, running in broadcast mode.
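For context, the backend mesh on each node is set up roughly like the Proxmox full-mesh broadcast example, something like this (NIC names and addresses here are placeholders, not my exact config):

  auto bond0
  iface bond0 inet static
      bond-slaves enp65s0f0 enp65s0f1
      bond-miimon 100
      bond-mode broadcast
      address 10.10.10.1/24
      mtu 9000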
I have tested the Ceph RBD pool with fio and get good read speeds, but writes are slow.
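For reference, the write test I ran looked roughly like this (pool and image names are placeholders, not my exact command), plus the same with --rw=randread for the read side:

  fio --name=rbd-randwrite --ioengine=rbd --clientname=admin \
      --pool=rbdpool --rbdname=fio-test \
      --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 \
      --runtime=60 --time_based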
Backing up the VMs to PBS (10 Gbit link, NVMe storage) takes a long time (the VMs are large, 1 TB and 3 TB). While a backup runs, Ceph shows a maximum read speed of about 900 MB/s in the Proxmox GUI, which I think is the limit of the 10 Gbit frontend connection the VMs use, and it makes access to the VMs slow. The PBS server itself shows a maximum transfer rate of about 120 MB/s (roughly 1 Gbit/s), which is only 10% of the physical link speed on the PBS side. Everything is set to MTU 9000 and runs the newest firmware.
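This is roughly how I checked the MTU 9000 path and the raw throughput to the PBS host (host name is a placeholder):

  # 8972 = 9000 minus 28 bytes IP/ICMP header; -M do forbids fragmentation
  ping -M do -s 8972 -c 3 pbs.example.lan
  # raw TCP throughput: iperf3 -s on the PBS host, then on a PVE node:
  iperf3 -c pbs.example.lan -P 4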
So I plan to buy some 100 Gbit cards and a MikroTik CRS520-4XS-16XQ-RW switch, which can be connected through a 25 Gbit cable to the 10 Gbit Ubiquiti switch that we currently use for the whole network.
My questions are:
If the VMs are running in RAM and are connected to the 100 Gbit frontend network through the 100 Gbit switch, they would communicate with each other at 100 Gbit, right?
And if I connect the PBS to the same switch with a 10 Gbit link, I will get 10 Gbit transfer speed, while Ceph reads and writes would be better with a 100 Gbit frontend network, right?
Or is it better to first invest in 3 more OSDs (NVMe Kioxia CM7-R) for a total of 6 OSDs in the cluster and get much better read and write speeds (the storage capacity would then also expand)?
Should I first upgrade the frontend to 100 Gbit, or buy more OSDs, to get faster Ceph reads and writes and faster backups to PBS?
Thanks