Hello,
Today I tested Windows Server 2022 VM performance on Proxmox 8.2.2 with Ceph 18.2.2, using CrystalDiskMark:
https://crystalmark.info/en/software/crystaldiskinfo/
7 Ceph nodes in total. Spec per node:
1 x 1.92 TB Samsung PM863a SATA SSD (6.0 Gbps, 1.3 DWPD)
9 OSDs x 8 TB Seagate ST8000VN0022 IronWolf NAS SATA 6 Gb/s HDDs; each OSD uses a 96 GiB slice of the 1.92 TB PM863a as its WAL/DB device
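For reference, an OSD with its WAL/DB on a shared SSD like this can be created on Proxmox with `pveceph` (device paths below are placeholders, adjust to your host; I believe `--db_dev`/`--db_size` are the current option names, check `pveceph osd create --help` on your version):

```
# Create one OSD on an 8 TB data HDD, placing its RocksDB/WAL
# on a 96 GiB slice of the shared PM863a SSD.
pveceph osd create /dev/sdc --db_dev /dev/sdb --db_size 96
```

Repeating this for all nine HDDs carves the 1.92 TB SSD into nine 96 GiB DB/WAL slices, which matches the layout described above.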
Each compute and Ceph node has 4x10G NICs: 2x10G for the Ceph public network, and 2x10G in a layer3+4 LACP bond, uplinked to a pair of stacked Huawei CloudEngine switches, for the cluster network.
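As a sketch, the cluster-network bond would look roughly like this in `/etc/network/interfaces` on each node (interface names and the address are assumptions, not taken from the actual setup):

```
auto bond1
iface bond1 inet static
    address 10.10.10.11/24
    bond-slaves enp65s0f2 enp65s0f3
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
# Ceph cluster network (LACP to the stacked CloudEngine pair)
```

The layer3+4 hash policy spreads flows across both links by IP and port, which helps Ceph's many OSD-to-OSD connections use more than one 10G link.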
MTU size 1500 (default, no jumbo frames)
Below is the CrystalDiskMark result on the Windows Server 2022 VM while 2 OSDs were in the active+remapped+backfilling state:
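For context, the backfill state can be confirmed from any Ceph node while the benchmark runs, using the standard ceph CLI:

```
# Overall health plus recovery/backfill progress
ceph -s
# Per-PG state summary; look for active+remapped+backfilling
ceph pg stat
# Backfill concurrency limit, which influences client I/O impact during the test
ceph config get osd osd_max_backfills
```

Benchmarking during backfill understates steady-state performance, since recovery traffic competes with client I/O on the same OSDs and cluster network.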