Hello,
I have a very strange issue.
I've tried different configurations, but the result was always the same.
My HW and configuration:
6-node cluster; each node:
HPE DL360 Gen10, dual Xeon 42100
384 GB of RAM
2 x 40 Gbit ports
OS installed on a RAID1:
2x Samsung PM1643a 960GB 2.5" SSD SAS 12G DWPD 1 MZILT960HBHQ-00007
For the data we use Ceph: on each server we have 1 or 2
Samsung PM9A3 7.68TB 2.5" U.2 SSD PCIe 4.0 x4 MZQL27T6HBLA-00W07 DWPD 1
for a total of 6 or 12 disks per cluster.
On one cluster we have tried to use:
1 x Samsung PM1735 12.800GB HHHL SSD PCIe 4.0 x8 DWPD 3 MZPLJ12THALA-00007
per node, and created a Ceph pool with these NVMe disks.
This is the hardware configuration, basically.
The issue is on Windows. VM configuration:
No ballooning
NUMA enabled
VirtIO SCSI single
Disks with:
Discard
IO thread
SSD emulation
Async IO: native
No drive cache
2 disks of 40 GB, same configuration.
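For reference, the settings above would correspond roughly to an excerpt like this in the VM's config file under /etc/pve/qemu-server/ (the VM ID, storage name "ceph-pool", and disk names here are placeholders, not taken from our actual setup):

```
# Hypothetical excerpt from /etc/pve/qemu-server/<vmid>.conf
balloon: 0
numa: 1
scsihw: virtio-scsi-single
scsi0: ceph-pool:vm-100-disk-0,discard=on,iothread=1,ssd=1,aio=native,cache=none,size=40G
scsi1: ceph-pool:vm-100-disk-1,discard=on,iothread=1,ssd=1,aio=native,cache=none,size=40G
```

Both disks carry identical options, which is why the performance difference between them is so surprising.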
If I run
winsat disk -drive C
I get this result:

On drive E:

I get the same result even with an 80 GB disk split into 2 partitions of 40 GB:
the OS partition is SLOWER than the other partition.
This is the best configuration we have found for performance.
Why does the OS disk have this worse performance?
We have had this issue from PVE 7 up to the latest version, PVE 8.2.4,
with VirtIO drivers and QEMU guest agent from 248,
on a fresh OS installation
or from our template.
Same result also with a VM migrated from VMware, where there was no such big difference.
Has anyone ever seen this difference?
Any help is appreciated.
Thanks.