Issues with Windows Server guest disk speed

a.davanzo

Hello,
I have a very strange issue.

I've tried different configurations but the result is always the same.
My HW and configuration:
6 node cluster
each node:
HPE DL360 Gen10 dual Xeon 42100
384GB of RAM
2 x 40Gbit ports
OS installed on a RAID1:
2x Samsung PM1643a 960GB 2.5" SSD SAS 12G DWPD 1 MZILT960HBHQ-00007

For the data we use Ceph.
On each server we have 1 or 2
Samsung PM9A3 7.68TB 2.5" U.2 SSD PCIe 4.0 x4 MZQL27T6HBLA-00W07 DWPD 1
for a total of 6 or 12 disks per cluster.

On one cluster we have tried using:
1 x Samsung PM1735 12.800GB HHHL SSD PCIe 4.0 x8 DWPD 3 MZPLJ12THALA-00007
per node and created a Ceph storage with these NVMe disks.

This is the hardware configuration, basically.

The issue is on Windows.

No ballooning
NUMA enabled
VirtIO SCSI single
disks with:
discard
IO thread
SSD emulation
Async IO: native
no drive cache.

2 disks of 40GB, same configuration.

If I run
winsat disk -drive C
I get this result:
[screenshot: winsat disk results for drive C]

on drive E:
[screenshot: winsat disk results for drive E]


I get the same result if I use an 80GB disk with 2 partitions of 40GB each:
the OS partition is SLOWER than the other partition.

This is the best configuration we have found for performance.
Why does the OS disk have this worse performance?
We have had this issue from PVE 7 up to the latest version, PVE 8.2.4,
with the virtio drivers and QEMU guest agent from the 0.1.248 ISO,
on a fresh OS installation
or from our template.
Same result also with a VM migrated from VMware, where there is no such big difference.
Has anyone else seen this difference?
Any help is appreciated.
Thanks.
 
No AD role installed, it's a clean setup.
The cache is at disk level: why does the same disk, with different partitions, have different performance? (the disk is the same)
I never checked the disk cache in the OS.
But is it possible that it was disabled for the OS disk and enabled for the other disk by default?
 
But is it possible that it was disabled for the OS disk and enabled for the other disk by default?
Yes, check Device Manager.
Edit: are both disks the same type? (because VirtIO SCSI is different from VirtIO Block)
 
Typically this occurs when a role is active which deactivates the write cache. The best known for this is AD, but there are also other roles that cause this. Apparently the impact is more noticeable with KVM/QEMU than with VMware.

Why does nobody notice this? Because probably nobody seriously benchmarks the system disk.

User data, databases and benchmarks are normally kept on extra disks.
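If it helps, the write-cache state can also be read out with PowerShell instead of clicking through Device Manager for every disk. A minimal sketch, assuming the Storage module's Get-StorageAdvancedProperty cmdlet is available in the guest (Windows Server 2016 or newer; the virtio driver may not report every property):

Code:
# List the write-cache state Windows reports for each physical disk;
# compare the OS disk with the additional disk.
Get-PhysicalDisk | Sort-Object DeviceId | ForEach-Object {
    $adv = Get-StorageAdvancedProperty -PhysicalDisk $_
    "{0} (disk {1}): CacheEnabled={2}, PowerProtected={3}" -f $_.FriendlyName, $_.DeviceId, $adv.IsDeviceCacheEnabled, $adv.IsPowerProtected
}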
 
Hello,
no extra role installed.
Fresh installation, only drivers and so on.
OK about the cache, but I don't understand...
the cache is on the disk.
The same disk with 2 partitions,
OS and data:
why is the OS partition slower than the data partition?

The disk is the same with 2 partitions, and the cache is enabled at disk level, not partition level.
Anyway, this is the conf:
OS:
[screenshot: OS disk configuration]

Additional disk

[screenshot: additional disk configuration]

Roles installed:
[screenshots: installed Windows roles and features]



Speed result:
[screenshots: winsat speed results for the OS disk and the additional disk]



cat /etc/pve/qemu-server/901.conf
agent: 1,freeze-fs-on-backup=0,fstrim_cloned_disks=1
balloon: 0
boot: order=scsi0;sata0
cores: 4
cpu: Cascadelake-Server-noTSX
hotplug: disk,network,memory
machine: pc-i440fx-8.2
memory: 4096
meta: creation-qemu=7.1.0,ctime=1669384861
name: WIN2022-TMPL
net0: virtio=CA:38:3B:CA:28:80,bridge=vmbr0,tag=209
numa: 1
ostype: win11
sata0: none,media=cdrom
scsi0: VM-DISK:vm-901-disk-0,aio=native,discard=on,iothread=1,size=40G,ssd=1
scsi1: VM-DISK:vm-901-disk-1,aio=native,discard=on,iothread=1,size=40G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=70889aa8-632c-40b0-836f-5f8d1ba61b8f
sockets: 1
vmgenid: 1f86a454-d724-4a6e-8dc2-bd413168f2e2
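For reference, these disk options can also be adjusted per disk from the PVE host with qm set. A sketch only, reusing the volume name from the config above; cache=writeback is just an example of a parameter to A/B test against the default (no cache line, i.e. cache=none), not a recommendation:

Code:
qm set 901 --scsi1 VM-DISK:vm-901-disk-1,aio=native,discard=on,iothread=1,ssd=1,cache=writeback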
 
If the performance is different on different partitions of the same disk, then it is not a Proxmox problem but a Windows problem.
 
Reproduced here, nice catch (expected?)
on Win10 & Win2022 (virtio scsi / io thread / aio=default io_uring / cache=default none)

winsat disk -drive C (system disk)
Code:
> Disk  Random 16.0 Read                       276.28 MB/s          8.0
> Disk  Sequential 64.0 Read                   3091.58 MB/s          9.2
> Disk  Sequential 64.0 Write                  599.96 MB/s          8.2
> Average read time with sequential writes      0.274 ms          8.5
> Latency: 95th percentile                      1.870 ms          7.8
> Latency: Maximum                              4.046 ms          8.6
> Average read time with random writes          0.493 ms          8.7
> Total run time                                00:00:04.53

winsat disk -drive D (part2 on system disk)
Code:
> Disk  Random 16.0 Read                       1686.09 MB/s          9.2
> Disk  Sequential 64.0 Read                   6224.73 MB/s          9.9
> Disk  Sequential 64.0 Write                  600.60 MB/s          8.2
> Average read time with sequential writes      0.300 ms          8.4
> Latency: 95th percentile                      1.694 ms          7.9
> Latency: Maximum                              16.066 ms          7.9
> Average read time with random writes          0.448 ms          8.7
> Total run time                                00:00:03.19
 
If the performance is different on different partitions of the same disk, then it is not a Proxmox problem but a Windows problem.
Thanks Falk,
but only with Proxmox?
Do you have the same issue in your environment?
With VMware I don't have this difference.


What kind of problem?

What do you think about that?
From my side it is very strange.
 
I think this is a driver issue, but no one is interested in high-performance OS disks.

Have you compared results from vSphere on the same hardware?
Can you try VirtIO 0.208 drivers?
 
Same with virtio scsi driver version 0.1.208.
Same with virtio BLK.
The slow winsat Disk Random 16.0 Read result occurs only on the Windows system boot partition (needs tests with fio & CDM).
Another partition in the same virtual disk gets better results, like an additional virtual disk.
A vDisk as SATA has the same slow results for C and the other partition (expected, as SATA is slower than virtio).
Tried with VMware PVSCSI: same slow results for C and the other partition (a little better than SATA).

Edit: will test with Ubuntu
 
Hi,
the hardware is the same.
We are converting our VMware servers to Proxmox.
The difference is the storage.
On VMware we have a cluster with 4 nodes.
For the storage we use an MSA2040/2050 SAS, directly connected to each node of the cluster,
1 MSA per cluster.

With Proxmox we cannot use the MSA 2040/2050, so we use Ceph.
We took the disks from the MSA and inserted them into the servers to create a Ceph storage,
but the controllers and server hardware are the same.
Do you suggest downgrading to the 208 driver, from our 248?


About this:
but no one is interested in high-performance OS disks.

That is not correct.
Many customers use the same disk to install all their software.
The paging file is also on the OS disk.
If you use MSSQL Server or another DB server on the same disk as the OS, you see the bad performance.

Even if you have the DB on a secondary disk, you still get worse performance. For the moment we move this kind of server to our cluster with NVMe disks, where the issue persists but, thanks to the disks, it is not as bad as here with the SSDs.
 
Same with virtio scsi driver version 0.1.208.
Same with virtio BLK.
The slow winsat Disk Random 16.0 Read result occurs only on the Windows system boot partition (needs tests with fio & CDM).
Another partition in the same virtual disk gets better results, like an additional virtual disk.
A vDisk as SATA has the same slow results for C and the other partition (expected, as SATA is slower than virtio).
Tried with VMware PVSCSI: same slow results for C and the other partition (a little better than SATA).

Edit: will test with Ubuntu
No issues with Linux OS:
Ubuntu 18-22,
CentOS, etc.
All with good performance.
Only Windows.
 
With Proxmox we cannot use the MSA 2040/2050, so we use Ceph.
Indeed, you could use PVE with an MSA20x0, but you should avoid using ZFS on it, as those are external HW RAIDs. Anyway, it would be cool to have the MSA connected to 4 PVE nodes to serve the volumes via NFS to the 6 PVE nodes. That way, in case of any HW error, you would still be able to keep your VM/LXC environment running.
 
Indeed, you could use PVE with an MSA20x0, but you should avoid using ZFS on it, as those are external HW RAIDs. Anyway, it would be cool to have the MSA connected to 4 PVE nodes to serve the volumes via NFS to the 6 PVE nodes. That way, in case of any HW error, you would still be able to keep your VM/LXC environment running.
My MSA is SAS, not SAN.
I can only expose the disks, and they need to be formatted from PVE.
I've already checked this possibility, but I lose some functionality that I have with Ceph:
with that kind of shared storage I don't have snapshots, or thin provisioning, or other features.
If I cannot do a snapshot, I cannot do a backup, etc.
I would be happy to continue using my MSA HA storage, but I cannot use it that way...
I cannot replicate the VMware environment as-is on PVE...
 
That is not correct.
Many customers use the same disk to install all their software.
The paging file is also on the OS disk.
You never do anything like that.
If you use MSSQL Server or another DB server on the same disk as the OS, you see the bad performance.

Even if you have the DB on a secondary disk, you still get worse performance. For the moment we move this kind of server to our cluster with NVMe disks, where the issue persists but, thanks to the disks, it is not as bad as here with the SSDs.
This sounds more like a general performance problem.
If the performance is not good despite NVMe, this is usually due to the network.
With NVMe and Ceph you should have at least 2x25 GBit or better 100 GBit.
Since you are still using your ESXi hardware, have you upgraded the network?
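One way to separate raw Ceph/network throughput from the in-guest effect would be a baseline benchmark run directly on a PVE node. A sketch only; replace the placeholder with the RBD pool that actually backs the VM disks:

Code:
# Raw Ceph pool throughput from a PVE node, independent of any guest.
rados bench -p <rbd-pool> 30 write -b 4M -t 16 --no-cleanup
rados bench -p <rbd-pool> 30 rand -t 16
rados -p <rbd-pool> cleanup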
 
My network is 2x 40Gbit
per server.

But it is not a network problem, because the other partition or disk
with the same configuration performs more than 2 times better...

The difference is not from 10 to 20...
it is from 200 to 1000 on random,
and from 2000 to 4000 on sequential...
 
Indeed, with winsat.
Have you tried with fio?
The impact seems smaller with CDM.
fio for Windows?
Do you have a link for the download?
And which command did you use?
Just to run the same test.

winsat disk is a good tool from Windows; it's very realistic, as I have also seen in the past.

With fio under Linux we don't have this issue.
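For what it's worth, fio does have Windows builds (installers are linked from the fio project page, https://github.com/axboe/fio). A sketch of a command that roughly matches winsat's 16K random read; the file name, size and runtime here are just placeholders, not the exact parameters used above:

Code:
fio --name=randread-c --filename=C\:\fio-test.bin --size=4G --rw=randread --bs=16k --iodepth=8 --direct=1 --ioengine=windowsaio --runtime=60 --time_based

Run the same command again with the filename pointing at the second partition (for example E\:\fio-test.bin) and compare the two results.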
 
