Read/write speed in Proxmox VE on the same host. Can anyone help me understand?

Hello Proxmox users,

I noticed a strange thing, or maybe this is expected behavior, I don't know. To make it short...
I have Proxmox VE installed on an HPE ProLiant DL380 Gen8: 2x CPUs at 2.49 GHz, hardware RAID 5 with 6x Samsung Enterprise PM893 480GB SSDs, 256GB RAM DDR4-3200.
There are only two VMs running on it.
One is a clean Windows Server 2022 without any services installed yet, and the other one is Windows Server 2019 with MS SQL 2019 working with simple databases.
Both have a very similar hardware configuration in Proxmox VE, and both have their hard drives configured as SCSI.
Please take a look at the screenshots I've attached below. Is there any reason these benchmarks are so different?
I've checked them many times under different server loads, and the benchmark gives me pretty much the same result every time.

Here is Windows 2022
[attached screenshot: CrystalDiskMark results, Windows 2022]

And there is Windows 2019
[attached screenshot: CrystalDiskMark results, Windows 2019]
 
Please post the config of each VM, i.e. the output of /etc/pve/qemu-server/vmid.conf, in [CODE][/CODE] tags.
 
First, CrystalDiskMark is only doing cached reads/writes. So you are basically benchmarking your RAM and not your disks. A better tool would be "fio".
Did you enable NUMA for the VMs, so a VM doesn't get slowed down because it has to access resources that sit on the other CPU, the one the VM isn't running on?
Maybe the RAID card is connected to the PCIe lanes of CPU 1 and the Win2019 VM is running on CPU 2, or something similar?
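For reference, something along these lines gives numbers that bypass the cache (the test file path and sizes are just placeholders to adapt; inside a Windows guest you would use the windowsaio ioengine instead of libaio):
Code:
# random 4k read/write mix, direct I/O, runs for 60 seconds
fio --name=randrw-test --filename=/path/to/testfile --size=4G \
    --rw=randrw --bs=4k --iodepth=32 --numjobs=4 \
    --direct=1 --ioengine=libaio --runtime=60 --time_based --group_reporting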
 
Please post the config of each VM, i.e. the output of /etc/pve/qemu-server/vmid.conf, in [CODE][/CODE] tags.

Windows 2019
Code:
agent: 1
boot: order=scsi0;ide2
cores: 8
ide2: none,media=cdrom
machine: pc-i440fx-7.2
memory: 98304
meta: creation-qemu=7.2.0,ctime=1685645435
name: xprimer.retechxl.local
net0: e1000=DA:78:80:BE:C1:A1,bridge=vmbr0,firewall=1
net1: e1000=32:5A:7F:0A:02:20,bridge=vmbr1,firewall=1
numa: 0
onboot: 1
ostype: win10
scsi0: N2-RAID-SSD:vm-101-disk-0,format=raw,iothread=1,size=128G
scsi1: N2-RAID-SSD:vm-101-disk-1,format=raw,iothread=1,size=128G
scsihw: virtio-scsi-single
smbios1: uuid=45e3d43b-12a3-45d3-8387-4cd61572830f
sockets: 2
startup: order=2
vmgenid: 46d6aade-3cdf-47d4-8413-c79eb89d80d0
Windows 2022
Code:
agent: 1
bios: ovmf
boot: order=scsi0
cores: 8
cpu: x86-64-v2-AES
efidisk0: N2-RAID-SSD:vm-100-disk-0,efitype=4m,format=raw,pre-enrolled-keys=1,size=528K
machine: pc-q35-8.1
memory: 16384
meta: creation-qemu=8.1.5,ctime=1709286845
name: cdn.retechxl.local
net0: e1000=BC:24:11:2E:C9:E7,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: win11
scsi0: N2-RAID-SSD:vm-100-disk-1,iothread=1,size=64G
scsi1: N2-RAID-SSD:vm-100-disk-2,format=raw,iothread=1,size=320G
scsihw: virtio-scsi-single
smbios1: uuid=12c73f5e-5a9c-41e0-8e51-ffbf97b99164
sockets: 2
startup: order=2
tpmstate0: N2-RAID-SSD:vm-100-disk-3,size=4M,version=v2.0
vmgenid: 73424b66-7706-414f-abf4-05a4292fc4d6
 
First, CrystalDiskMark is only doing cached reads/writes. So you are basically benchmarking your RAM and not your disks. A better tool would be "fio".
Did you enable NUMA for the VMs, so a VM doesn't get slowed down because it has to access resources that sit on the other CPU, the one the VM isn't running on?
Maybe the RAID card is connected to the PCIe lanes of CPU 1 and the Win2019 VM is running on CPU 2, or something similar?
Is there any way to enable NUMA from the GUI?
Also, how do I know which machine is working on which CPU if both are configured to use both CPUs?
 
Is there any way to enable NUMA from the GUI?
Yes, it's in the CPU options. To be clear, you're not "enabling" NUMA; you're instructing KVM to take core/RAM affinity into consideration when assigning resources to the VM.
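For completeness, the same thing can be done from the host CLI, taking VMID 101 from the config posted above as an example:
Code:
qm set 101 --numa 1
# and, per the advice below, keep it to a single socket
qm set 101 --sockets 1 --cores 8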

Also, how do I know which machine is working on which CPU if both are configured to use both CPUs?
Simple question, complex answer. There is some discussion on this subject in the documentation here: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_virtual_machines_settings
but the most practical way to answer is:
- don't assign more cores to a VM than are available on a single socket.
- don't assign more than half the RAM present (on a two-socket system) to a VM (and to be safe, less than half).
- make sure the NUMA checkbox is checked, which should prevent cross-bus memory from being assigned.

Lastly, don't assign dual sockets to a VM. At best it does you no good; at worst, it can make a mess of core and RAM pinning. Why is it an option at all then? Because there are use cases where it's desirable to simulate multiple sockets. This is not your use case ;)

As for your specific case? If you have 8+ core CPUs, I'm kinda stumped; if you have fewer than 8 cores per socket, reduce your configs and try again. You might also want to check the version of the virtio drivers on each.
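To get a rough idea of where things actually sit, a few host-side commands help (the PCI address 0000:03:00.0 and VMID 101 below are placeholders you'd have to adapt; numactl may need to be installed first):
Code:
# NUMA layout of the host
numactl --hardware

# which NUMA node the RAID controller hangs off (find its PCI address with lspci first)
lspci | grep -i raid
cat /sys/bus/pci/devices/0000:03:00.0/numa_node

# which host CPUs a VM's QEMU process may run on / last ran on
qm list                                            # shows the PID of each running VM
taskset -cp $(cat /var/run/qemu-server/101.pid)
ps -o pid,psr,comm -p $(cat /var/run/qemu-server/101.pid)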
 
Isn't the Windows 2019 .conf missing the cpu line?

Edit: also, the 2022 .conf doesn't have format=raw on the C disk.
 
If 2019 is an Active Directory Domain Controller, the Windows write cache for C: is disabled.
 
What versions of the virtio drivers are being used within both VMs? My hunch is that your different read/write speeds stem from differences within the VMs themselves, not the HV. Maybe try spinning up a "brand new" "Windows 2019"; latest drivers etc. & see what you get.

Edit: also, the 2022 .conf doesn't have format=raw on the C disk.
I believe the format will be format=raw by default anyway. The OP could check this on the N2-RAID-SSD storage itself.
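If in doubt, the storage contents can be listed from the host; the Format column there shows the per-volume format (N2-RAID-SSD taken from the configs above):
Code:
pvesm list N2-RAID-SSD
# check the Format column for vm-100-disk-1, vm-101-disk-0, etc.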

The biggest difference (from an HV perspective) between the VMs is the BIOS & machine types. But that's unlikely to be the cause.

Isn't the Windows 2019 .conf missing the cpu line?
This would suggest that the Windows 2019 VM was created in an older PVE version. As above, maybe the OP should try creating a new Windows 2019 VM & test.
 
Lastly, don't assign dual sockets to a VM. At best it does you no good; at worst, it can make a mess of core and RAM pinning. Why is it an option at all then? Because there are use cases where it's desirable to simulate multiple sockets. This is not your use case ;)

In the guide it states, "If the NUMA option is used, it is recommended to set the number of sockets to the number of nodes of the host system." I am confused; I thought this meant setting the VMs to 2 sockets if you have 2 sockets on the motherboard?
 
In the guide it states, "If the NUMA option is used, it is recommended to set the number of sockets to the number of nodes of the host system." I am confused; I thought this meant setting the VMs to 2 sockets if you have 2 sockets on the motherboard?
True, IF you are assigning more than a single socket's worth of cores, which I addressed in my admonition. In practice, if you intend to have VMs of such size without any HA, you're probably better off just putting them on metal.
 
OK, for best performance for a Windows 10 VM (my workstation) on a 2-socket motherboard with 8-core CPUs: do I enable NUMA and allocate 4 cores and 2 sockets, or 1 socket and 8 cores? It's a home lab with just me as the user. The server is lightly used, as I have a bunch of VMs which I use for testing. It is not really under any real load.
Thanks
 
OK, for best performance for a Windows 10 VM (my workstation) on a 2-socket motherboard with 8-core CPUs.
It's a home lab with just me as the user. The server is lightly used
Honestly, if that is your use case, why are you wasting so much effort on it? Single socket, 4 cores, just the necessary minimum of RAM (in an ideal world you'd enable ballooning, but this has its own problems).
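As a rough sketch with a hypothetical VMID 200 and 8 GiB of RAM, that would be something like:
Code:
qm set 200 --sockets 1 --cores 4 --numa 1
qm set 200 --memory 8192
# optionally allow ballooning down to 4 GiB (with the caveats mentioned above)
qm set 200 --balloon 4096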
 
What is your Proxmox VE base filesystem, and do you use a second filesystem on top of the Proxmox VE base filesystem?
Are the KVM drivers installed?
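A few host-side commands would answer the filesystem part (nothing VM-specific assumed):
Code:
# root filesystem of the PVE host
findmnt /
# configured storages and what they are backed by
cat /etc/pve/storage.cfg
pvesm status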
 
What versions of the virtio drivers are being used within both VMs? My hunch is that your different read/write speeds stem from differences within the VMs themselves, not the HV. Maybe try spinning up a "brand new" "Windows 2019"; latest drivers etc. & see what you get.


I believe the format will be format=raw by default anyway. The OP could check this on the N2-RAID-SSD storage itself.

The biggest difference (from an HV perspective) between the VMs is the BIOS & machine types. But that's unlikely to be the cause.


This would suggest that the Windows 2019 VM was created in an older PVE version. As above, maybe the OP should try creating a new Windows 2019 VM & test.
That may be an issue indeed; on one machine I have the latest drivers, on the other one 215.
 
