VE 8.4.0 - SQL2022 VM - IO sluggish, RDP connection very laggy

Aimovoria

New Member
Oct 6, 2025
Hi,

First of all, thank you very much for any help I can get, because I'm completely new to the Proxmox VE world.
We bought a server with these specs:
Code:
DELL R730 8x2,5''
2x E5-2699 v3 18x 2.3GHz
128GB DDR4 ECC
H730P Mini SAS 12Gbit/s 2GB
RAID 10 (hardware set) over
 - 6x 3.84TB SAS SSD HGST/WD HUSTR7638ASS20X 12Gb
RAID 1 (hardware set) over
 - 2x Intel D3-S4510 480GB for virtualization purposes
Dell Intel X520-I350 2x 10Gbit SFP+ and 2x 1Gbit RJ45

The RJ45 ports are for iDRAC; the SFP+ ports are plugged into a D-Link DGS-1510-52X via 2x 10GbE DAC.
The server is plugged into a 3000VA UPS (a Polish brand, so I won't advertise unnecessarily :P)

iDRAC screenshot of the hardware RAID 10 setup: raid10.png
CPU configuration:
cpuset.png


I have successfully migrated a few Linux VMs from Hyper-V to Proxmox (I had problems with an old CentOS 5.9 VM running Asterisk and the Elastix GUI for VoIP/PABX because of its VHD file, but fortunately, after many posts here [many thanks], I managed it). A small side question here: for, say, an Ubuntu VM running WireGuard, are these specs set properly?
UbuntuConfig.png


But to the main topic: I'm having huge problems with IO speed for DB queries on two Windows Server 2022 Standard VMs, and RDP to them is very sluggish. What I did was:
- set up the VMs with the values from the pictures; in short:
VM: q35 9.2 / 32GB / 1 socket, 10 cores with NUMA enabled and CPU type: host / UEFI / VirtIO-GPU / VirtIO SCSI single, with the scsiX values as in the picture below (roughly the config sketched after this list): vmoptima.png
cpuspecs.jpg

- installed the virtio-win-0.1.271 driver package in both VMs
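Roughly, those settings translate to a qm config like this (a sketch only; the VM ID and storage name are placeholders, the exact values are in the screenshots):
Code:
agent: 1
bios: ovmf
cores: 10
cpu: host
machine: pc-q35-9.2
memory: 32768
numa: 1
ostype: win11
scsi0: local-lvm:vm-101-disk-0,cache=writeback,discard=on,iothread=1,ssd=1
scsihw: virtio-scsi-single
sockets: 1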

murzadzen_info.png

I did some tests with CrystalDiskMark; "on paper" it seems fine, but that's probably mostly because of the Write Back cache option on the scsiX disks, correct?
Network transfer between the two VMs (tested with iperf3), and between a Proxmox VM and a Hyper-V VM, looks fine.
cdiskmark_iperf3.png

I tried multiple CPU/SCSI options etc.; I admit I didn't try the asynchronous "threads" IO mode instead of io_uring - would that help? (example below)
I tried two different virtio driver versions (the older ones only briefly) and experimented with the caching modes on the scsiX drives, and IO performance is still not good.
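If changing the async IO mode is worth testing, I guess it would look something like this (a sketch only; VM ID 101 and the storage/volume name are placeholders for my real ones, and the VM needs a full stop/start afterwards):
Code:
# switch scsi0 from the default io_uring to threads (aio=native being the other alternative)
qm set 101 --scsi0 local-lvm:vm-101-disk-0,cache=writeback,discard=on,iothread=1,ssd=1,aio=threads
# verify
qm config 101 | grep scsi0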

For comparison: I have an old workstation that this Proxmox machine was meant to replace (together with the other VMs), but I tested IO with SQL and the old hardware is dramatically better.
Specs here:
oldOptima.png


And here is how the SQL test queries perform on the old machine:
oldWorkstation.png

Here is the result from the new Proxmox VM. When I saw this, my knees gave way:
newProxmoxVM.png

Apart from that, I have huge RDP lag problems. What I tried was:
- blocking the Remote Desktop UDP protocol via the internal firewall,
- turning off the option in gpedit.msc related to the RDS Connection Client ("Turn off UDP on client"),
- and on the RDS Host, under Connections, setting "Select RDP transport protocols" to use only TCP.
And still, after all of that, RDP is very unresponsive (the registry equivalent of that last policy is sketched below).
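For reference, that server-side policy should be equivalent to this registry value, if I understand it correctly (sketch only, run in an elevated PowerShell inside the VM):
Code:
# force RDP transport to TCP only (same effect as the "Select RDP transport protocols" policy)
$Key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services'
New-Item -Path $Key -Force | Out-Null
Set-ItemProperty -Path $Key -Name 'SelectTransport' -Value 1 -Type DWord   # 1 = use only TCP
gpupdate /force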

Right now I'm on the verge of crying, because we bought this and I'm nowhere near seeing it as the "better" option. It seems way slower. Please help; I will try everything to make it better, since I have already migrated this as a working environment.

Thank You, with best regards,
Szymon.

Have a nice day/evening.
 
Hey, hi,

Which filesystem did you choose? Did you choose ZFS or Ceph? If so, you cannot use your server's hardware RAID; you need to switch the controller to HBA mode.

Regards,
Andreas
 
Hi AndreasS,

I'm not using Ceph, since it asks to be installed separately. The server was shipped preconfigured: we asked for hardware RAID 10 and a preinstalled Proxmox. I found these two commands, "df -PHT /" and "lsblk", and I'm posting a picture of the output here. It says type ext4 for /dev/mapper/pve-root - is that a huge problem??
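For reference, the commands I ran on the host (full output is in the screenshot; the extra lsblk columns are just my choice for readability):
Code:
df -PHT /
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT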

host_info.png


EDIT 07:22:
Also adding this additional information.
host_addinfo.png
 
Hi,

It seems that "#EnergyPerformanceBias=BalancedPerformance" is not set:

idrac_energyperf.png

Thermal profile is set to "Default Thermal Profile Settings"
thermalprofile_set.png

About the mitigations, just to clarify, do I have to edit that entry in:
- /etc/default/grub, setting GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off",
- or should it go in /etc/kernel/cmdline?
BTW: only those settings, or should it look different? Right now it looks like this (my understanding of the GRUB route is sketched below the screenshot).
cpu_mitigation.png
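To write down what I think the GRUB route looks like (please correct me if I'm wrong; as far as I understand, /etc/kernel/cmdline is only used on systemd-boot / proxmox-boot-tool setups such as ZFS root, while an ext4/LVM install like mine boots via GRUB):
Code:
# edit /etc/default/grub and set:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off"
nano /etc/default/grub
update-grub
reboot
# verify after the reboot
cat /proc/cmdline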

Also, my question about mitigations: could the CPU flags I set (picture below) also be contributing to these problems?
qmconfigOptima.png

I also found a thread on the forum about poor Windows performance when the CPU type is host:
https://forum.proxmox.com/threads/the-reasons-for-poor-performance-of-windows-when-the-cpu-type-is-host.163114
 
I'm bumping this problem. As you mentioned, _gabriel, over the weekend I'll try changing to Maximum Performance in the BIOS settings and setting mitigations=off in GRUB; I already tried x86-64-v2-AES / x86-64-v2 / x86-64-v3 and it didn't help at all.

Is there any other option that you or someone else can see that could be the potential problem?
Could it be because it's ext4 on hardware RAID 10 instead of ZFS??
Because right now, those IOPS values are tragic:
powershell_iops.png


Here is the code to test for yourself:

Code:
# test config
$TestFile   = "$env:TEMP\iops_test.tmp"
$BlockSize  = 4KB
$Iterations = 1000
$Buffer     = New-Object byte[] ($BlockSize)

# create the test file (array replication builds a file of Iterations/10 blocks)
Set-Content -Path $TestFile -Value ($Buffer * ($Iterations / 10)) -Encoding Byte

# IOPS measurement (random write)
$WriteTime = Measure-Command {
    for ($i = 0; $i -lt $Iterations; $i++) {
        $fs = [System.IO.File]::OpenWrite($TestFile)
        $fs.Seek((Get-Random -Minimum 0 -Maximum ($fs.Length - $Buffer.Length)), 'Begin') | Out-Null
        $fs.Write($Buffer, 0, $Buffer.Length)
        $fs.Close()
    }
}

# IOPS measurement (random read)
$ReadTime = Measure-Command {
    for ($i = 0; $i -lt $Iterations; $i++) {
        $fs = [System.IO.File]::OpenRead($TestFile)
        $fs.Seek((Get-Random -Minimum 0 -Maximum ($fs.Length - $Buffer.Length)), 'Begin') | Out-Null
        $fs.Read($Buffer, 0, $Buffer.Length) | Out-Null
        $fs.Close()
    }
}

# calculate IOPS
$WriteIOPS = [math]::Round($Iterations / $WriteTime.TotalSeconds, 2)
$ReadIOPS  = [math]::Round($Iterations / $ReadTime.TotalSeconds, 2)

# results
Write-Host "IOPS (Write): $WriteIOPS"
Write-Host "IOPS (Read): $ReadIOPS"

# clean up the test file created above
Remove-Item $TestFile -Force

Please help..
 
Hi,

A dumb question, but have you tested the IOPS on the host before testing them inside the VMs?

I tested my OVH server (a ZFS mirror with Samsung MZQLB1T9HAJR-00007 drives) with this command:
Code:
fio --name=iops-test \
    --filename=/root/iops_test.tmp \
    --size=1G \
    --bs=4k \
    --rw=randrw \
    --rwmixread=70 \
    --ioengine=libaio \
    --numjobs=1 \
    --runtime=30s \
    --time_based \
    --group_reporting \
    --direct=1 \
    --output=/root/fio_report.json \
    --output-format=json
The result is:
READ : "iops_min" : 33519, "iops_max" : 81346, "iops_mean" : 51025.068966
WRITE : "iops_min" : 14326, "iops_max" : 35336, "iops_mean" : 21888.189655
That seems a bit "shy" compared to the data sheet I found for the drives in this server, because READ should be "Up to 540K IOPS" and WRITE should be "Up to 50K IOPS"; however, the data sheet doesn't specify whether the test was done under a mixed workload or not [Data I/O Speed (4KB data size, Sustained)].

Here is the test with 100% write:
Code:
fio --name=iops-write \
    --filename=/root/iops_test.tmp \
    --size=1G \
    --bs=4k \
    --rw=randwrite \
    --ioengine=libaio \
    --numjobs=1 \
    --runtime=30s \
    --time_based \
    --group_reporting \
    --direct=1 \
    --output=/root/fio_write.json \
    --output-format=json
WRITE : "iops_min" : 35976, "iops_max" : 74288, "iops_mean" : 50287.355932

Here is the test with 100% read:
Code:
fio --name=iops-read \
    --filename=/root/iops_test.tmp \
    --size=1G \
    --bs=4k \
    --rw=randread \
    --ioengine=libaio \
    --numjobs=1 \
    --runtime=30s \
    --time_based \
    --group_reporting \
    --direct=1 \
    --output=/root/fio_read.json \
    --output-format=json
READ :"iops_min" : 56312, "iops_max" : 138340, "iops_mean" : 72466.847458
So now the write matches the data sheet; for the read, I think the gap is due to ZFS (but for my workload it's fine).

And remember, this is at the host level (without any GRUB or other modifications to try to improve speed).

Now, if I try inside my Windows Server 2025 VM:
Code:
fio --name=iops-write `
    --filename=C:\temp\iops_test.tmp `
    --size=1G `
    --bs=4k `
    --rw=randwrite `
    --ioengine=windowsaio `
    --numjobs=1 `
    --runtime=30s `
    --time_based `
    --group_reporting `
    --direct=1 `
    --thread `
    --output=C:\temp\fio_write.json `
    --output-format=json
WRITE : "iops_min" : 5, "iops_max" : 12818, "iops_mean" : 10618.237288
Code:
fio --name=iops-read `
    --filename=C:\temp\iops_test.tmp `
    --size=1G `
    --bs=4k `
    --rw=randread `
    --ioengine=windowsaio `
    --numjobs=1 `
    --runtime=30s `
    --time_based `
    --group_reporting `
    --direct=1 `
    --thread `
    --output=C:\temp\fio_read.json `
    --output-format=json
READ : "iops_min" : 6508, "iops_max" : 13851, "iops_mean" : 12558.633333

Config of the VM, running on proxmox-ve 8.4.0 (kernel: 6.8.12-15-pve) / pve-manager 8.4.14 (running version: 8.4.14/b502d23c55afcba1):
Code:
agent: 1
balloon: 0
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 4
cpu: x86-64-v2-AES
efidisk0: local-zfs:vm-102-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
machine: pc-q35-9.2+pve1
memory: 8192
meta: creation-qemu=9.2.0,ctime=1759941427
name: WS-2025
numa: 0
ostype: win11
scsi0: local-zfs:vm-102-disk-1,cache=writeback,discard=on,iothread=1,size=100G,ssd=1
scsihw: virtio-scsi-single
sockets: 1
tpmstate0: local-zfs:vm-102-disk-2,size=4M,version=v2.0

I hope this helps you troubleshoot your issue; you should use fio or similar tools to test disk IOPS rather than PowerShell.

Best regards,
 
I recall that with older CPUs, performance was only about 1/3 of what could be achieved with physical hardware
*regardless of the connection method

This is not the case with the latest CPUs

It might be slowing down due to vulnerability fixes or similar measures

When using the 2699v4+p1600x, the physical PC achieved 240MB/s in random 4k q1t1, but the virtual machine could only manage about 80MB/s.
 
That could be because it's ext4 and hardware RAID 10 instead of placing this into ZFS ??
I don't think so; ZFS writes are slower because of integrity checks and checksums.
It can be faster in read operations thanks to ARC caching.
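(If you ever do move to ZFS, a rough way to see whether ARC is actually absorbing your reads - both tools ship with the ZFS utilities on PVE:)
Code:
# ARC size plus hit/miss counters, one sample per second, 10 samples
arcstat 1 10
# one-shot summary of ARC statistics
arc_summary | head -n 40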

those IOPS values are tragic..
I'm not sure PowerShell is reliable for benchmarking IOPS.
Either way, now run the same test on bare-metal Windows.

EDIT: is VBS shown as "Running" in msinfo32.exe? Because that means nested virtualization, which taxes older CPUs even more.
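A rough way to check from PowerShell (as far as I recall, a status of 2 means VBS is actually running):
Code:
# VirtualizationBasedSecurityStatus: 0 = not enabled, 1 = enabled but not running, 2 = running
Get-CimInstance -Namespace root\Microsoft\Windows\DeviceGuard -ClassName Win32_DeviceGuard |
    Select-Object VirtualizationBasedSecurityStatus, SecurityServicesRunning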
 