Windows Server 2022 laggy (Terminal Server and SQL Server)

TheSover · Member · Jun 12, 2023
We are migrating our cloud infrastructure from ESXi to Proxmox, and we have found that the Windows Server 2022 machines are very slow. They are used as Terminal Servers and run software that talks to SQL Server. I have already enabled the "Discard" and "SSD emulation" checkboxes on the disk, which is attached via SCSI (the physical machine's disks are SSDs), and the CPU is set to kvm64. I don't know what else we can change to improve performance. Could you help me?

agent: 1
boot: order=scsi0;net0;ide2;ide0
cores: 10
ide0: none,media=cdrom
ide2: none,media=cdrom
machine: pc-q35-7.1
memory: 8196
meta: creation-qemu=7.1.0,ctime=1678202321
name: XXXXXXX
net0: virtio=36:EE:81:30:A3:85,bridge=vmbr0,firewall=1,tag=2100
net1: e1000=06:A3:43:11:9F:2D,bridge=vmbr0,firewall=1,tag=1743
numa: 0
onboot: 1
ostype: win11
scsi0: Raid:vm-162-disk-0,discard=on,iothread=1,size=50G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=990f6244-9b03-4fd3-8bda-11e788db3692
sockets: 1
tpmstate0: Raid:vm-162-disk-1,size=4M,version=v2.0
vmgenid: a7ff26f8-f406-4217-8129-505e73b93db1
 
I have already enabled the "Discard" and "SSD emulation" checkboxes on the disk, which is attached via SCSI (the physical machine's disks are SSDs), and the CPU is set to kvm64. I don't know what else we can change to improve performance. Could you help me?
There are many different SSD interfaces, and Windows (Terminal Services in particular) is very sensitive to latency. You want to make sure you follow best practices for your storage: https://kb.blockbridge.com/technote/proxmox-tuning-low-latency-storage/
I see that your storage object is named "Raid" - I suspect that's a hardware RAID, and if so, many of those are not great performers.
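
As a concrete starting point (a sketch only, using VM ID 162 from your config; verify the options against the technote and your storage), keeping discard and the iothread but switching the disk to native AIO would look like this:
Code:
qm set 162 --scsi0 Raid:vm-162-disk-0,discard=on,iothread=1,ssd=1,aio=native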

Additionally, "kvm64" is almost guaranteed to be a subpar choice when you have high performance expectations. You can read more about it in "man qm", for example:
Code:
CPU Type
    QEMU can emulate a number of different CPU types from 486 to the latest Xeon processors. Each new processor generation adds new features, like hardware-assisted 3D rendering, random number generation, memory protection, etc. Usually you should select for your VM a processor type which closely matches the CPU of the host system, as it means that the host CPU features (also called CPU flags) will be available in your VMs. If you want an exact match, you can set the CPU type to host, in which case the VM will have exactly the same CPU flags as your host system.

    This has a downside though. If you want to do a live migration of VMs between different hosts, your VM might end up on a new system with a different CPU type. If the CPU flags passed to the guest are missing, the QEMU process will stop. To remedy this, QEMU also has its own CPU type kvm64, which Proxmox VE uses by default. kvm64 is a Pentium 4 look-alike CPU type with a reduced CPU flag set, but it is guaranteed to work everywhere.

    In short, if you care about live migration and moving VMs between nodes, leave the kvm64 default. If you don't care about live migration or have a homogeneous cluster where all nodes have the same CPU, set the CPU type to host, as in theory this will give your guests maximum performance.
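
If live migration between nodes with different CPUs is not a concern, switching the example VM above (ID 162) to the host CPU type is a one-liner; the VM needs a full stop and start afterwards for the new CPU type to take effect:
Code:
qm set 162 --cpu host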

Next, adding more cores does not always lead to better performance. You didn't specify what your CPU is, so it may be that you are OK with 10 cores. Here is a good example of a CPU optimization analysis: https://forum.proxmox.com/threads/finding-out-the-culprit-behind-high-cpu-usage.128578/#post-562977
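
If you want to check, the host CPU model and topology can be read from the Proxmox shell, for example:
Code:
lscpu | grep -E 'Model name|Socket|Core|Thread|NUMA'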

In short, improving performance is a complex task that requires a good understanding of the compute hardware you have, the storage behind it, and the network.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
We've been trying everything ... GPU, CPU types ... RAM without ballooning, cache on the disks, everything... All of the VMs are laggy and slow, mostly when opening windows in the applications or making SQL requests ...

 
Are they laggy when you use them in the PVE console, or when you connect to them via RDP?
Is the SQL Server on the same VM or on a dedicated one?
 
I've tried Ceph in the past and it killed the Windows VMs in terms of performance. So I switched them to ZFS and they deliver practically the same performance as ESXi with vSAN. It loads slowly at first, but once it starts up, reboots of the Windows VMs are almost instant and the performance is fantastic, thanks to ZFS's cache in RAM.

So it really boils down to your storage sub-system.
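
For reference, the ARC (ZFS's RAM cache) size can be checked and, on RAM-constrained hosts, capped; a minimal sketch (the 32 GiB value is only an example, tune it to your host):
Code:
# current ARC limit in bytes
cat /sys/module/zfs/parameters/zfs_arc_max
# cap the ARC at 32 GiB (takes effect after update-initramfs -u and a reboot)
echo "options zfs zfs_arc_max=34359738368" >> /etc/modprobe.d/zfs.conf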
 
I've tried Ceph in the past and it killed the Windows VMs in terms of performance. So I switched them to ZFS and they deliver practically the same performance as ESXi with vSAN. It loads slowly at first, but once it starts up, reboots of the Windows VMs are almost instant and the performance is fantastic, thanks to ZFS's cache in RAM.

So it really boils down to your storage sub-system.
Cannot confirm this. Ceph performance is really good when configured correctly. We host hundreds of Windows VMs on several 3-node clusters. Performance is outstanding. (Confirmed by hosting customers.)
 
Cannot confirm this. Ceph performance is really good when configured correctly. We host hundreds of Windows VMs on several 3-node clusters. Performance is outstanding. (Confirmed by hosting customers.)

When I was running Proxmox version 6 four years ago with Ceph, it ran fine but the performance wasn't great. Now, with the improvements since then, I'm sure it's a lot better. I will revisit this later on, as I have a second cluster running 7.4 to test it on. I will upgrade to 8 sometime this year.
 
I've tried Ceph in the past and it killed the Windows VMs in terms of performance. So I switched them to ZFS and they deliver practically the same performance as ESXi with vSAN. It loads slowly at first, but once it starts up, reboots of the Windows VMs are almost instant and the performance is fantastic, thanks to ZFS's cache in RAM.

So it really boils down to your storage sub-system.
I'm using ZFS at the moment
 
What disks are you using? Is any type of RAID controller involved?
 
My provider only give me this info:
CPU:
Intel Xeon E5 2670 or equivalent
RAM:
256 GB
Disks:
2 x 1 TB SSD
 
My provider only give me this info:
CPU:
Intel Xeon E5 2670 or equivalent
RAM:
256 GB
Disks:
2 x 1 TB SSD
Well, if the SSDs have no power-loss protection (PLP), forget it. With ZFS, PLP is a must, otherwise it will never perform well. Since Windows writes to the disks for every nanometer you move the mouse, it will be laggy.

So you need to get information about the SSDs; you can get it from the command line when you have access.
Install, for example, hwinfo ("apt install hwinfo") and run it to get more information about the hardware and the SSDs.
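
For example (hwinfo is in the Debian/Proxmox repositories; the lsblk line is an alternative that needs nothing extra installed):
Code:
apt install hwinfo
hwinfo --disk | grep -E 'Model|Device File|Driver'
# alternative without installing anything:
lsblk -d -o NAME,MODEL,SIZE,TRAN,ROTA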
 
This is the result of hwinfo:
115: IDE 100.0: 10600 Disk
[Created at block.245]
Unique ID: WZeP.XM4v2dt78K5
Parent ID: w7Y8.6uBAc1DzZOE
SysFS ID: /class/block/sdb
SysFS BusID: 1:0:0:0
SysFS Device Link: /devices/pci0000:00/0000:00:1f.2/ata2/host1/target1:0:0/1:0:0:0
Hardware Class: disk
Model: "WDC PC SA530 SDA"
Vendor: "WDC"
Device: "PC SA530 SDA"
Revision: "3000"
Serial ID: "2025BU445314"
Driver: "ahci", "sd"
Driver Modules: "ahci"
Device File: /dev/sdb
Device Files: /dev/sdb, /dev/disk/by-path/pci-0000:00:1f.2-ata-2.0, /dev/disk/by-path/pci-0000:00:1f.2-ata-2, /dev/disk/by-id/ata-WDC_PC_SA530_SDASB8Y1T00_2025BU445314, /dev/disk/by-id/wwn-0x5001b444a77323e6
Device Number: block 8:16-8:31
Geometry (Logical): CHS 124519/255/63
Size: 2000409264 sectors a 512 bytes
Capacity: 953 GB (1024209543168 bytes)
Config Status: cfg=new, avail=yes, need=no, active=unknown
Attached to: #4 (SATA controller)


133: IDE 00.0: 10600 Disk
[Created at block.245]
Unique ID: 3OOL.7+MzihWEmdC
Parent ID: w7Y8.6uBAc1DzZOE
SysFS ID: /class/block/sda
SysFS BusID: 0:0:0:0
SysFS Device Link: /devices/pci0000:00/0000:00:1f.2/ata1/host0/target0:0:0/0:0:0:0
Hardware Class: disk
Model: "WDC PC SA530 SDA"
Vendor: "WDC"
Device: "PC SA530 SDA"
Revision: "3000"
Serial ID: "2025BU441505"
Driver: "ahci", "sd"
Driver Modules: "ahci"
Device File: /dev/sda
Device Files: /dev/sda, /dev/disk/by-id/ata-WDC_PC_SA530_SDASB8Y1T00_2025BU441505, /dev/disk/by-path/pci-0000:00:1f.2-ata-1.0, /dev/disk/by-id/wwn-0x5001b444a7732337, /dev/disk/by-path/pci-0000:00:1f.2-ata-1
Device Number: block 8:0-8:15
Geometry (Logical): CHS 124519/255/63
Size: 2000409264 sectors a 512 bytes
Capacity: 953 GB (1024209543168 bytes)
Config Status: cfg=new, avail=yes, need=no, active=unknown
Attached to: #4 (SATA controller)
 
I can't see any information about PLP on the manufacturer's website, so expect terrible sync write performance (not much better than an HDD).
 
This is the result of hwinfo:
(hwinfo output quoted above)
Don't use ZFS with these SSDs; it will never perform well. These are $50 consumer disks; they are in no way usable for any server, except maybe long-term archival.

I am sorry, but I think this is not going to work. Maybe you can create an LVM on one disk and use the second as a backup to get at least some sort of performance.
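
A rough sketch of that idea, assuming /dev/sdb has first been removed from the ZFS pool and wiped (these commands are destructive, and the storage name "ssd-lvm" is only an example):
Code:
pvcreate /dev/sdb
vgcreate vmdata /dev/sdb
lvcreate -l 95%FREE --type thin-pool --thinpool vmstore vmdata
pvesm add lvmthin ssd-lvm --vgname vmdata --thinpool vmstore --content images,rootdir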
 
Run in the shell: hdparm -Tt /dev/sda, then /dev/sdb - I'm just curious what hdparm reports.
That's the result:
/dev/sda:
Timing cached reads: 11356 MB in 1.99 seconds = 5694.45 MB/sec
Timing buffered disk reads: 806 MB in 3.00 seconds = 268.24 MB/sec
/dev/sdb:
Timing cached reads: 17178 MB in 1.99 seconds = 8625.80 MB/sec
Timing buffered disk reads: 740 MB in 3.01 seconds = 246.09 MB/sec
 
The problem is usually writes, not reads. Even the crappiest consumer QLC SSD has some decent read performance.

More useful would be to use fio to run some sync write benchmarks with 4k and 1M block sizes against a fresh zvol on that pool.
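
Something along these lines (a sketch only; "rpool" and the zvol name are assumptions, adjust them to your pool, and remove the test zvol afterwards):
Code:
zfs create -V 10G rpool/fio-test
# 4k sync writes
fio --name=syncwrite-4k --filename=/dev/zvol/rpool/fio-test --rw=write --bs=4k --sync=1 --direct=1 --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting
# 1M sync writes
fio --name=syncwrite-1M --filename=/dev/zvol/rpool/fio-test --rw=write --bs=1M --sync=1 --direct=1 --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting
zfs destroy rpool/fio-test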
 
