Windows & Linux VDI VMs feel so laggy

Hydra

New Member
Sep 28, 2024
Hi everyone, I'm struggling with a very laggy RDP experience on my Proxmox cloud server.

  • Host: 12 CPUs, 128 GB RAM, 1 Gbps bandwidth.
  • Network: VMs on vmbr1 (10.10.10.0/24), using iptables on the PVE host for RDP port forwarding.
  • The Problem: RDP to both Windows and Linux VMs is slow and unresponsive (choppy mouse, delayed typing).
  • What I've tried: Virtio drivers and QEMU Guest Agent are installed and enabled on all VMs.
The connection works, but the performance is terrible. Has anyone solved a similar issue? I'm wondering if it's my iptables forwarding, the VM config (using default vga: std), or something else entirely.
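
For context, the forwarding on the PVE host is set up roughly like this (interface names, ports and IPs here are only illustrative, not my exact rules):

# forward an external port to RDP (3389) on a VM behind vmbr1
iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 33891 -j DNAT --to-destination 10.10.10.11:3389
# NAT the VM subnet out through the host's public interface
iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o vmbr0 -j MASQUERADE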

Any tips would be great!
 
Please check which bus type the hard disk is using under the VM's Hardware tab.
If it is set to IDE, disk I/O throughput will stay low even with the VirtIO drivers and the QEMU guest tools installed.

If it is VirtIO or SATA, the issue could be related to the node or the storage.
If another node is available, migrating the VM to a different node and testing there could also be a good approach.
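
If you prefer the shell over the GUI, this shows the disk bus and the SCSI controller for a VM (the VMID is a placeholder):

qm config <vmid> | grep -E '^(ide|sata|scsi|virtio)[0-9]|^scsihw'

Switching a disk from IDE to VirtIO SCSI is basically detach and re-attach (make sure the VirtIO storage driver is already installed inside Windows first, otherwise it will not boot). Roughly, with the volume name taken from the unused0 line that appears after the detach:

qm set <vmid> --scsihw virtio-scsi-single
qm set <vmid> --delete ide1
qm set <vmid> --scsi0 <storage>:vm-<vmid>-disk-N       # whatever the unused0 line shows
qm set <vmid> --boot 'order=scsi0;ide2;net0'           # adjust to your actual boot entries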
 
VM 1 :
______________________


agent: 1
bios: ovmf
boot: order=ide1;ide2;ide0;net0
cores: 2
cpu: host
efidisk0: local-zfs:vm-8000-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
ide0: local:iso/virtio-win-0.1.285.iso,media=cdrom,size=771138K
ide1: local-zfs:vm-8000-disk-3,size=70G
ide2: none,media=cdrom
machine: pc-q35-10.0+pve1
memory: 2048
meta: creation-qemu=10.0.2,ctime=1760954675
name: HS-SVR-001
net0: virtio=xx:xx:xx:xx:xx:xx,bridge=vmbr1,firewall=1
numa: 0
ostype: win11
scsihw: virtio-scsi-single
smbios1: uuid=a9117bf1-db30-4534-a71d-9353da7cf1a0
sockets: 1
tpmstate0: local-zfs:vm-8000-disk-2,size=4M,version=v2.0
vmgenid: 2fd36ffd-40b7-47f0-9802-77314bcf8b6b
#qmdump#map:efidisk0:drive-efidisk0:local-zfs:raw:
#qmdump#map:ide1:drive-ide1:local-zfs:raw:
#qmdump#map:tpmstate0:drive-tpmstate0-backup:local-zfs:raw:

VM 2 :
___________________

bios: ovmf
boot: order=scsi0;ide2;net0
cores: 4
cpu: x86-64-v2-AES
efidisk0: local-zfs:vm-300-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
ide2: none,media=cdrom
machine: pc-q35-10.0+pve1
memory: 6144
meta: creation-qemu=10.0.2,ctime=1759418168
name: HS-WIN-11-001
net0: virtio=xx:xx:xx:xx:xx:xx,bridge=vmbr1,firewall=1,rate=20
numa: 0
onboot: 1
ostype: win11
scsi0: local-zfs:vm-300-disk-1,iothread=1,size=64G
scsihw: virtio-scsi-single
smbios1: uuid=dccc25f9-5054-4c3a-bcc2-ab9d58155ca1
sockets: 1
tpmstate0: local-zfs:vm-300-disk-2,size=4M,version=v2.0
vmgenid: 519b8e04-de56-48d6-9aeb-cf2b5275e5e8
#qmdump#map:efidisk0:drive-efidisk0:local-zfs:raw:
#qmdump#map:scsi0:drive-scsi0:local-zfs:raw:
#qmdump#map:tpmstate0:drive-tpmstate0-backup:local-zfs:raw:

And 5 more VMs with the same config as VM 2.
 
That only tells us how your VMs are configured and nothing about your SSDs/NVMes. What's obvious: all VMs seem to run on local-zfs, which is (in normal setups) used by PVE itself. In VM 1 the virtual disk runs as IDE, in the second one VirtIO SCSI is used.

Usually a separate storage should be used for VMs. So if, for example, your local-zfs consists of 2 consumer-grade SSDs (or even worse, HDDs), you'll run into high I/O wait. Even 2 enterprise-grade SSDs will not perform well with PVE using this storage AND, in addition, 7 VMs.
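
To see whether the pool itself is the bottleneck, it is worth watching it while the VMs are under load, for example (the pool name may differ on your install):

zpool status -v rpool
zpool iostat -v rpool 5
iostat -x 5    # from the sysstat package; consistently high await/%util points at the disks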
 
Another issue could be your VM hardware. I would assign more cores and more RAM; Windows 11 will be slow if you run it with hardware assigned like your VM 1 (2 cores = 1 physical CPU core, and only 2 GB RAM for Windows?).

7 Windows VMs on 2x 1 TB consumer SSDs?
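
On the resources point: bumping cores and RAM from the CLI is quick, e.g. (VMID and values are only an example; the change takes effect after a VM restart unless hotplug is enabled):

qm set <vmid> --cores 4 --memory 8192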
 

I have 2x 1 TB NVMe drives, "SAMSUNG MZVLB1T0HALR-00000".
For VM 1: it's a Windows Server, 2 vCPUs and 2 GB RAM is totally fine.

The PVE server is a dedicated cloud server with no GPU on board.