Slow Windows performance over RDP with warehouse software - a RAM problem?

Feb 26, 2021
Hello Community,

We have two Dell servers we want to virtualize with Proxmox. One runs our SQL DB and the other acts as an entry point for our RDP clients.

Since the SQL DB is critical, I wanted to start testing with the RDP entry point.
I am seeing very poor performance with the RDP clients. The startup of our warehouse software takes 30-40 seconds.
On my laptop and our old bare-metal Dell T30 server it takes about 10-15 seconds. If only the startup were slower this would be no problem, but every action takes about 2-3 times longer.

Is there any way to improve the performance?
If you need more benchmarks or specs, feel free to ask. Any advice is welcome :)



I did some benchmarking with PassMark and got pretty bad RAM results:
benchmark.jpg
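For comparison, raw memory throughput on the Proxmox host itself could be cross-checked with something like sysbench (the block size and total size below are arbitrary example values):
Code:
# install and run a simple memory-throughput test on the PVE host
apt install sysbench
sysbench memory --memory-block-size=1M --memory-total-size=10G run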

CPU utilization during startup of the software:
CPU.jpg
Our VM Setup:
Screenshot 2021-03-10 082914.jpg
Screenshot 2021-03-10 082903.jpg

Our Hardware:

Dell R720
CPU(s) 32 x Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz (2 Sockets)
96 GB of RAM
RAID controller flashed to IT mode
VM disks are on a ZFS RAID10 of 4× Intel DC S3510 480 GB SSDs
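For completeness, a rough way to sanity-check the pool the VM disks live on (no pool name given here, so zpool get simply lists all pools):
Code:
# pool health and layout
zpool status
zpool list
# check the ashift the pool was created with
zpool get ashift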
 
Two easy config changes that might help (both are sketched below):
  • Don't use CPU mode "qemu64". It is optimized for compatibility only. If you don't care about live migration, use CPU mode "host"; otherwise select the best-fitting, still compatible model (an Intel model if you use a Xeon).
  • You can try the CLI-only hugepages option, which can especially help with memory-heavy use cases. To do so, run qm set <vmid> -hugepages 2 or qm set <vmid> -hugepages 1024 (the latter might not be supported, or may lead to trouble starting up on a somewhat loaded host).
If this doesn't help, try monitoring the host during VM runs (web GUI stats, (h)top, free, etc.); that should give a clearer indication of performance bottlenecks than monitoring inside the VM.
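Roughly, this is what those two changes could look like on the CLI; <vmid> is your VM ID (e.g. 105) and SandyBridge is just the model matching an E5-2690:
Code:
# CPU type "host" if live migration is not needed, otherwise a matching model
qm set <vmid> -cpu host
# or: qm set <vmid> -cpu SandyBridge

# enable 2 MB hugepages for this VM (1024 = 1 GiB pages, may fail on a loaded host)
qm set <vmid> -hugepages 2

# the settings take effect on the next full stop/start of the VM
qm stop <vmid> && qm start <vmid>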
 


Hello Stefan,

Thank you for the tips, but neither is working for me.

  • CPU mode: If I use "host" or "SandyBridge" I get a bluescreen (KMODE_EXCEPTION_NOT_HANDLED).
  • Hugepage:
    • "TASK ERROR: start failed: hugepage allocation failed at /usr/share/perl5/PVE/QemuServer/Memory.pm line 542."

In the web GUI or htop I can't identify any bottleneck.

This is the free output:
Code:
root@pve:~# free
              total        used        free      shared  buff/cache   available
Mem:       98941508    32201240    60695444       84592     6044824    65755772
Swap:      16777212      653636    16123576
 
CPU mode: If I use "host" or "SandyBridge" I get a bluescreen (KMODE_EXCEPTION_NOT_HANDLED).
Well that should not happen... Especially with "host". Anything in the logs maybe (journalctl -e), at the time of the bluescreen? Does this happen with newly installed Windows instances too (or maybe even test a Linux VM)?
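If you want to rule out the existing guest quickly, a throwaway Linux VM with CPU type "host" would be enough for that test; roughly like this (VM ID 999, the storage name "local-zfs" and the ISO file name are only placeholders):
Code:
# minimal test VM with CPU type "host"
qm create 999 --name cputest --memory 4096 --cores 4 --cpu host \
  --net0 virtio,bridge=vmbr0 --scsi0 local-zfs:16 --ostype l26 \
  --ide2 local:iso/debian-10-netinst.iso,media=cdrom
qm start 999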

Hugepage:
  • "TASK ERROR: start failed: hugepage allocation failed at /usr/share/perl5/PVE/QemuServer/Memory.pm line 542."
Did you try with hugepages size 2 or 1024? But yeah, that's a bit more expected. Memory fragmentation can ruin this for you. On the other hand, seeing as you appear to have quite a bit of memory available in your 'free' output, it is a bit weird. Have you tried rebooting the host, if possible, before starting the VM with hugepages 2 enabled?
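Before and after the reboot, something along these lines should show whether the kernel can actually hand out the pages (the 8192 is only an example value, i.e. 16 GiB worth of 2 MB pages):
Code:
# current hugepage situation on the host
grep Huge /proc/meminfo
# how fragmented free memory is (few entries in the high orders = fragmented)
cat /proc/buddyinfo
# optionally pre-reserve 2 MB hugepages before starting the VM
echo 8192 > /proc/sys/vm/nr_hugepages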
 
Well that should not happen... Especially with "host". Anything in the logs maybe (journalctl -e), at the time of the bluescreen? Does this happen with newly installed Windows instances too (or maybe even test a Linux VM)?
If I install Windows Server 2019 there are no problems. But with Windows Server 2016 Essentials, which is the operating system of that machine, I get the following message while installing:

1615461014179.png
journalctl -e output:

Code:
Mar 11 12:05:25 pve pvedaemon[34498]: <root@pam> end task UPID:pve:00006752:005C1EB6:6049F974:vncproxy:105:root@pam: Failed to run vncproxy.
Mar 11 12:05:28 pve pvedaemon[26664]: start VM 105: UPID:pve:00006828:005C205A:6049F978:qmstart:105:root@pam:
Mar 11 12:05:28 pve pvedaemon[7125]: <root@pam> starting task UPID:pve:00006828:005C205A:6049F978:qmstart:105:root@pam:
Mar 11 12:05:28 pve systemd[1]: Started 105.scope.
Mar 11 12:05:28 pve systemd-udevd[26694]: Using default interface naming scheme 'v240'.
Mar 11 12:05:28 pve systemd-udevd[26694]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Mar 11 12:05:28 pve systemd-udevd[26694]: Could not generate persistent MAC address for tap105i0: No such file or directory
Mar 11 12:05:29 pve kernel: device tap105i0 entered promiscuous mode
Mar 11 12:05:29 pve systemd-udevd[26694]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Mar 11 12:05:29 pve systemd-udevd[26694]: Could not generate persistent MAC address for fwbr105i0: No such file or directory
Mar 11 12:05:30 pve systemd-udevd[26694]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Mar 11 12:05:30 pve systemd-udevd[26694]: Could not generate persistent MAC address for fwpr105p0: No such file or directory
Mar 11 12:05:30 pve systemd-udevd[26697]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Mar 11 12:05:30 pve systemd-udevd[26697]: Using default interface naming scheme 'v240'.
Mar 11 12:05:30 pve systemd-udevd[26697]: Could not generate persistent MAC address for fwln105i0: No such file or directory
Mar 11 12:05:30 pve kernel: fwbr105i0: port 1(fwln105i0) entered blocking state
Mar 11 12:05:30 pve kernel: fwbr105i0: port 1(fwln105i0) entered disabled state
Mar 11 12:05:30 pve kernel: device fwln105i0 entered promiscuous mode
Mar 11 12:05:30 pve kernel: fwbr105i0: port 1(fwln105i0) entered blocking state
Mar 11 12:05:30 pve kernel: fwbr105i0: port 1(fwln105i0) entered forwarding state
Mar 11 12:05:30 pve kernel: vmbr0: port 4(fwpr105p0) entered blocking state
Mar 11 12:05:30 pve kernel: vmbr0: port 4(fwpr105p0) entered disabled state
Mar 11 12:05:30 pve kernel: device fwpr105p0 entered promiscuous mode
Mar 11 12:05:30 pve kernel: vmbr0: port 4(fwpr105p0) entered blocking state
Mar 11 12:05:30 pve kernel: vmbr0: port 4(fwpr105p0) entered forwarding state
Mar 11 12:05:30 pve kernel: fwbr105i0: port 2(tap105i0) entered blocking state
Mar 11 12:05:30 pve kernel: fwbr105i0: port 2(tap105i0) entered disabled state
Mar 11 12:05:30 pve kernel: fwbr105i0: port 2(tap105i0) entered blocking state
Mar 11 12:05:30 pve kernel: fwbr105i0: port 2(tap105i0) entered forwarding state
Mar 11 12:05:30 pve pvedaemon[7125]: <root@pam> end task UPID:pve:00006828:005C205A:6049F978:qmstart:105:root@pam: OK
Mar 11 12:05:30 pve pvedaemon[16987]: <root@pam> starting task UPID:pve:00006897:005C2124:6049F97A:vncproxy:105:root@pam:
Mar 11 12:05:30 pve pvedaemon[26775]: starting vnc proxy UPID:pve:00006897:005C2124:6049F97A:vncproxy:105:root@pam:
Mar 11 12:05:30 pve pveproxy[7788]: worker 42437 finished
Mar 11 12:05:30 pve pveproxy[7788]: starting 1 worker(s)
Mar 11 12:05:30 pve pveproxy[7788]: worker 26779 started
Mar 11 12:05:31 pve pveproxy[26777]: got inotify poll request in wrong process - disabling inotify
Mar 11 12:05:35 pve pveproxy[26777]: worker exit
Mar 11 12:05:56 pve pvedaemon[16987]: <root@pam> end task UPID:pve:00006897:005C2124:6049F97A:vncproxy:105:root@pam: OK
Mar 11 12:05:56 pve pvedaemon[28954]: starting termproxy UPID:pve:0000711A:005C2B4B:6049F994:vncshell::root@pam:
Mar 11 12:05:56 pve pvedaemon[16987]: <root@pam> starting task UPID:pve:0000711A:005C2B4B:6049F994:vncshell::root@pam:
Mar 11 12:05:56 pve pvedaemon[7125]: <root@pam> successful auth for user 'root@pam'
Mar 11 12:05:56 pve login[29089]: pam_unix(login:session): session opened for user root by root(uid=0)
Mar 11 12:05:56 pve systemd-logind[6966]: New session 77 of user root.
Mar 11 12:05:56 pve systemd[1]: Started Session 77 of user root.
Mar 11 12:05:56 pve login[29098]: ROOT LOGIN  on '/dev/pts/1'

Did you try with hugepages size 2 or 1024? But yeah, that's a bit more expected. Memory fragmentation can ruin this for you. On the other hand, seeing as you appear to have quite a bit of memory available in your 'free' output, it is a bit weird. Have you tried rebooting the host, if possible, before starting the VM with hugepages 2 enabled?

I will test this after business is closed today.

Thanks for the help!
 
