Windows Server 2016 very slow in PVE

Vitinhos

New Member
Jun 16, 2023
Good morning/good afternoon/good evening everyone!

I have a Windows Server 2016 Standard VM here on my PVE, but it is very slow to boot and to load files and programs. I don't believe the processor or memory is the bottleneck, since I have plenty of both and monitoring shows at most 60% usage. Company users connect to this Windows Server locally via TS (Terminal Services/RDP).
The VM is located on: local-lvm (pve).
Below is a screenshot of the VM configuration. The PVE's storage is two 1 TB Samsung SSDs in RAID 1, so one SSD holds the data and the other mirrors it.
What could be causing this slowness, and how can I solve it?
Thanks.
 

Attachments

  • Screenshot_1.png (15 KB)
  • Screenshot_2.png (4.6 KB)
  • Screenshot_3.png (8.5 KB)
Try maxing out the video memory size to 32 MB; I believe that is the maximum. Anything more than that is wasteful anyway.
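This can be done from the CLI as well as the GUI; a minimal sketch, assuming a hypothetical VMID of 100 (PVE's `vga` option accepts a `memory=` suffix in MiB):

```shell
# Assumption: VMID 100. "std" is the usual display type for Windows
# guests when SPICE is not in use; memory= sets the emulated VGA RAM.
qm set 100 --vga std,memory=32

# Verify the setting took effect.
qm config 100 | grep '^vga'
```

The change applies on the next VM start.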
 
Host CPU model? Guest CPU type?
Samsung SSD model?
RAID HW model? With BBU?
 
A: Don't use IDE disks on Windows. Use full VirtIO.

B: What is your IOwait?

C: RAID 1 is never for speed; it's for redundancy. So that's always going to be slow.
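For point B, iowait can be checked without extra packages; a quick sketch that samples `/proc/stat` twice on the PVE host (field 6 of the `cpu` line is time spent waiting on IO):

```shell
# Print total jiffies and iowait jiffies from the aggregate "cpu" line.
read_cpu() { awk '/^cpu /{print $2+$3+$4+$5+$6+$7+$8+$9, $6}' /proc/stat; }

set -- $(read_cpu); total1=$1; wait1=$2
sleep 1
set -- $(read_cpu); total2=$1; wait2=$2

dt=$((total2 - total1)); dw=$((wait2 - wait1))
if [ "$dt" -gt 0 ]; then
  iowait_pct=$((100 * dw / dt))
else
  iowait_pct=0
fi
echo "iowait over 1s: ${iowait_pct}%"
```

A sustained value well above a few percent while the VM feels slow points at storage, not CPU.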
 
A: Don't use IDE disks on Windows. Use full VirtIO.

B: What is your IOwait?

C: RAID 1 is never for speed; it's for redundancy. So that's always going to be slow.
A: Can I switch to VirtIO now, after the disk has already been created?
B: How do I check this information?
C: Is RAID 1 the problem? Would I have to use ZFS?
 
2x Samsung 870 EVO 1 TB SSD
Hardware RAID
2x Intel(R) Xeon(R) CPU E5649
As usual: consumer SSDs should not be used in production systems. They quickly run out of cache and wear out very fast. And hardware RAID is not advisable if you want to use ZFS.
 
As usual: consumer SSDs should not be used in production systems. They quickly run out of cache and wear out very fast. And hardware RAID is not advisable if you want to use ZFS.
So a PVE should not use SSDs for production environments? Should I use HDDs instead?
Shouldn't SSDs be faster?
 
So a PVE should not use SSDs for production environments? Should I use HDDs instead?
Shouldn't SSDs be faster?
Lol, no no… SSDs or NVMe drives are always better than HDDs. You have Samsung EVO models in use. These are typical consumer SSDs which are not designed for heavy loads: they have no power loss protection (PLP), less cache, and die relatively fast. Consider using enterprise-grade SSDs, like the Samsung PM/SM series, Kingston DC series, etc. Besides that, a simple RAID 1 is, as stated earlier in this thread, just a mirror. You'll get better performance with RAID 1+0 (striped mirror). And if your RAID controller is the only connection point for the drives, you should check whether it can run in IT mode (acting as an HBA). Then you could use your drives with ZFS.
 
  • Like
Reactions: _gabriel
+ The hardware RAID controller disables the disks' own write cache, because it has its own battery-backed (BBU) write-cache accelerator; that is also what allows hot-plugging. So consumer SSDs are even slower behind it.
(Using ext4/LVM-thin on one consumer SSD, plus PBS backups once or twice per day to the other one, would be more reliable. Software Linux RAID can be used for the system, but it requires Linux experience; it is not supported by PVE but works as on any Linux distro.)

+ Old CPUs like yours need the mitigations=off kernel option, with mitigations disabled in Windows too, to get regular performance.
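For reference, on a PVE host booting via GRUB the kernel option goes into `/etc/default/grub`; a sketch (weigh the security trade-off first, since this disables Spectre/Meltdown mitigations):

```shell
# In /etc/default/grub, append mitigations=off to the kernel command line:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off"
# WARNING: only do this on trusted, isolated hosts where the performance
# trade-off is acceptable.

# Then regenerate the boot config and reboot:
update-grub          # hosts booting via systemd-boot use proxmox-boot-tool refresh
reboot

# Verify after boot:
grep -o 'mitigations=off' /proc/cmdline
```

The corresponding Windows-side registry changes are a separate step inside the guest.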
 
Last edited:
+ The hardware RAID controller disables the disks' own write cache, because it has its own battery-backed (BBU) write-cache accelerator; that is also what allows hot-plugging. So consumer SSDs are even slower behind it.
(Using ext4/LVM-thin on one consumer SSD, plus PBS backups once or twice per day to the other one, would be more reliable. Software Linux RAID can be used for the system, but it requires Linux experience; it is not supported by PVE but works as on any Linux distro.)

+ Old CPUs like yours need the mitigations=off kernel option, with mitigations disabled in Windows too, to get regular performance.
But then the problem wouldn't be my CPU, since it's working fine there.
And regarding the RAID controller, are you telling me it could be one of the problems? And is there a way to disable that write-cache option on it?
 
+ The hardware RAID controller disables the disks' own write cache, because it has its own battery-backed (BBU) write-cache accelerator; that is also what allows hot-plugging. So consumer SSDs are even slower behind it.
This is a fact, not advice. You can't do anything about it: consumer SSDs aren't for RAID.
 
A: Can I switch to VirtIO now, after the disk has already been created?
B: How do I check this information?
C: Is RAID 1 the problem? Would I have to use ZFS?
I have not seen issues moving between interface types; I have certainly moved between SATA, IDE, and VirtIO. VirtIO is always preferred for Linux VMs, as mentioned in the thread, and if you're building Windows VMs you should make sure to have the drivers available.
Here is a link to the KVM drivers repository if they are not already installed:

https://github.com/virtio-win/kvm-guest-drivers-windows/tree/master
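For Windows guests the usual trick is to boot once with a small dummy VirtIO disk attached so Windows binds the driver, then reassign the boot disk. A CLI sketch, assuming a hypothetical VMID 100 whose system disk is currently ide0 on local-lvm (the volume name vm-100-disk-0 is an assumption; check yours with `qm config`):

```shell
# 1) With the virtio-win drivers installed in the guest, add a temporary
#    1 GiB VirtIO disk so Windows loads the driver on next boot.
qm set 100 --virtio1 local-lvm:1
# (boot Windows once, confirm the new disk appears in Disk Management)

# 2) Shut down, detach the system disk, re-attach it on the virtio bus.
qm shutdown 100
qm set 100 --delete ide0
qm set 100 --virtio0 local-lvm:vm-100-disk-0

# 3) Point the boot order at the reattached disk, drop the dummy, start.
qm set 100 --boot order=virtio0
qm set 100 --delete virtio1
qm start 100
```

If the driver was never loaded first, Windows blue-screens with INACCESSIBLE_BOOT_DEVICE, which is why step 1 matters.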
 
This is a fact, not advice. You can't do anything about it: consumer SSDs aren't for RAID.
At no point did I say it was advice. I'm just asking because I've never used Proxmox before, so I'm learning as I go; I had only used vSphere.
And my client is complaining about slowness on his Windows Server.
So the right thing would be not to use the SSDs in RAID?
Could you guide me on how I should set things up so I don't end up with a slow Windows Server?
 
I have not seen issues moving between interface types; I have certainly moved between SATA, IDE, and VirtIO. VirtIO is always preferred for Linux VMs, as mentioned in the thread, and if you're building Windows VMs you should make sure to have the drivers available.
Here is a link to the KVM drivers repository if they are not already installed:

https://github.com/virtio-win/kvm-guest-drivers-windows/tree/master
The KVM drivers are already installed on my machine.
So can I change my disk interface to VirtIO and will my Windows Server recognize it normally, without errors?
 
Good morning/good afternoon/good evening everyone!

I have a Windows Server 2016 Standard VM here on my PVE, but it is very slow to boot and to load files and programs. I don't believe the processor or memory is the bottleneck, since I have plenty of both and monitoring shows at most 60% usage. Company users connect to this Windows Server locally via TS (Terminal Services/RDP).
The VM is located on: local-lvm (pve).
Below is a screenshot of the VM configuration. The PVE's storage is two 1 TB Samsung SSDs in RAID 1, so one SSD holds the data and the other mirrors it.
What could be causing this slowness, and how can I solve it?
Thanks.
I didn't see anyone ask this (everyone just jumped on the consumer SSDs), but what is your average IO delay as reported by Proxmox?

I am running 6 Windows Server 2022 VMs with Desktop Experience because Microsoft friggin' hates me: 2 file-server shares and 4 domain controllers.

My IO delay is near 2%, and I experienced a similar situation every time until I added a GPU for acceleration. Server 2016/2019/2022 all benefit from GPU acceleration for the desktop environment.

If you don't have a GPU, try Windows Server Core (which does not have the desktop environment). I find it a lot faster/better, but I cannot seem to activate it with a standard license, which is typical Microsoft.
 
I didn't see anyone ask this (everyone just jumped on the consumer SSDs), but what is your average IO delay as reported by Proxmox?

I am running 6 Windows Server 2022 VMs with Desktop Experience because Microsoft friggin' hates me: 2 file-server shares and 4 domain controllers.

My IO delay is near 2%, and I experienced a similar situation every time until I added a GPU for acceleration. Server 2016/2019/2022 all benefit from GPU acceleration for the desktop environment.

If you don't have a GPU, try Windows Server Core (which does not have the desktop environment). I find it a lot faster/better, but I cannot seem to activate it with a standard license, which is typical Microsoft.
EDITED: I mixed topics and missed that the OP uses RDS, so a GPU can help for VDI/RDS.
(RDS can use a GPU hardware encoder, though with the latest CPUs I don't know if it's still needed.)
But I have never seen a GPU in file servers or domain controllers.
 
Last edited:
I didn't see anyone ask this (everyone just jumped on the consumer SSDs), but what is your average IO delay as reported by Proxmox?

I am running 6 Windows Server 2022 VMs with Desktop Experience because Microsoft friggin' hates me: 2 file-server shares and 4 domain controllers.

My IO delay is near 2%, and I experienced a similar situation every time until I added a GPU for acceleration. Server 2016/2019/2022 all benefit from GPU acceleration for the desktop environment.

If you don't have a GPU, try Windows Server Core (which does not have the desktop environment). I find it a lot faster/better, but I cannot seem to activate it with a standard license, which is typical Microsoft.
How do I see this average IO delay in Proxmox, please?

The idea of Windows Server Core is very good, but it's unfeasible in my environment because users access the server via RDP.
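In the GUI, the figure people quote is shown under the node's Summary page as "IO delay". From the shell, `iostat` from the sysstat package gives a per-device view; a sketch:

```shell
# On the PVE host (Debian-based), install sysstat if needed:
apt-get install -y sysstat

# Extended device stats, 1-second interval, 5 samples. Watch the host-wide
# %iowait column and the per-disk %util column: sustained %util near 100
# on the RAID volume means the SSDs (or controller) are the bottleneck.
iostat -x 1 5
```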
 
Perhaps a GPU can help in special cases for VDI/RDS, but I have never seen one in file servers or domain controllers.
I've also never seen GPUs in file servers or domain controllers. But do you think it would be a good idea to mitigate the slowness?
 
