Windows 11 Very Low Performance

Patryk803

Member
May 25, 2022
I've just installed Proxmox on my mini PC. It's weird: according to the Proxmox dashboard the CPU usage is very low, around 1%, but RAM usage is around 18 GB with just Windows running. I allocated 20 GB of RAM thinking that would be overkill, but it seems it's not.

Proxmox host:
CPU: i7-1165G7
RAM: 64GB
Storage: RAID 1 (ZFS)

Windows 11 VM: (see attached screenshot)


Config:
Code:
root@proxmox:~# qm config 101
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LC_TERMINAL = "iTerm2",
LC_CTYPE = "UTF-8",
LANG = "pl_PL.UTF-8"
are supported and installed on your system.
perl: warning: Falling back to a fallback locale ("pl_PL.UTF-8").
agent: 1
bios: ovmf
boot: order=scsi0;ide2;net0;ide0
cores: 8
efidisk0: local-zfs:vm-101-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
ide0: local:iso/virtio-win-0.1.221.iso,media=cdrom,size=519030K
ide2: local:iso/Win11_Polish_x64v1.iso,media=cdrom,size=5332826K
machine: pc-q35-6.2
memory: 20480
meta: creation-qemu=6.2.0,ctime=1659462817
name: windows
net0: virtio=06:0E:8A:4F:F6:B6,bridge=vmbr0,firewall=1
numa: 0
ostype: win11
scsi0: local-zfs:vm-101-disk-1,size=300G
scsihw: virtio-scsi-pci
smbios1: uuid=bb72fafe-d117-45e2-b8aa-9fc76442af63
sockets: 1
tpmstate0: local-zfs:vm-101-disk-2,size=4M,version=v2.0
vmgenid: 7e5a0975-bae9-4985-8afe-2a23c05d2ce8

Would appreciate any hints on how to troubleshoot this.
 
You may want to change your display (GPU) setting to VirtIO-GPU, or VirGL-GPU (for 3D acceleration). A poor display experience will make the whole VM seem slow and sluggish.
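If you want to try that from the CLI, a sketch (101 is the VMID from your config above; virtio-gl needs PVE 7.1 or newer and, if I remember right, the host OpenGL libraries installed):
Code:
# Switch the virtual display of VM 101 to VirtIO-GPU
qm set 101 --vga virtio
# or VirGL for 3D acceleration (needs OpenGL libraries on the host)
qm set 101 --vga virtio-gl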
 
You may want to change your display (GPU) setting to VirtIO-GPU, or VirGL-GPU (for 3D acceleration). A poor display experience will make the whole VM seem slow and sluggish.
That doesn't explain why the RAM usage is so high while the CPU usage is so low. I've tried that, but it didn't change anything.
 
You said "raid 1 zfs" with "64 GB RAM" so ZFS will by default use up to 32GB of RAM. Doesn't have to be the Win11 VM. Or did you check that the Win11 VM is using that RAM? You could for example do that with htop (you have to install it first) and then have a look at the "RES" of the KVM process that got the VMID of your Win11 VM.

And without GPU acceleration, so without PCI passthrough of a GPU, Windows will always feel very slow, as everything has to be rendered in software by the CPU.
 
You said "raid 1 zfs" with "64 GB RAM" so ZFS will by default use up to 32GB of RAM. Doesn't have to be the Win11 VM. Or did you check that the Win11 VM is using that RAM? You could for example do that with htop (you have to install it first) and then have a look at the "RES" of the KVM process that got the VMID of your Win11 VM.

And without GPU acceleration, so without PCI passthrough of a GPU, Windows will always feel very slow, as everthing will have to be rendered in software by the CPU.
Why would it use up to 32 GB of RAM by default? That's not true. When my Windows VM is not running, I can see that Proxmox itself doesn't consume much RAM.
 
You can run arc_summary to see how much RAM ZFS is using at the moment and what it is allowed to use. Have a look at the ARC sizes.
By default it should dynamically use something between 1 and 32 GB RAM.
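If you want to check or cap that limit, a sketch (these are the standard OpenZFS module parameters; the value is in bytes, 8 GiB shown as an example):
Code:
# Current ARC limit in bytes (0 = default, i.e. half the installed RAM)
cat /sys/module/zfs/parameters/zfs_arc_max
# Cap the ARC at 8 GiB right away (8 * 1024^3 bytes; lost on reboot)
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
# Persist it across reboots (note: overwrites an existing zfs.conf)
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
update-initramfs -u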
 
You can run arc_summary to see how much RAM ZFS is using at the moment and what it is allowed to use. Have a look at the ARC sizes.
By default it should dynamically use something between 1 and 32 GB RAM.
Code:
root@proxmox:~# arc_summary

------------------------------------------------------------------------
ZFS Subsystem Report                            Wed Aug 03 10:47:13 2022
Linux 5.15.30-2-pve                                           2.1.4-pve1
Machine: proxmox (x86_64)                                     2.1.4-pve1

ARC status:                                                      HEALTHY
        Memory throttle count:                                         0

ARC size (current):                                    94.9 %   29.7 GiB
        Target size (adaptive):                       100.0 %   31.3 GiB
        Min size (hard limit):                          6.2 %    2.0 GiB
        Max size (high water):                           16:1   31.3 GiB
        Most Frequently Used (MFU) cache size:         54.8 %   15.8 GiB
        Most Recently Used (MRU) cache size:           45.2 %   13.0 GiB
        Metadata cache size (hard limit):              75.0 %   23.5 GiB
        Metadata cache size (current):                  4.6 %    1.1 GiB
        Dnode cache size (hard limit):                 10.0 %    2.3 GiB
        Dnode cache size (current):                     1.4 %   34.1 MiB

ARC hash breakdown:
        Elements max:                                               2.8M
        Elements current:                             100.0 %       2.8M
        Collisions:                                               787.5k
        Chain max:                                                     7
        Chains:                                                   381.7k

ARC misc:
        Deleted:                                                      22
        Mutex misses:                                                 11
        Eviction skips:                                                1
        Eviction skips due to L2 writes:                               0
        L2 cached evictions:                                     0 Bytes
        L2 eligible evictions:                                   3.9 GiB
        L2 eligible MFU evictions:                    100.0 %    3.9 GiB
        L2 eligible MRU evictions:                    < 0.1 %  391.5 KiB
        L2 ineligible evictions:                                 2.5 MiB

ARC total accesses (hits + misses):                                19.0M
        Cache hit ratio:                               98.3 %      18.7M
        Cache miss ratio:                               1.7 %     314.8k
        Actual hit ratio (MFU + MRU hits):             98.3 %      18.7M
        Data demand efficiency:                        97.4 %       9.5M
        Data prefetch efficiency:                      11.4 %      73.1k

Cache hits by cache type:
        Most frequently used (MFU):                    77.5 %      14.5M
        Most recently used (MRU):                      22.4 %       4.2M
        Most frequently used (MFU) ghost:               0.1 %      18.1k
        Most recently used (MRU) ghost:                 0.0 %          0

Cache hits by data type:
        Demand data:                                   49.6 %       9.3M
        Demand prefetch data:                         < 0.1 %       8.3k
        Demand metadata:                               50.3 %       9.4M
        Demand prefetch metadata:                       0.1 %      13.4k

Cache misses by data type:
        Demand data:                                   77.5 %     243.8k
        Demand prefetch data:                          20.6 %      64.7k
        Demand metadata:                                1.5 %       4.8k
        Demand prefetch metadata:                       0.4 %       1.4k

DMU prefetch efficiency:                                            2.7M
        Hit ratio:                                      4.2 %     113.2k
        Miss ratio:                                    95.8 %       2.6M

VDEV cache disabled, skipping section

ZIL committed transactions:                                         2.1M
        Commit requests:                                           34.5k
        Flushes to stable storage:                                 34.4k
        Transactions to SLOG storage pool:            0 Bytes          0
        Transactions to non-SLOG storage pool:       44.2 GiB     398.7k
 
Yes, I will change it to 8 GB, but the strange thing is I still have around 20 GB of RAM free and my Windows 11 machine is very slow...
 
A bigger ARC should lower the IO delay, as less data has to be read from the disks when it is already cached in RAM.

I guess you are using HDDs? You should try a ZFS mirror of enterprise SSDs. Such a high IO delay can easily slow down all your guests.
HDDs are terrible as VM/LXC storage because of their bad IOPS performance.

High IO delay basically means your CPU can't do its job because it has to spend most of its time waiting for the disks to send/receive data.
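You can watch this on the host; a sketch (iostat comes from the sysstat package, which isn't installed by default):
Code:
# "wa" column = percentage of CPU time stalled waiting for IO
vmstat 1 5
# Per-disk utilisation and latency, refreshed every second
apt install sysstat
iostat -dx 1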
 
I see. I'm using SSDs: 1. an M.2 PCIe drive, 2. a Crucial MX500 SATA III. Are they not good enough? They're not the cheapest SSDs.
 
In general, consumer SSDs aren't recommended for server workloads or ZFS. Enterprise SSDs easily cost 3 to 10 times more.
But 50% IO delay is still too high for TLC SSDs.
 
Thanks, you helped me a lot. Is there any other way to do RAID without ZFS on something like this? Should I use Btrfs then? (see attached image)
 
The Crucial P2 can use QLC or TLC NAND. In case it's QLC NAND, it might be even slower than the MX500 (at least for writes).
 
Hmm. In that case I think I will need to give up on RAID 1, use the M.2 as the one and only SSD, and then do backups to my Synology NAS. Can you think of any better solution than this?
 
You should always do backups, no matter if you use RAID 1 or not. But yes, I really would recommend using RAID 1/mirroring so you get less downtime and less work setting everything up again when your SSDs fail. And they will fail sooner or later; the question is how fast. SSDs are consumables that wear with every write, like the tires on a car that wear with each mile you drive. So best to plan accordingly and always have some recent backups, and best case also some parity, so a failing SSD won't affect you.

First you should back up all your config and guests to your NAS. Then you can wipe the SSDs and benchmark each of them with fio to see which disk has the best latency/throughput/IOPS performance. Then you could try a PVE installation with LVM-Thin on the single faster disk, restore your guests from the NAS and see if that is enough to bring the IO delay down. If it doesn't, you might want to buy an enterprise SSD. Something like a SEDC500M/480G or MTFDHBA480TDF-1AW1ZABYY isn't that expensive.
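A sketch of both steps (vzdump and fio are standard tools on PVE; the storage name "nas" and the device path are placeholders you'd adjust to your setup):
Code:
# Back up guest 101 to a storage named "nas" (zstd-compressed snapshot)
vzdump 101 --storage nas --mode snapshot --compress zstd
# 4k random-write benchmark of a wiped disk - DESTRUCTIVE, double-check the path!
fio --name=randwrite --filename=/dev/nvme0n1 --rw=randwrite --bs=4k \
    --iodepth=32 --direct=1 --ioengine=libaio --runtime=60 --time_based \
    --group_reporting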
 
First you should back up all your config and guests to your NAS. Then you can wipe the SSDs and benchmark each of them with fio to see which disk has the best latency/throughput/IOPS performance. Then you could try a PVE installation with LVM-Thin on the single faster disk, restore your guests from the NAS and see if that is enough to bring the IO delay down. If it doesn't, you might want to buy an enterprise SSD. Something like a SEDC500M/480G or MTFDHBA480TDF-1AW1ZABYY isn't that expensive.
That's good advice. I would also avoid over-allocating RAM and CPU resources; it can often be counter-productive.

That hardware is never going to offer great VM performance, but see how your Windows VM runs with 2 vCPUs and 8 GB of RAM and then go from there in small steps.
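For example (a sketch, assuming the VM is still ID 101; the VM needs a restart for this to take effect):
Code:
# Shrink VM 101 to 2 cores and 8 GiB of RAM for testing
qm set 101 --cores 2 --memory 8192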
 
You should always do backups, no matter if you use RAID 1 or not. But yes, I really would recommend using RAID 1/mirroring so you get less downtime and less work setting everything up again when your SSDs fail. And they will fail sooner or later; the question is how fast. SSDs are consumables that wear with every write, like the tires on a car that wear with each mile you drive. So best to plan accordingly and always have some recent backups, and best case also some parity, so a failing SSD won't affect you.

First you should back up all your config and guests to your NAS. Then you can wipe the SSDs and benchmark each of them with fio to see which disk has the best latency/throughput/IOPS performance. Then you could try a PVE installation with LVM-Thin on the single faster disk, restore your guests from the NAS and see if that is enough to bring the IO delay down. If it doesn't, you might want to buy an enterprise SSD. Something like a SEDC500M/480G or MTFDHBA480TDF-1AW1ZABYY isn't that expensive.
Isn't RAID 1 sort of a backup itself? It's a rare condition to have two SSDs failing at the same time. Could you also recommend an enterprise-level NVMe M.2 SSD?
 
