VM problem: freezes randomly

renzob80

Member
Feb 26, 2021
Hello everyone. I have a problem with two virtualized servers running Proxmox, and I am relatively new to working with this system. The Windows Server 2022 virtual machine freezes for a while and then works again. If anyone has any pointers on this problem I would greatly appreciate it. Both Proxmox and the Windows Server are currently up to date.
 
the Windows Server 2022 virtual machine freezes for a while and then works again
  • What does that look like exactly? Is it just the desktop that freezes?
  • Is the VM still accessible via ping?
  • And does it work again by itself after some time, without any action?

Please give me the output of pveversion -v and also post your VM config of your Windows Server 2022.

Code:
qm config <vmid>
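For completeness, a minimal sketch of collecting both outputs in a root shell on the Proxmox host (the VM ID of 100 is an assumption; replace it with your actual ID):

Code:
# package versions of the Proxmox host
pveversion -v
# configuration of the affected VM (VM ID 100 assumed)
qm config 100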
 
Thank you very much Fireon, your response is appreciated.

  • Not only the desktop freezes; the entire virtual machine does.
  • No, when the virtual machine freezes, the ping is lost.
  • Yes, it starts working again by itself, without any action.
  • I have also noticed that when the virtual machine freezes, the fans on my server get louder.

VM config:

agent: 1
balloon: 0
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 40
cpu: host
efidisk0: local-zfs:vm-100-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
ide2: none,media=cdrom
machine: pc-i440fx-8.1
memory: 102400
meta: creation-qemu=8.1.5,ctime=1713577297
name: win2022
net0: virtio=BC:24:11:90:24:12,bridge=vmbr0,firewall=1
numa: 0
ostype: win11
scsi0: local-zfs:vm-100-disk-1,cache=none,iothread=1,size=2000G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=944afad1-e32a-4654-af80-5ceb5c33b1c2
sockets: 2
unused0: local-zfs:vm-100-disk-2
vga: virtio
vmgenid: 5087aff3-15a0-479e-b000-6a57666c965e



pveversion -v:

proxmox-ve: 8.1.0 (running kernel: 6.5.11-8-pve)
pve-manager: 8.1.4 (running version: 8.1.4/ec5affc9e41f1d79)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.5: 6.5.11-8
proxmox-kernel-6.5.11-8-pve-signed: 6.5.11-8
ceph-fuse: 17.2.7-pve2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.0.7
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.0.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve4
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.4-1
proxmox-backup-file-restore: 3.1.4-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.4
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: 4.2023.08-3
pve-firewall: 5.0.3
pve-firmware: 3.9-1
pve-ha-manager: 4.0.3
pve-i18n: 3.2.0
pve-qemu-kvm: 8.1.5-2
pve-xtermjs: 5.3.0-3
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.2-pve1
 
cores: 40
sockets: 2
What CPU(s) does your Proxmox host have? Are you trying to give a single VM all of the threads? If you really do have two sockets, you probably want to enable the NUMA setting.
Committing (almost) all of any resource to a single VM causes lag and high latency. Try running it with a more modest number of cores, like 4, and it might run a lot smoother.
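A minimal sketch of applying that from the host shell, assuming the VM ID is 100 as the disk names in the config suggest (the change takes effect on the next VM start):

Code:
# 2 sockets x 2 cores = 4 vCPUs total, with the NUMA topology exposed to the guest
qm set 100 --sockets 2 --cores 2 --numa 1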

proxmox-ve: 8.1.0 (running kernel: 6.5.11-8-pve)
pve-manager: 8.1.4 (running version: 8.1.4/ec5affc9e41f1d79)
Maybe update your Proxmox to the latest version, as bugs are fixed in newer releases.
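A minimal sketch of updating from the shell, assuming the package repositories (enterprise or no-subscription) are already configured:

Code:
# refresh package lists and upgrade all Proxmox packages
apt update
apt dist-upgrade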
 
It has two processors with 20 cores each.

Those resources were requested as a requirement for a system they are installing. I also think it is a lot for one system. In any case, I will lower the number of cores and try again. Thank you very much.
 
It has two processors with 20 cores each.
That's 80 threads (and probably two NUMA nodes), and you leave nothing for other VMs or for Proxmox itself, which still needs to do graphs, device emulation, virtual networking, etc. That's why your VM cannot always be scheduled (and appears frozen); other work, like writing logs to disk, needs to be done as well.
Those resources were requested as a requirement for a system they are installing. I also think it is a lot for one system. In any case, I will lower the number of cores and try again. Thank you very much.
If you want to run VMs with 80 virtual cores, use a host system with something like 320 threads; then you can run several such VMs, since most of them are idle most of the time. "They" clearly do not seem to know what "they" are asking for or how virtualization works.
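One hedged way to confirm this on the host while the guest appears frozen is the kernel's pressure-stall information (available on recent Proxmox kernels):

Code:
# non-zero "some"/"full" averages mean tasks are waiting for CPU time
cat /proc/pressure/cpu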
 
The total number of cores is 88; I left 8 cores for basic processes. Even so, with the change I made, going down to 20 cores and enabling NUMA (which I thought was only for PCI passthrough), it seems to work well. Now I will test it more thoroughly and hopefully it is solved. Thank you very much for your contribution, leekteken, I hope to return the help to someone else who has a problem.
 
It has two processors with 20 cores each.
The total number of cores is 88; I left 8 cores for basic processes
Two CPUs with 20 cores each is only 40 (real) cores. With hyper-threading/SMT that is 80 threads (each extra thread counting for roughly 5-25% of a real core). How did you get to 88 cores?
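A quick sketch to verify the real topology on the host (lscpu ships with util-linux on Proxmox):

Code:
# total logical CPUs, threads per core, cores per socket and sockets
lscpu | grep -E '^CPU\(s\)|Thread\(s\) per core|Core\(s\) per socket|Socket\(s\)'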

with the change I made, going down to 20 cores and enabling NUMA (which I thought was only for PCI passthrough), it seems to work well. Now I will test it more thoroughly and hopefully it is solved.
The NUMA setting can be important: each of the two CPUs has half of the memory, and Proxmox will allocate half of the VM's memory on each CPU. Accessing the other half takes longer, and passing this topology on to the VM (even without passthrough) improves the scheduling of processes inside the VM.
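To inspect that topology on the host, a hedged sketch (numactl is typically not installed by default, so it may need to be added first):

Code:
# show the NUMA nodes with their CPUs and memory sizes
apt install numactl
numactl --hardware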
Thank you for reporting back, and glad to hear that it was indeed most likely over-committing of virtual cores.