Random Windows Server 2019 shutdown, Event ID 41

Oasisnet77

Member
Dec 30, 2021
Hi all,
I have a Proxmox VE host with 4 VMs running: a Windows Server 2019, a Windows 10, and 2 Linux (Debian).
Sometimes the Windows Server alone shuts down, and its Event Viewer shows error ID 41 (Kernel-Power).
The VM has the latest VirtIO drivers installed, and there is no shortage of space on the disks or any other obvious problem; no backup or snapshot was running at the time.
It can happen once a week, or once every 20 days, and sometimes twice a day.

Any ideas?

Thanks a lot,
kind regards
 
Hi,
thanks for your reply.
I'm attaching the VM config and pveversion output here.

In journalctl I found these logs... could it be a memory overload? (A way to double-check this is sketched after the outputs below.)

Dec 30 10:18:42 pve kernel: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/qemu.slice/104.scope,task=kvm,pid=16238,uid=0
Dec 30 10:18:42 pve kernel: Out of memory: Killed process 16238 (kvm) total-vm:17751844kB, anon-rss:16801708kB, file-rss:6808kB, shmem-rss:4kB, UID:0 pgtables:33480kB oom_score_adj:0
Dec 30 10:18:43 pve kernel: oom_reaper: reaped process 16238 (kvm), now anon-rss:0kB, file-rss:36kB, shmem-rss:4kB
Dec 30 10:18:44 pve kernel: fwbr104i0: port 2(tap104i0) entered disabled state
Dec 30 10:18:44 pve kernel: fwbr104i0: port 2(tap104i0) entered disabled state
Dec 30 10:18:44 pve systemd[1]: 104.scope: Succeeded.
Dec 30 10:18:44 pve qmeventd[3560]: Starting cleanup for 104
Dec 30 10:18:44 pve kernel: fwbr104i0: port 1(fwln104i0) entered disabled state
Dec 30 10:18:44 pve kernel: fwbr104i0: topology change detected, propagating
Dec 30 10:18:44 pve kernel: vmbr0: port 4(fwpr104p0) entered disabled state
Dec 30 10:18:44 pve kernel: device fwln104i0 left promiscuous mode
Dec 30 10:18:44 pve kernel: fwbr104i0: port 1(fwln104i0) entered disabled state
Dec 30 10:18:44 pve kernel: device fwpr104p0 left promiscuous mode
Dec 30 10:18:44 pve kernel: vmbr0: port 4(fwpr104p0) entered disabled state
Dec 30 10:18:45 pve qmeventd[3560]: Finished cleanup for 104
-----------------------------------------------------------------------
agent: 0
balloon: 0
boot: cdn
bootdisk: virtio1
cores: 4
cpu: host
hotplug: disk,network,usb,memory,cpu
ide2: none,media=cdrom
memory: 16384
name: win2019-easylex
net0: e1000=02:BA:55:8A:1F:F6,bridge=vmbr0,firewall=1
numa: 1
ostype: win10
parent: autodaily211230220008
scsihw: virtio-scsi-pci
smbios1: uuid=535ec99c-27b5-4f4b-87ff-f184ba52c198
sockets: 1
unused0: datapool:vm-104-disk-0
virtio0: datapool:vm-104-disk-1,size=300G
virtio1: local-zfs:vm-104-disk-0,size=50G
vmgenid: 8a429c6d-1ac9-4495-8881-9d492b2377a1

--------------------------------------------------------------------------------------------

proxmox-ve: 6.4-1 (running kernel: 5.4.128-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
pve-kernel-5.4: 6.4-5
pve-kernel-helper: 6.4-5
pve-kernel-5.4.128-1-pve: 5.4.128-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-3
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.12-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-1
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.2-4
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1

----------------------------
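For reference, a quick way to confirm whether the host itself ran out of memory (plain shell on the PVE node; the grep pattern is only an example):

journalctl -k | grep -i -E "out of memory|oom-kill"   # kernel OOM events from the current boot
free -h                                               # current host memory and swap headroom

If the kvm process of VM 104 shows up in the OOM lines around the time of each unexpected shutdown, the Windows Event ID 41 is just the guest's view of being killed from outside.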
 
Hi,
thank you for the output!
Dec 30 10:18:42 pve kernel: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/qemu.slice/104.scope,task=kvm,pid=16238,uid=0

oom-kill: the host ran out of memory, so the kernel killed the VM's kvm process. Note that anon-rss:16801708kB is roughly 16 GiB, which matches the VM's full memory: 16384 allocation; with balloon: 0 the host can never reclaim any of that. I would try enabling the balloon in that VM, or otherwise give it less RAM.
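A minimal sketch of both options, assuming VMID 104 (the 8 GiB floor and 12 GiB size are just example values to adjust for your workload):

qm set 104 --balloon 8192     # keep memory: 16384 as the maximum, let the host reclaim down to 8 GiB
qm set 104 --memory 12288     # or keep ballooning off and shrink the fixed allocation instead

Depending on your hotplug settings, a full stop/start of the VM may be needed before the new values take effect. Also make sure the balloon driver from the VirtIO ISO is installed in the guest, otherwise ballooning will not do anything.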
 
