[SOLVED] Windows VM stops and starts every 5 minutes

dabo53 (New Member, Jun 15, 2024)
Hello,
I have a 2-node cluster in my homelab.
Code:
proxmox-ve: 8.2.0 (running kernel: 6.8.4-2-pve)
pve-manager: 8.2.2 (running version: 8.2.2/9355359cd7afbae4)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.4-2
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
ceph-fuse: 17.2.7-pve3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
intel-microcode: 3.20240531.1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.6
libpve-cluster-perl: 8.0.6
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.1
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.2.1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.0-1
proxmox-backup-file-restore: 3.2.0-1
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.6
proxmox-widget-toolkit: 4.2.1
pve-cluster: 8.0.6
pve-container: 5.0.10
pve-docs: 8.2.1
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.0
pve-firewall: 5.0.5
pve-firmware: 3.11-1
pve-ha-manager: 4.0.4
pve-i18n: 3.2.2
pve-qemu-kvm: 8.1.5-5
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.3-pve2

And a recently installed Windows 11 VM (pc-q35-8.1), which stops and starts every 5 minutes.

Here are the system logs I could find:
Code:
Jul 01 09:35:48 pve-node2 qm[1059849]: <root@pam> starting task UPID:pve-node2:00102C0A:00C0E874:66825C54:qmstop:101:root@pam:
Jul 01 09:35:48 pve-node2 qm[1059850]: stop VM 101: UPID:pve-node2:00102C0A:00C0E874:66825C54:qmstop:101:root@pam:
Jul 01 09:35:48 pve-node2 kernel:  zd32: p1 p2 p3 p4
Jul 01 09:35:48 pve-node2 kernel: tap101i0: left allmulticast mode
Jul 01 09:35:48 pve-node2 kernel: fwbr101i0: port 2(tap101i0) entered disabled state
Jul 01 09:35:48 pve-node2 kernel: fwbr101i0: port 1(fwln101i0) entered disabled state
Jul 01 09:35:48 pve-node2 kernel: vmbr0: port 5(fwpr101p0) entered disabled state
Jul 01 09:35:48 pve-node2 kernel: fwln101i0 (unregistering): left allmulticast mode
Jul 01 09:35:48 pve-node2 kernel: fwln101i0 (unregistering): left promiscuous mode
Jul 01 09:35:48 pve-node2 kernel: fwbr101i0: port 1(fwln101i0) entered disabled state
Jul 01 09:35:48 pve-node2 kernel: fwpr101p0 (unregistering): left allmulticast mode
Jul 01 09:35:48 pve-node2 kernel: fwpr101p0 (unregistering): left promiscuous mode
Jul 01 09:35:48 pve-node2 kernel: vmbr0: port 5(fwpr101p0) entered disabled state
Jul 01 09:35:48 pve-node2 qmeventd[1137]: read: Connection reset by peer
Jul 01 09:35:48 pve-node2 qm[1059849]: <root@pam> end task UPID:pve-node2:00102C0A:00C0E874:66825C54:qmstop:101:root@pam: OK
Jul 01 09:35:48 pve-node2 systemd[1]: 101.scope: Deactivated successfully.
Jul 01 09:35:48 pve-node2 systemd[1]: 101.scope: Consumed 49.962s CPU time.
Jul 01 09:35:49 pve-node2 qmeventd[1059904]: Starting cleanup for 101
Jul 01 09:35:49 pve-node2 qmeventd[1059904]: Finished cleanup for 101
Jul 01 09:35:54 pve-node2 qm[1059992]: <root@pam> starting task UPID:pve-node2:00102C9C:00C0EAD0:66825C5A:qmstart:101:root@pam:
Jul 01 09:35:54 pve-node2 qm[1059996]: start VM 101: UPID:pve-node2:00102C9C:00C0EAD0:66825C5A:qmstart:101:root@pam:
Jul 01 09:35:54 pve-node2 systemd[1]: Started 101.scope.
Jul 01 09:35:55 pve-node2 kernel: tap101i0: entered promiscuous mode
Jul 01 09:35:55 pve-node2 kernel: vmbr0: port 5(fwpr101p0) entered blocking state
Jul 01 09:35:55 pve-node2 kernel: vmbr0: port 5(fwpr101p0) entered disabled state
Jul 01 09:35:55 pve-node2 kernel: fwpr101p0: entered allmulticast mode
Jul 01 09:35:55 pve-node2 kernel: fwpr101p0: entered promiscuous mode
Jul 01 09:35:55 pve-node2 kernel: vmbr0: port 5(fwpr101p0) entered blocking state
Jul 01 09:35:55 pve-node2 kernel: vmbr0: port 5(fwpr101p0) entered forwarding state
Jul 01 09:35:55 pve-node2 kernel: fwbr101i0: port 1(fwln101i0) entered blocking state
Jul 01 09:35:55 pve-node2 kernel: fwbr101i0: port 1(fwln101i0) entered disabled state
Jul 01 09:35:55 pve-node2 kernel: fwln101i0: entered allmulticast mode
Jul 01 09:35:55 pve-node2 kernel: fwln101i0: entered promiscuous mode
Jul 01 09:35:55 pve-node2 kernel: fwbr101i0: port 1(fwln101i0) entered blocking state
Jul 01 09:35:55 pve-node2 kernel: fwbr101i0: port 1(fwln101i0) entered forwarding state
Jul 01 09:35:55 pve-node2 kernel: fwbr101i0: port 2(tap101i0) entered blocking state
Jul 01 09:35:55 pve-node2 kernel: fwbr101i0: port 2(tap101i0) entered disabled state
Jul 01 09:35:55 pve-node2 kernel: tap101i0: entered allmulticast mode
Jul 01 09:35:55 pve-node2 kernel: fwbr101i0: port 2(tap101i0) entered blocking state
Jul 01 09:35:55 pve-node2 kernel: fwbr101i0: port 2(tap101i0) entered forwarding state
Jul 01 09:35:55 pve-node2 qm[1059992]: <root@pam> end task UPID:pve-node2:00102C9C:00C0EAD0:66825C5A:qmstart:101:root@pam: OK

- I migrated the VM from one node to the other; the issue is the same
- I manually shut down the VM, but it starts again after 5 minutes, and then stops again 5 minutes later
- The VM does not use PCIe passthrough
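Since the journal shows the stop/start pairs as explicit `root@pam` tasks, a quick way to quantify the pattern is to count the `qmstop`/`qmstart` task lines. A minimal sketch (the sample lines are copied from the excerpt above; on a live node you would filter `journalctl` output instead of a saved file):

```shell
#!/bin/sh
# Count explicit qmstop/qmstart tasks for VM 101 in a journal excerpt.
# Matching, periodic counts point to a scheduled job (cron, systemd timer,
# or a monitoring script) rather than a crashing guest.
cat > /tmp/vm101.log <<'EOF'
Jul 01 09:35:48 pve-node2 qm[1059850]: stop VM 101: UPID:pve-node2:00102C0A:00C0E874:66825C54:qmstop:101:root@pam:
Jul 01 09:35:54 pve-node2 qm[1059996]: start VM 101: UPID:pve-node2:00102C9C:00C0EAD0:66825C5A:qmstart:101:root@pam:
EOF
# On a real node: journalctl --since today | grep -E 'qm(stop|start):101'
echo "stops:  $(grep -c 'qmstop:101'  /tmp/vm101.log)"
echo "starts: $(grep -c 'qmstart:101' /tmp/vm101.log)"
```

A crash would instead show the QEMU process dying without a preceding `qmstop` task line.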

I am fairly new to Proxmox and I am not sure where to start troubleshooting.

Can someone please help me?
 
Hi,

Code:
Jul 01 09:35:48 pve-node2 qm[1059849]: <root@pam> starting task UPID:pve-node2:00102C0A:00C0E874:66825C54:qmstop:101:root@pam:
Jul 01 09:35:48 pve-node2 qm[1059850]: stop VM 101: UPID:pve-node2:00102C0A:00C0E874:66825C54:qmstop:101:root@pam:
These messages mean that the VM is being explicitly stopped and started (as `root@pam` tasks), not crashing or anything like that.
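The UPID in those lines also tells you when the task was started: as far as I know, its layout is `UPID:<node>:<pid>:<pstart>:<starttime>:<type>:<id>:<user>:`, with the numeric fields in hex. A small sketch decoding the `qmstop` task from the log above (field layout assumed; verify against your PVE version):

```shell
#!/bin/sh
# Decode the start time embedded in a Proxmox task UPID.
# Assumed layout: UPID:<node>:<pid>:<pstart>:<starttime>:<type>:<id>:<user>:
upid='UPID:pve-node2:00102C0A:00C0E874:66825C54:qmstop:101:root@pam:'
start_hex=$(echo "$upid" | cut -d: -f5)       # 66825C54
start_epoch=$(printf '%d' "0x$start_hex")     # hex -> decimal epoch seconds
task_type=$(echo "$upid" | cut -d: -f6)       # qmstop
echo "$task_type started $(date -u -d "@$start_epoch" '+%Y-%m-%d %H:%M:%S') UTC"
```

For this UPID that decodes to 2024-07-01 07:35:48 UTC, which matches the 09:35:48 local timestamp in the journal.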

This sounds very similar to https://forum.proxmox.com/threads/v...ng-for-no-apparent-reason.149489/#post-677054
Did you set up monitoring or run some random script off the internet?
 
Thank you for your quick help, and I feel stupid now :(

Yes it was exactly this! Thanks a lot :)

Now I wonder why the "Monitor-All" script does not detect the QEMU guest agent, which is installed and running in the Win11 VM.
 
Great! Please just mark the thread as SOLVED by editing the first post, there should be a dropdown near the title field :)

Yes it was exactly this! Thanks a lot
Running random scripts on your machines without knowing exactly what they do is never a good idea, for obvious reasons.
There are regular reports on the forum from people who ran something they didn't understand and mangled their host configuration and/or lost data.

Now I wonder why the "Monitor-All" script does not detect the QEMU guest agent, which is installed and running in the Win11 VM.
You could report it upstream to the script's authors and see if they can figure something out, I'd say.
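For what it's worth, the agent side can be checked by hand on the node: the VM config should contain `agent: 1`, and `qm agent <vmid> ping` should return without error while the VM is running. A self-contained sketch of the config check (sample config inlined here; on a real node the file is `/etc/pve/qemu-server/101.conf`):

```shell
#!/bin/sh
# Check whether the QEMU guest agent option is enabled in a VM config.
# Sample config inlined; on a node, read /etc/pve/qemu-server/<vmid>.conf.
cat > /tmp/101.conf <<'EOF'
agent: 1
machine: pc-q35-8.1
ostype: win11
EOF
if grep -q '^agent:.*1' /tmp/101.conf; then
  echo "agent option enabled"
else
  echo "agent option missing: enable it under VM Options -> QEMU Guest Agent"
fi
# While the VM is running, probe the agent itself from the host:
#   qm agent 101 ping && echo "guest agent responded"
```

If the ping works but the script still reports no agent, the problem is likely in how the script queries the agent, which is exactly what an upstream report would help sort out.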
 
