VM resets within 5 minutes

dasaint80

New Member
Jun 22, 2023
I'm running Proxmox 8.0.4 on a Mac Pro (2013 "trash can").
Containers run great.
But when I try to start a VM, no matter if it's Debian or Ubuntu (server or desktop), the VM resets itself within 5 minutes.

pveversion -v

Code:
proxmox-ve: 8.0.2 (running kernel: 6.2.16-12-pve)
pve-manager: 8.0.4 (running version: 8.0.4/d258a813cfa6b390)
pve-kernel-6.2: 8.0.5
proxmox-kernel-helper: 8.0.3
pve-kernel-5.15: 7.4-4
proxmox-kernel-6.2.16-12-pve: 6.2.16-12
proxmox-kernel-6.2: 6.2.16-12
proxmox-kernel-6.2.16-10-pve: 6.2.16-10
proxmox-kernel-6.2.16-8-pve: 6.2.16-8
pve-kernel-6.2.16-5-pve: 6.2.16-6
pve-kernel-6.2.16-3-pve: 6.2.16-3
pve-kernel-5.15.108-1-pve: 5.15.108-1
ceph-fuse: 17.2.6-pve1+3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx4
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.25-pve1
libproxmox-acme-perl: 1.4.6
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.5
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.8
libpve-guest-common-perl: 5.0.4
libpve-http-server-perl: 5.0.4
libpve-rs-perl: 0.8.5
libpve-storage-perl: 8.0.2
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
proxmox-backup-client: 3.0.2-1
proxmox-backup-file-restore: 3.0.2-1
proxmox-kernel-helper: 8.0.3
proxmox-mail-forward: 0.2.0
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.0.6
pve-cluster: 8.0.3
pve-container: 5.0.4
pve-docs: 8.0.4
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.3
pve-firmware: 3.8-2
pve-ha-manager: 4.0.2
pve-i18n: 3.0.5
pve-qemu-kvm: 8.0.2-5
pve-xtermjs: 4.16.0-3
qemu-server: 8.0.7
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.1.12-pve1

qm config 201
Code:
boot: order=scsi0;ide2;net0
cores: 4
cpu: kvm64
ide2: local:iso/ubuntu-22.04.3-desktop-amd64.iso,media=cdrom,size=4919592K
machine: pc-q35-7.2
memory: 16384
meta: creation-qemu=8.0.2,ctime=1694873106
name: VM-TEST
net0: virtio=62:88:0F:89:1A:45,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: storage:201/vm-201-disk-0.qcow2,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=9720aa93-47da-4a57-9814-ad9c8037e751
sockets: 1
vmgenid: f39a8973-0cec-4e33-87fa-be89cb12865e
 
Hi,

Did you check the syslog in the Proxmox VE and the VM looking for anything interesting?
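If you're not sure where to look, something like the following run on the Proxmox host pulls the relevant recent log entries (VM ID 201 assumed here, matching the config above):

```shell
# Recent host journal entries mentioning qm tasks or QEMU/KVM
journalctl --since "1 hour ago" | grep -Ei 'qm\[|qmreset|kvm|qemu'

# Current status of the VM as seen by the Proxmox host
qm status 201
```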
 
This is what I found, but I can't figure out why it's resetting.


Code:
Sep 19 17:46:53 ledonnemannor pveproxy[1023]: starting 1 worker(s)
Sep 19 17:46:53 ledonnemannor pveproxy[1023]: worker 2284837 started
Sep 19 17:49:18 ledonnemannor qm[2286095]: <root@pam> starting task UPID:ledonnemannor:0022E210:048DB8C1:650A175E:qmreset:201:root@pam:
Sep 19 17:49:18 ledonnemannor qm[2286095]: <root@pam> end task UPID:ledonnemannor:0022E210:048DB8C1:650A175E:qmreset:201:root@pam: OK
Sep 19 17:54:48 ledonnemannor qm[2288648]: <root@pam> starting task UPID:ledonnemannor:0022EC09:048E3991:650A18A8:qmreset:201:root@pam:
Sep 19 17:54:48 ledonnemannor qm[2288648]: <root@pam> end task UPID:ledonnemannor:0022EC09:048E3991:650A18A8:qmreset:201:root@pam: OK
Sep 19 17:54:52 ledonnemannor pvedaemon[2279488]: <root@pam> successful auth for user 'root@pam'
Sep 19 17:56:57 ledonnemannor pveproxy[2271602]: worker exit
Sep 19 17:56:57 ledonnemannor pveproxy[1023]: worker 2271602 finished
Sep 19 17:56:57 ledonnemannor pveproxy[1023]: starting 1 worker(s)
Sep 19 17:56:57 ledonnemannor pveproxy[1023]: worker 2289589 started
Sep 19 18:00:18 ledonnemannor qm[2291205]: <root@pam> starting task UPID:ledonnemannor:0022F606:048EBA68:650A19F2:qmreset:201:root@pam:
Sep 19 18:00:18 ledonnemannor qm[2291205]: <root@pam> end task UPID:ledonnemannor:0022F606:048EBA68:650A19F2:qmreset:201:root@pam: OK
Sep 19 18:05:47 ledonnemannor qm[2293879]: <root@pam> starting task UPID:ledonnemannor:00230078:048F3B47:650A1B3B:qmreset:201:root@pam:
Sep 19 18:05:48 ledonnemannor qm[2293879]: <root@pam> end task UPID:ledonnemannor:00230078:048F3B47:650A1B3B:qmreset:201:root@pam: OK
Sep 19 18:08:56 ledonnemannor pveproxy[2280444]: worker exit
Sep 19 18:08:56 ledonnemannor pveproxy[1023]: worker 2280444 finished
Sep 19 18:08:56 ledonnemannor pveproxy[1023]: starting 1 worker(s)
Sep 19 18:08:56 ledonnemannor pveproxy[1023]: worker 2295213 started
Sep 19 18:10:52 ledonnemannor pvedaemon[2267838]: <root@pam> successful auth for user 'root@pam'
Sep 19 18:11:17 ledonnemannor qm[2296474]: <root@pam> starting task UPID:ledonnemannor:00230A9B:048FBC1F:650A1C85:qmreset:201:root@pam:
Sep 19 18:11:17 ledonnemannor qm[2296474]: <root@pam> end task UPID:ledonnemannor:00230A9B:048FBC1F:650A1C85:qmreset:201:root@pam: OK
Sep 19 18:16:47 ledonnemannor qm[2298986]: <root@pam> starting task UPID:ledonnemannor:00231495:04903CEB:650A1DCF:qmreset:201:root@pam:
Sep 19 18:16:47 ledonnemannor qm[2298986]: <root@pam> end task UPID:ledonnemannor:00231495:04903CEB:650A1DCF:qmreset:201:root@pam: OK
Sep 19 18:17:01 ledonnemannor CRON[2299112]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Sep 19 18:17:01 ledonnemannor CRON[2299113]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Sep 19 18:17:01 ledonnemannor CRON[2299112]: pam_unix(cron:session): session closed for user root
Sep 19 18:18:06 ledonnemannor pvestatd[988]: auth key pair too old, rotating..
Sep 19 18:22:17 ledonnemannor qm[2301558]: <root@pam> starting task UPID:ledonnemannor:00231E96:0490BDC8:650A1F19:qmreset:201:root@pam:
Sep 19 18:22:17 ledonnemannor qm[2301558]: <root@pam> end task UPID:ledonnemannor:00231E96:0490BDC8:650A1F19:qmreset:201:root@pam: OK
Sep 19 18:25:05 ledonnemannor pveproxy[2284837]: worker exit
Sep 19 18:25:05 ledonnemannor pveproxy[1023]: worker 2284837 finished
Sep 19 18:25:05 ledonnemannor pveproxy[1023]: starting 1 worker(s)
Sep 19 18:25:05 ledonnemannor pveproxy[1023]: worker 2302768 started
Sep 19 18:26:24 ledonnemannor pveproxy[2289589]: worker exit
Sep 19 18:26:24 ledonnemannor pveproxy[1023]: worker 2289589 finished
Sep 19 18:26:24 ledonnemannor pveproxy[1023]: starting 1 worker(s)
Sep 19 18:26:24 ledonnemannor pveproxy[1023]: worker 2303335 started
Sep 19 18:26:52 ledonnemannor pvedaemon[2279488]: <root@pam> successful auth for user 'root@pam'
Sep 19 18:27:47 ledonnemannor qm[2304154]: <root@pam> starting task UPID:ledonnemannor:0023289B:04913E99:650A2063:qmreset:201:root@pam:
Sep 19 18:27:47 ledonnemannor qm[2304154]: <root@pam> end task UPID:ledonnemannor:0023289B:04913E99:650A2063:qmreset:201:root@pam: OK
Sep 19 18:30:05 ledonnemannor pveproxy[2295213]: worker exit
Sep 19 18:30:05 ledonnemannor pveproxy[1023]: worker 2295213 finished
Sep 19 18:30:05 ledonnemannor pveproxy[1023]: starting 1 worker(s)
Sep 19 18:30:05 ledonnemannor pveproxy[1023]: worker 2305106 started
Sep 19 18:33:17 ledonnemannor qm[2306670]: <root@pam> starting task UPID:ledonnemannor:00233299:0491BF73:650A21AD:qmreset:201:root@pam:
Sep 19 18:33:17 ledonnemannor qm[2306670]: <root@pam> end task UPID:ledonnemannor:00233299:0491BF73:650A21AD:qmreset:201:root@pam: OK
Sep 19 18:38:46 ledonnemannor qm[2309222]: <root@pam> starting task UPID:ledonnemannor:00233C96:0492403E:650A22F6:qmreset:201:root@pam:
Sep 19 18:38:46 ledonnemannor qm[2309222]: <root@pam> end task UPID:ledonnemannor:00233C96:0492403E:650A22F6:qmreset:201:root@pam: OK
Sep 19 18:39:58 ledonnemannor pvedaemon[2254420]: worker exit
Sep 19 18:39:58 ledonnemannor pvedaemon[1016]: worker 2254420 finished
Sep 19 18:39:58 ledonnemannor pvedaemon[1016]: starting 1 worker(s)
Sep 19 18:39:58 ledonnemannor pvedaemon[1016]: worker 2309814 started
Sep 19 18:42:52 ledonnemannor pvedaemon[2279488]: <root@pam> successful auth for user 'root@pam'
Sep 19 18:44:16 ledonnemannor qm[2311820]: <root@pam> starting task UPID:ledonnemannor:00234692:0492C108:650A2440:qmreset:201:root@pam:
Sep 19 18:44:16 ledonnemannor qm[2311820]: <root@pam> end task UPID:ledonnemannor:00234692:0492C108:650A2440:qmreset:201:root@pam: OK
Sep 19 18:46:46 ledonnemannor pvedaemon[2267838]: worker exit
Sep 19 18:46:46 ledonnemannor pvedaemon[1016]: worker 2267838 finished
Sep 19 18:46:46 ledonnemannor pvedaemon[1016]: starting 1 worker(s)
Sep 19 18:46:46 ledonnemannor pvedaemon[1016]: worker 2312944 started
Sep 19 18:49:46 ledonnemannor qm[2314408]: <root@pam> starting task UPID:ledonnemannor:002350AD:049341D6:650A258A:qmreset:201:root@pam:
Sep 19 18:49:46 ledonnemannor qm[2314408]: <root@pam> end task UPID:ledonnemannor:002350AD:049341D6:650A258A:qmreset:201:root@pam: OK
Sep 19 18:51:45 ledonnemannor pveproxy[2303335]: worker exit
Sep 19 18:51:45 ledonnemannor pveproxy[1023]: worker 2303335 finished
Sep 19 18:51:45 ledonnemannor pveproxy[1023]: starting 1 worker(s)
Sep 19 18:51:45 ledonnemannor pveproxy[1023]: worker 2315263 started
Sep 19 18:53:45 ledonnemannor pveproxy[2302768]: worker exit
Sep 19 18:53:45 ledonnemannor pveproxy[1023]: worker 2302768 finished
Sep 19 18:53:45 ledonnemannor pveproxy[1023]: starting 1 worker(s)
Sep 19 18:53:45 ledonnemannor pveproxy[1023]: worker 2316155 started
Sep 19 18:55:16 ledonnemannor qm[2316957]: <root@pam> starting task UPID:ledonnemannor:00235AB3:0493C2A8:650A26D3:qmreset:201:root@pam:
Sep 19 18:55:16 ledonnemannor qm[2316957]: <root@pam> end task UPID:ledonnemannor:00235AB3:0493C2A8:650A26D3:qmreset:201:root@pam: OK
Sep 19 18:57:52 ledonnemannor pvedaemon[2279488]: <root@pam> successful auth for user 'root@pam'
Sep 19 19:00:45 ledonnemannor qm[2319520]: <root@pam> starting task UPID:ledonnemannor:002364A1:04944378:650A281D:qmreset:201:root@pam:
Sep 19 19:00:45 ledonnemannor qm[2319520]: <root@pam> end task UPID:ledonnemannor:002364A1:04944378:650A281D:qmreset:201:root@pam: OK
Sep 19 19:00:52 ledonnemannor pvedaemon[2279488]: worker exit
Sep 19 19:00:52 ledonnemannor pvedaemon[1016]: worker 2279488 finished
Sep 19 19:00:52 ledonnemannor pvedaemon[1016]: starting 1 worker(s)
Sep 19 19:00:52 ledonnemannor pvedaemon[1016]: worker 2319592 started
 
Have you checked the cron jobs on your Proxmox VE server? Is there anything there that could be triggering the qm reset? Can you also please attach the `dmesg` output? Additionally, I would check the memory usage on your Proxmox VE server.
 
Sorry to be a noob, but is there a howto page on where to check cron jobs? Also, `dmesg` outputs everything. The weird thing is that about two days ago the VM stopped and now it won't start. Really weird stuff.
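For anyone else wondering: on a Debian-based host like Proxmox VE, cron jobs live in a handful of standard places. A quick sketch of where to look:

```shell
# Per-user crontab (run as root on the PVE host)
crontab -l

# System-wide crontab plus the drop-in directories
cat /etc/crontab
ls /etc/cron.d /etc/cron.hourly /etc/cron.daily /etc/cron.weekly

# systemd timers can also schedule recurring jobs
systemctl list-timers --all
```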
 
Spot on, thanks @TDavLinguist. I completely forgot that I tested this feature out recently. I removed the Monitor-All helper script and there are no more resets. Phew!
I totally forgot that I installed this script! Thank you!
Seems you're not the only one. Maybe Proxmox should have a FAQ about this: if your VM resets in 5-6 minutes, try to remember if you installed monitoring tools that reset your VM every 5-6 minutes.
 

I mean, the author states on the page:
Virtual machines without the QEMU guest agent installed must be excluded.
and:
Prior to generating any new CT/VM not found in this repository, it's necessary to halt Proxmox VE Monitor-All by running systemctl stop ping-instances.
But it's nothing new that people don't (completely) read things, or don't understand them and don't look into it further.

Imho, this script is kinda a newbie trap...
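For reference, the Monitor-All script runs as a systemd service called `ping-instances` (per the project's own instructions quoted above). If you suspect it's behind the resets, something like this should confirm and stop it:

```shell
# Check whether the Monitor-All watchdog service exists and is active
systemctl status ping-instances.service

# Stop it now and keep it from starting again at boot
systemctl disable --now ping-instances.service
```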
 
