Task - Start all VMs and Containers

Luigical

Hey, my server has been up for about 24 hours and I have not begun to configure anything yet. However, I notice that every once in a while the task "Start all VMs and Containers" runs. Does it just run at random times, or does that mean my machine has been rebooting itself?
My uptime is only an hour and a half, which can't be correct.

http://luigical.me/images/Screenshot_05_04_2WHsO2oroV.png
 
Hi,
please check your /var/log/syslog. The most relevant messages should be right before the reboot happens. Is the node part of a cluster?
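For reference, a minimal sketch of how you might inspect the logs around a reboot and check cluster membership (assuming a standard PVE install; adjust the search terms as needed):
Code:
# messages right before the last reboot (previous boot, last 100 lines)
journalctl -b -1 -n 100
# search the syslog for common crash hints
grep -iE 'panic|watchdog|oom|reset' /var/log/syslog
# check whether the node is part of a cluster
pvecm status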
 
Hello,

I have the same issue, but I noticed that it happens when I log in to my server from the web UI: the "Bulk start all VMs and Containers" task starts, and then when I try to start a VM the server reboots.

(screenshot: 1704908087699.png)

I hope someone can guide us to a solution to this problem.

Thanks in advance.
 
Hi,
please share the output of pveversion -v, the VM configuration (qm config <ID>) and journalctl -b -1 after such a reboot. What physical CPU do you have?
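A minimal sketch of collecting that information into files for attaching (VM ID 100 and the file names are placeholders; repeat per guest):
Code:
pveversion -v > pveversion.txt
qm config 100 > vm-100-config.txt
journalctl -b -1 --no-pager > previous-boot.log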

The task in question will auto-start guests for which the Start on Boot option is enabled. If you don't have any such guests, it will do nothing. But since you have trouble starting guests I would not set it right now, because you might end up with a boot loop!
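To see which guests have Start on Boot enabled, and to disable it while debugging, something like this should work (IDs are placeholders; the paths are the local node's default config directories):
Code:
# list VM / container configs with Start on Boot enabled
grep -l 'onboot: 1' /etc/pve/qemu-server/*.conf /etc/pve/lxc/*.conf
# disable Start on Boot for a VM
qm set <ID> --onboot 0
# disable Start on Boot for a container
pct set <ID> --onboot 0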
 
Hi, thanks for your assistance. I attach the required info:

Currently this is a test environment.

pveversion -v:
Code:
proxmox-ve: 8.1.0 (running kernel: 6.5.11-7-pve)
pve-manager: 8.1.3 (running version: 8.1.3/b46aac3b42da5d15)
proxmox-kernel-helper: 8.1.0
pve-kernel-5.15: 7.4-8
proxmox-kernel-6.5: 6.5.11-7
proxmox-kernel-6.5.11-7-pve-signed: 6.5.11-7
proxmox-kernel-6.5.11-6-pve-signed: 6.5.11-6
pve-kernel-5.15.131-1-pve: 5.15.131-2
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.0.7
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.7
libpve-storage-perl: 8.0.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve4
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.2-1
proxmox-backup-file-restore: 3.1.2-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.2
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.3
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: 4.2023.08-2
pve-firewall: 5.0.3
pve-firmware: 3.9-1
pve-ha-manager: 4.0.3
pve-i18n: 3.1.5
pve-qemu-kvm: 8.1.2-6
pve-xtermjs: 5.3.0-3
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.2-pve1

qm config 100:
(screenshot: 1704990413219.png)

qm config 103:
(screenshot: 1704990379395.png)

The rest of the VMs have a similar configuration and do not have Start at boot set.

The journal is the attached file.

The physical CPU:

Intel(R) Core(TM) i3-8100T CPU @ 3.10GHz (4 cores)

Rest of the "server" configuration:
RAM: 12 GB
SSD: 512 GB

I tried to start two virtual machines and then a boot loop started. I was able to stop it by stopping pvescheduler.service.
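For reference, a sketch of that workaround, assuming stopping the scheduler really is what breaks the cycle on this setup:
Code:
# halt the scheduler so no further bulk-start jobs run
systemctl stop pvescheduler.service
# confirm it is stopped
systemctl status pvescheduler.service
# start it again once the underlying crash is fixed
systemctl start pvescheduler.service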

Thanks in advance for your assistance.
 

Attachments: (journal file, referenced above)

Are you sure this is the full journalctl log? It doesn't even show any qmstart task, anything about the PVE services, or anything related to the shutdown.
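One way to verify that the complete previous-boot journal was exported (a sketch, assuming it was dumped to a file named previous-boot.log as suggested earlier):
Code:
# list recorded boots; -b -1 must refer to the boot that actually crashed
journalctl --list-boots
# the file should start at kernel init and end right before the crash
head -n 3 previous-boot.log
tail -n 3 previous-boot.log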
 
I have the same issue
 

Attachments

  • Snipaste_2024-01-30_10-30-16.jpg
  • Snipaste_2024-01-30_10-29-57.jpg
Seems like your server crashed:
Code:
Jan 09 11:31:36 pve4 kernel: eth0: renamed from veth606ec21
Jan 09 11:31:36 pve4 kernel: br-16e55c9b97c4: port 1(vethc51abce) entered blocking state
Jan 09 11:31:36 pve4 kernel: br-16e55c9b97c4: port 1(vethc51abce) entered forwarding state
Jan 09 11:31:36 pve4 kernel: br-16e55c9b97c4: port 1(vethc51abce) entered disabled state
Jan 09 11:31:36 pve4 kernel: veth606ec21: renamed from eth0
Jan 09 11:31:36 pve4 kernel: br-16e55c9b97c4: port 1(vethc51abce) entered disabled state
Jan 09 11:31:36 pve4 kernel: vethc51abce (unregistering): left allmulticast mode
Jan 09 11:31:36 pve4 kernel: vethc51abce (unregistering): left promiscuous mode
Jan 09 11:31:36 pve4 kernel: br-16e55c9b97c4: port 1(vethc51abce) entered disabled state
-- Boot 90b574ea977949968c29f93239d74b4b --
Jan 09 11:37:13 pve4 kernel: Linux version 6.8.12-5-pve (build@proxmox) (gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40) #1 SMP PREEMPT_DYNAMIC PMX 6.8.12-5 (2024-12-03T10:26Z) ()
Jan 09 11:37:13 pve4 kernel: Command line: BOOT_IMAGE=/boot/vmlinuz-6.8.12-5-pve root=/dev/mapper/pve-root ro quiet
Jan 09 11:37:13 pve4 kernel: KERNEL supported cpus:
Jan 09 11:37:13 pve4 kernel:   Intel GenuineIntel
Jan 09 11:37:13 pve4 kernel:   AMD AuthenticAMD
Jan 09 11:37:13 pve4 kernel:   Hygon HygonGenuine
Jan 09 11:37:13 pve4 kernel:   Centaur CentaurHauls
Jan 09 11:37:13 pve4 kernel:   zhaoxin   Shanghai
but there is nothing noticeable in the log before that happened. You could try connecting via SSH from another machine and monitoring journalctl -f there. If you are lucky, you will get more information in that output the next time the crash happens.
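For example, a sketch of such remote monitoring (the node address is a placeholder):
Code:
# from a second machine: follow the node's journal live, keeping a local copy
ssh root@<node-ip> journalctl -f | tee pve-journal-live.log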

I'd recommend giving the opt-in 6.11 kernel a try: https://forum.proxmox.com/threads/o...ve-8-available-on-test-no-subscription.156818
As well as making sure the latest BIOS/firmware upgrades are installed: https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysadmin_firmware_cpu
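If you want to check which microcode revision the CPU is currently running, something like this should show it (assuming an Intel CPU, as in the earlier posts):
Code:
# microcode revision the kernel loaded at boot
journalctl -k | grep -i microcode
# microcode revision per core
grep microcode /proc/cpuinfo | sort -u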

Please also share the output of lscpu.