Inhibit all VMs from starting at boot for troubleshooting purposes... Possible???

Jan 23, 2021
Hi all,

I am doing some hardware troubleshooting on my server, which means I need to start it up and shut it down a fair bit. I am running about 60 VMs and they are all set to start at boot. Is there a way I can globally inhibit all VMs from starting until I'm done with my troubleshooting? I appreciate I could go through each VM and uncheck the option, but there are 60 of them. Or is there perhaps a command I can issue to set that option in bulk?

Thanks,

FS
 
Hi,
You can replace the value of the "onboot" parameter in all the conf files at once with sed.
Here is an example. I didn't test it, so use it with care: try it on a single config first, and/or back up your conf files...

Code:
# Disable autoboot for all CTs
sed -i -e "s/^onboot: 1/onboot: 0/g" /etc/pve/lxc/*.conf

# Disable autoboot for all VMs
sed -i -e "s/^onboot: 1/onboot: 0/g" /etc/pve/qemu-server/*.conf

And for reverting, setting all CTs AND VMs back to autoboot:

Code:
# Set autoboot for all CTs
sed -i -e "s/^onboot: 0/onboot: 1/g" /etc/pve/lxc/*.conf

# Set autoboot for all VMs
sed -i -e "s/^onboot: 0/onboot: 1/g" /etc/pve/qemu-server/*.conf
 
And here is a much cleaner / more proper solution (also to be tested), using the pct/qm commands:

Code:
# Disable autoboot for all CTs
for ct in $(pct list | awk 'NR>1 {print $1}'); do pct set $ct --onboot 0; done

# Disable autoboot for all VMs
for vm in $(qm list | awk 'NR>1 {print $1}'); do qm set $vm --onboot 0; done

# Set autoboot for all CTs
for ct in $(pct list | awk 'NR>1 {print $1}'); do pct set $ct --onboot 1; done

# Set autoboot for all VMs
for vm in $(qm list | awk 'NR>1 {print $1}'); do qm set $vm --onboot 1; done
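
For reference, the awk filter just skips the table header and keeps the ID column; `qm list` prints a table roughly like this (exact columns vary by version):

Code:
      VMID NAME                 STATUS     MEM(MB)    BOOTDISK(GB) PID
       100 webserver            running    2048              32.00 12345
       101 database             stopped    4096              64.00 0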
 
I believe the second option is more future-proof, but both work fine.
Maybe you could mark the thread as "solved". You just have to edit your first post and change the prefix before the title to "Solved".
 
What about filtering out VMs that don't have autostart enabled on boot? I really hope the Proxmox team provides a maintenance mode at some point. Still missing that, for many reasons.
 
An option to temporarily disable VM/LXC autostart node-wide would really be helpful!
 
The easiest solution is to disable the storage. This can easily be done via the GUI or via

Code:
pvesm set <storagename> --disable 1
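
A minimal before/after sketch, assuming a storage named local-lvm (substitute your own):

Code:
# Disable the storage so the guests backed by it cannot start:
pvesm set local-lvm --disable 1

# Re-enable it once maintenance is done:
pvesm set local-lvm --disable 0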
 
What makes you say that? Yes, if clustered; we use that all the time.
Doesn't this command disable the storage for all nodes in the cluster? Hence all nodes that should keep running have the storage removed/disabled.
 
Doesn't this command disable the storage for all nodes in the cluster? Hence all nodes that should keep running have the storage removed/disabled.
Yes, it does, and I thought that was the intent.
Although the OP does not talk about a cluster, only about "all VMs", so thank you for bringing this up.

So the sed solution is for one host and the storage solution is for the whole cluster.
 
And here is a much cleaner / more proper solution (also to be tested), using the pct/qm commands:

I've written a maintenance mode script!

I have a mix of VMs, some of which are set to start at boot and others which aren't. So I expanded on jf2021's one-liner: the script queries each VM, checks the onboot: setting, saves a list of VMs with onboot: 1 to a file, and disables them. Now I can reboot to my heart's content and no VMs will start. Once maintenance is complete, running the script again parses the file it created and re-enables onboot: only for the ones that previously had it set.

I put it up on Github as well: https://github.com/Darkhand81/ProxmoxMaintenanceMode

Bash:
#!/bin/bash

# ---------------------------------------------------------
#                 Proxmox Maintenance Mode
#  -Darkhand81
#  https://github.com/Darkhand81/ProxmoxMaintenanceMode
#  Temporarily disable all VMs/CTs that are set to autoboot
#  at node startup to allow for node maintenance.
#
# ---------------------------------------------------------

# ---BEGIN---

# Require root
if [[ $EUID -ne 0 ]]; then
    echo "$0 is not running as root. Try using sudo."
    exit 2
fi

# Pathname of the lockfile that signals the script that we are in
# maintenance mode.  Contains the VMs/CTs that were disabled so they
# can be re-enabled later:
lockfile="/root/maintmode.lock"

# ---------
# Functions
# ---------

# Enable maintenance mode - Query all instances, check which are set to
# start at boot, record and disable them.
function enable_maintmode(){

  echo "Disabling (and saving) current onboot settings:"

  # List all VMs: skip the table header, keep only the first column (the VMIDs):
  for vm in $(qm list | awk 'NR>1 {print $1}')
  do
    # Query each VMID and check whether onboot is enabled:
    if qm config "$vm" | grep -q "^onboot: 1"; then
      # Save the matching ID to the lockfile, prepended with VM to identify it as a VM:
      echo "VM$vm" >> "$lockfile"
      # Disable onboot for the matching VMID:
      qm set "$vm" -onboot 0
    fi
  done

  # Repeat for CTs, as they use a different command to enable/disable:
  for ct in $(pct list | awk 'NR>1 {print $1}')
  do
    if pct config "$ct" | grep -q "^onboot: 1"; then
      # Prepend with CT to identify it as a container:
      echo "CT$ct" >> "$lockfile"
      # Disable onboot for the matching container:
      pct set "$ct" -onboot 0
      # pct currently doesn't print a confirmation like qm does, so simulate it here:
      echo "update CT $ct: -onboot 0"
    fi
  done
}

# Disable maintenance mode - Parse the lockfile and re-enable onboot
# for those IDs:
function disable_maintmode(){

  echo -e "\nRe-enabling previous onboot settings:"

  # Read the lockfile and re-enable onboot for each saved ID.
  # Lines prefixed with VM use qm; lines prefixed with CT use pct:
  while read -r line; do
    case "$line" in
      VM*)
        qm set "${line#VM}" -onboot 1
        ;;
      CT*)
        pct set "${line#CT}" -onboot 1
        # pct currently doesn't print a confirmation like qm does, so simulate it here:
        echo "update CT ${line#CT}: -onboot 1"
        ;;
    esac
  done < "$lockfile"

  # Remove the lockfile to signal that we are out of maintenance mode:
  rm "$lockfile"
}

# -----
# Start
# -----

# If the lockfile doesn't exist, we want to enable maintenance mode (disable onboot).
# Otherwise we want to disable maintenance mode (enable onboot):
if [ ! -f "$lockfile" ]; then
  echo
  read -p "Enable maintenance mode and disable all current VM/CT bootup? (y/n) " CONT
    if [ "$CONT" = "y" ]; then
      enable_maintmode;
      echo -e "\nMaintenance mode is now enabled! VM autostart is disabled. Run this script again to re-enable."
    else
      echo "Exiting.";
      exit
    fi
else
  echo
  read -p "Maintenance mode is on! Re-enable previous VM/CT bootup? (y/n) " CONT
    if [ "$CONT" = "y" ]; then
      disable_maintmode
      echo -e "\nMaintenance mode is now disabled! All VMs/CTs that were previously set to autorun will do so at next bootup."
    else
      echo "Exiting.";
      exit
    fi
fi
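
To use it: save the script on the node (the filename maintmode.sh below is just an example), make it executable, and run it as root. Running it a second time toggles maintenance mode back off:

Code:
chmod +x maintmode.sh
./maintmode.sh   # first run: saves which guests had onboot enabled, then disables them
# ...reboot and do your maintenance...
./maintmode.sh   # second run: restores onboot for the saved guests and removes the lockfile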
 
There should really be an official maintenance mode in Proxmox; how can something so basic be missing...
 
I've written a maintenance mode script!
...
I put it up on Github as well: https://github.com/Darkhand81/ProxmoxMaintenanceMode
I know it's an older thread but wanted to thank you so much for this, super useful! Unfortunate that such a simple thing is not included in Proxmox already.
 
I'll give you one simpler solution. I don't know where I've got it from, but I wrote it down:



Independent of the feature request, I'd propose a way simpler workaround for you:

Simply mask the service that is responsible for starting the on-boot-marked guests. IOW, before the reboot for maintenance, execute:

Code:
systemctl mask pve-guests.service

Then, after all maintenance is done:

Code:
systemctl unmask pve-guests.service

As that would be a simple and already available solution, I'd be inclined to avoid the small, but not negligible, additional complexity of implementing a special boolean setting for this. Would that be acceptable for you?
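
If you want to double-check before rebooting, systemctl can confirm the unit is actually masked:

Code:
# Prints "masked" while the workaround is active:
systemctl is-enabled pve-guests.service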
 
