[SOLVED] Locking and unlocking containers

Waterfin

New Member
Feb 11, 2021
Can somebody please explain the details about the pct set <vmid> --lock and the pct unlock commands?

The man pages say nothing about this config setting other than "Lock/unlock the VM.". There is a list of possible lock types (backup, create, destroyed, disk, fstrim, migrate, mounted, rollback, snapshot, snapshot-delete), but no description of what these lock types mean or what they do. What does each of these locks do, and when should you use each type?

The pct unlock command doesn't take any arguments other than the VMID and appears to simply remove whatever lock is currently on the VM. Does this mean that only one lock type can be applied to a VM at any one time? The only description for this command is "Unlock the VM." - quite unhelpful and redundant, considering the subcommand is called "unlock".

Also, why does the description for pct set <vmid> --lock say "Lock/unlock the VM."? As far as I can tell, you cannot use this command to unlock a VM, only to add a lock to it.
 
for several actions in pve (the listed ones, e.g. backup/create/etc.) pve locks the vm config so that no two operations interfere with each other
since that lock itself is part of the config file it shows up in the autogenerated man page/docs

'unlock' is a specially implemented command, so that a leftover lock (e.g. after a host crash during such an operation) can be removed again by an admin

generally you do not need/want to lock a vm/ct, nor should you normally need to unlock a vm/ct
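To make that concrete, here is a minimal sketch (CT 100, the paths, and the config contents are illustrative assumptions, not taken from a real node): the lock is just a single `lock: <type>` line in the guest's config file, and clearing that line is effectively what `pct unlock` does. The snippet works on a temporary copy instead of a real config:

```shell
#!/bin/sh
# Sketch only: on a real node the CT config lives at /etc/pve/lxc/100.conf
# (VMs use /etc/pve/qemu-server/100.conf). We simulate it with a temp file.
conf=$(mktemp)
cat > "$conf" <<'EOF'
arch: amd64
hostname: ct100
lock: backup
memory: 512
EOF

# A running task that needs exclusive access writes exactly one
# "lock: <type>" line, which is why only one lock can exist at a time.
lock_before=$(sed -n 's/^lock: //p' "$conf")
echo "current lock: ${lock_before:-none}"

# Roughly what `pct unlock 100` does: drop the lock line again.
sed -i '/^lock: /d' "$conf"
lock_after=$(sed -n 's/^lock: //p' "$conf")
echo "after unlock: ${lock_after:-none}"

rm -f "$conf"
```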
 
I was hoping to use the locks to help prevent accidental deletion of a vm/ct.

After doing some testing, it would appear that if the VM has any lock on it at all, you can't change any config settings on it or even start/stop it. For example, if I add a "backup" lock on a CT, I cannot destroy it, start it, stop it, or do anything with it until I remove the lock. Why have all these different lock types when any active lock prevents every action on the VM entirely?
 

because those locks are used internally to block other operations WHILE a task is running that needs exclusive access. that task will then release the lock at the end.

what you are looking for is "protection" - it exactly and only prevents deletion, and is entirely unrelated to locks ;)
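In case it helps later readers, a hedged sketch of what that looks like in practice (VMID 100 and the path are examples): `protection` is an ordinary config flag, settable from the CLI or via the guest's Options panel in the GUI:

```
# enable deletion protection for CT 100
pct set 100 --protection 1

# which adds a single line to /etc/pve/lxc/100.conf:
protection: 1

# destroy/remove-disk operations are refused until it is cleared again:
pct set 100 --protection 0
```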
 
Yes, the "protection" config flag is what I want.

I was going down the wrong path after seeing the "lock" config. Perhaps the man page should say that pct set <vmid> --lock is meant for internal use only and should not be used by the end user. If it had said that, it would have saved me a lot of trouble here.
 
Sorry for opening an older thread.

I need to reapply a create lock because I am trying to restore a large VM from a slower PBS server. The VM config in the backup has an outdated PCIe address, since that server has had its cards moved around. Try as I might, the damn thing will not let me change that, and it's not possible to stop and restart the VM after updating the hardware section anyway. I have tried SSHing in and Ctrl+S-ing the correct config as fast as possible, but it doesn't care and pulls its backed-up one anyway. I'm at a loss as to why this is so difficult.

I have found that kill -STOP <pid>, followed by moving lock-100.conf to lock-100.conf.temp, allows me to run qm unlock 100, which lets me stop and start the VM and apply the changes. But when I put the lock-100 file back and run kill -CONT <pid>, the restore process gets angry that the create lock isn't there and nukes everything.

Locking in Proxmox is really unintuitive and gets in the way of doing almost anything advanced. Is there a way around this problem? The only reason I care is that this VM is 12 TB restoring from an HDD-based PBS server, so live restore it is. Once the VM boots and stabilizes it's great... except that my GPU is missing.

EDIT: Just like most things, persistence can fix anything.

Made a script that takes /tmp/100.conf and spams it into /etc/pve/qemu-server/100.conf with the exact config I was after. Started the script, then started the live restore. It is now booting with the correct PCIe device.
Bash:
#!/bin/bash

# Source and destination paths
SOURCE_CONF="/tmp/100.conf"
DEST_CONF="/etc/pve/qemu-server/100.conf"

# Check if the source file exists
if [ ! -f "$SOURCE_CONF" ]; then
    echo "Error: $SOURCE_CONF does not exist!" >&2
    exit 1
fi

echo "Starting to spam update the VM configuration..."
# Infinite loop to continuously overwrite the config; stop with Ctrl+C
while true; do
    cp -f "$SOURCE_CONF" "$DEST_CONF"
    sleep 0.1  # brief pause between copies to avoid 100% CPU usage
done

It's dumb, but if it works it isn't dumb.
 
you seem to have fixed or worked around your issue, but for future reference: switching to proper PCI mappings avoids this issue (since you can point the mapping entry on the target system at the matching hardware, and don't have to encode PCI slots/addresses in the VM config).
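To sketch what that looks like (the names and addresses here are made-up examples): a resource mapping is defined per node under Datacenter → Resource Mappings, and the VM config then references the mapping by name instead of hard-coding an address, so the same backup restores cleanly on hardware with a different slot layout:

```
# old style in the VM config: raw address, breaks when the card moves
hostpci0: 0000:0a:00.0,pcie=1

# mapping style: the VM config only names the mapping (e.g. "gpu")
hostpci0: mapping=gpu,pcie=1

# set via CLI, assuming a mapping named "gpu" exists:
qm set 100 --hostpci0 mapping=gpu,pcie=1
```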
 
