Unable to start VM (can't deactivate LV ... Logical volume ... in use)

Hi,

I am unable to start a Linux VM after I increased its RAM. The Proxmox version is 6.0-9. This is the output of the start command; any help is greatly appreciated:

qm start 110
/dev/sdd: open failed: No medium found
/dev/sdd: open failed: No medium found
/dev/sdd: open failed: No medium found
/dev/sdd: open failed: No medium found
can't deactivate LV '/dev/local-storage-ssd/vm-110-disk-0': Logical volume local-storage-ssd/vm-110-disk-0 in use.
start failed: command '/usr/bin/kvm -id 110 -name HOSTING-8 -chardev 'socket,id=qmp,path=/var/run/qemu-server/110.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/110.pid -daemonize -smbios 'type=1,uuid=1d807ec2-0ffa-438f-8941-d9e1514c6436' -smp '20,sockets=2,cores=10,maxcpus=20' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vnc unix:/var/run/qemu-server/110.vnc,password -cpu 'Broadwell,+kvm_pv_unhalt,+kvm_pv_eoi,enforce,vendor=GenuineIntel' -m 8192 -object 'memory-backend-ram,id=ram-node0,size=4096M' -numa 'node,nodeid=0,cpus=0-9,memdev=ram-node0' -object 'memory-backend-ram,id=ram-node1,size=4096M' -numa 'node,nodeid=1,cpus=10-19,memdev=ram-node1' -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.3,chassis_nr=3,bus=pci.0,addr=0x5' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'vfio-pci,host=82:10.4,id=hostpci0,bus=pci.0,addr=0x10' -device 'VGA,id=vga,bus=pci.0,addr=0x2' -chardev 'socket,path=/var/run/qemu-server/110.qga,server,nowait,id=qga0' -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:7b5e9eee313' -drive 'if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -device 'virtio-scsi-pci,id=virtioscsi0,bus=pci.3,addr=0x1' -drive 'file=/dev/pve/vm-110-disk-0,if=none,id=drive-scsi0,cache=writeback,format=raw,aio=threads,detect-zeroes=on' -device 'scsi-hd,bus=virtioscsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' -device 'virtio-scsi-pci,id=virtioscsi1,bus=pci.3,addr=0x2' -drive 'file=/dev/local-storage-ssd/vm-110-disk-0,if=none,id=drive-scsi1,cache=writeback,format=raw,aio=threads,detect-zeroes=on' -device 'scsi-hd,bus=virtioscsi1.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1' -machine 'type=pc'' failed: got timeout


Thank you,
Liviu
 
Hi,
Unfortunately I did not find an explanation. In the end the VM started only after I stopped one of the other VMs.
 
Hello,
I have the same problem. I have 4 VMs, each with its own LV disk, but I can only have 3 started simultaneously... if I try to start the last one, I get:


can't deactivate LV '/dev/VMOS/vm-302-disk-1': Logical volume VMOS/vm-302-disk-1 in use.
can't deactivate LV '/dev/VMOS/vm-302-disk-0': Logical volume VMOS/vm-302-disk-0 in use.

And the start task keeps failing until I stop another VM.
I've tried to deactivate the LVs manually using:

lvchange -an -f --verbose VMOS/vm-302-disk-1
lvchange -an -f --verbose VMOS/vm-302-disk-0

but I still get the same error when I start the 4th VM.
I have not found any other mention of this issue on this forum or anywhere on the web...
 
I made a service that runs a script which puts a suspended lock on every stopped VM, so the VMs can start without trouble:

Create the script:

Code:
nano /root/vm_autolock.py


vm_autolock.py:

Python:
#!/usr/bin/env python3

import subprocess
import time

# Wait time after the script starts (in seconds)
START_DELAY = 120

# Interval between VM checks (in seconds)
CHECK_INTERVAL = 300

# Check for VMs that are stopped but not yet locked, and lock them in suspended mode
def check_vms():
    try:
        # Get the list of stopped VMs (the first column of `qm list` is the VMID)
        output = subprocess.check_output(
            "qm list | awk '$3 == \"stopped\" {print $1}'", shell=True
        ).decode()
        vmids = output.split()

        # Check and lock each VM
        for vmid in vmids:
            # Skip VMs that already have a lock set in their config
            config = subprocess.check_output(f"qm config {vmid}", shell=True).decode()
            if "lock:" in config:
                continue
            cmd = f"qm set {vmid} --lock suspended"
            subprocess.run(cmd, shell=True, check=True)
            print(f"VM {vmid} locked in suspended mode")

    except subprocess.CalledProcessError as e:
        print(f"Error while running the command: {e}")

# Main function
def main():
    # Wait for the initial start delay
    print(f"Waiting {START_DELAY} seconds...")
    time.sleep(START_DELAY)

    # Main loop
    while True:
        print("Checking VMs...")
        check_vms()
        print(f"Waiting {CHECK_INTERVAL} seconds...")
        time.sleep(CHECK_INTERVAL)

if __name__ == "__main__":
    main()
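
You can optionally run the script by hand first to check that it starts without errors (it waits START_DELAY seconds before the first check; stop it with Ctrl+C):

Code:
python3 /root/vm_autolock.py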


Create the service:

Code:
nano /etc/systemd/system/vmautolock.service



vmautolock.service:

Code:
[Unit]
Description=My script to autolock VMs

[Service]
User=root
WorkingDirectory=/root/
ExecStart=/usr/bin/python3 /root/vm_autolock.py
Restart=always
RestartSec=10
StartLimitInterval=0

[Install]
WantedBy=multi-user.target


Activate the service:

Code:
sudo systemctl enable vmautolock.service
sudo systemctl start vmautolock.service
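
You can then check that the service is running with:

Code:
systemctl status vmautolock.service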


Maybe this will help someone.
 
Hi, interesting workaround! It would be interesting to find out why setting the suspended lock helps. I had a quick look at the source and it seems this workaround may, depending on the VM config, increase the timeout (the time PVE waits for the VM to start up) from 30 seconds to 5 minutes. Maybe a timeout of 30s is a bit short for your setup -- could you try not setting the lock, but starting the VM with qm start VMID --timeout 300 instead?
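
For example, for the VM from your error messages that would be something like:

Code:
qm start 302 --timeout 300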
 
Wow, it worked! Thanks a lot! Did you see that in the source code? It looks like it's a problem with VMs that use PCI passthrough, which is my case for GPUs.
 
Now I just need to find a way to change the default timeout to 300, in order to use the web GUI.
 
Glad to hear it works!

Now I just need to find a way to change the default timeout to 300, in order to use the web GUI.
Unfortunately, there currently is no option to set a default timeout. There is an open feature request with a patch [1], but the patch hasn't been applied yet.

Do I understand correctly that this issue only occurs for one VM that has PCI passthrough enabled, or does it also happen for other VMs?
Could you please show the VM config (the output of qm config VMID)?

[1] https://bugzilla.proxmox.com/show_bug.cgi?id=3502
 
Hello, yes, I observed this behaviour only on complex VMs with GPU passthrough, and only when I start more than one of them. Here is the config of one of these VMs:


Code:
root@crn-h1:~# qm config 302
agent: 1
args: -cpu host,-hypervisor
audio0: device=ich9-intel-hda,driver=none
balloon: 0
bios: ovmf
boot: order=scsi0
cores: 20
cpu: host
efidisk0: VMOS:vm-302-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
hostpci0: 0000:c4:00,pcie=1,x-vga=1
ide2: local:iso/virtio-win-0.1.229.iso,media=cdrom,size=522284K
lock: suspended
machine: pc-q35-7.1
memory: 50000
meta: creation-qemu=7.1.0,ctime=1674207720
name: CRN-VM2
net0: virtio=6A:06:97:D4:F5:AE,bridge=vmbr0,firewall=1
net1: virtio=76:A8:EE:3E:E4:F5,bridge=vmbr1,firewall=1,mtu=9014
numa: 1
ostype: win10
scsi0: VMOS:vm-302-disk-1,cache=writeback,iothread=1,size=200G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=6d57d975-20b7-4367-8368-90f815289275
sockets: 1
tablet: 0
vga: virtio
vmgenid: 45b8dd2d-317d-431b-9a09-e8235014e8d1
root@crn-h1:~#
 
Hi, interesting workaround! It would be interesting to find out why setting the suspended lock helps. I had a quick look at the source and it seems this workaround may, depending on the VM config, increase the timeout (the time PVE waits for the VM to start up) from 30 seconds to 5 minutes. Maybe a timeout of 30s is a bit short for your setup -- could you try not setting the lock, but starting the VM with qm start VMID --timeout 300 instead?
Came here to say thanks, as qm start VMID --timeout 300 worked in a similar scenario.
 
