Stopped start failed: QEMU exited with code 1

Hi,
I have this problem and the VM will not start:
Code:
kvm: -drive file=/dev/pve/vm-102-disk-0,if=none,id=drive-scsi1,format=raw,cache=none,aio=io_uring,detect-zeroes=on: Could not open '/dev/pve/vm-102-disk-0': No such file or directory
TASK ERROR: start failed: QEMU exited with code 1

Can anybody help me?
Hi,

please post the output of the following commands:
Code:
pveversion -v
qm config 102
qm showcmd 102 --pretty
lvs
 

Hey, today (25.01.2023) I updated my node/my Proxmox, after which I shut it down for a few minutes. After restarting it, my VM 102, on which my TrueNAS Scale is running, wouldn't boot and exits with the "QEMU exited with code 1" error message. Since all my family photos are stored on this VM, I would be extremely grateful if you could help me solve this problem.
I've read the thread history so far, but I didn't really understand what to do. However, it seems to be important whether I use GRUB, which I can hereby confirm.

Code:
kvm: -drive file=/dev/disk/by-id/wwn-0x5000c500e60216a8,if=none,id=drive-scsi1,format=raw,cache=none,aio=io_uring,detect-zeroes=on: Could not open '/dev/disk/by-id/wwn-0x5000c500e60216a8': No such file or directory
TASK ERROR: start failed: QEMU exited with code 1

Output of the following commands:
pveversion -v:
Code:
proxmox-ve: 7.2-1 (running kernel: 5.15.30-2-pve)
pve-manager: 7.2-3 (running version: 7.2-3/c743d6c1)
pve-kernel-helper: 7.2-2
pve-kernel-5.15: 7.2-1
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph-fuse: 15.2.16-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-8
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.1-6
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-1
libpve-storage-perl: 7.2-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.12-1
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.1.8-1
proxmox-backup-file-restore: 2.1.8-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-10
pve-cluster: 7.2-1
pve-container: 4.2-1
pve-docs: 7.2-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.4-1
pve-ha-manager: 3.3-4
pve-i18n: 2.7-1
pve-qemu-kvm: 6.2.0-5
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-2
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.4-pve1

qm config 102:
Code:
boot: order=scsi0
cores: 6
memory: 10240
meta: creation-qemu=6.2.0,ctime=1670083928
name: TrueNAS
net0: virtio=0E:E0:98:3F:8D:39,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-lvm:vm-102-disk-0,size=20G,ssd=1
scsi1: /dev/disk/by-id/wwn-0x5000c500e60216a8,size=7814026584K
scsi2: /dev/disk/by-id/wwn-0x5000c500e60141ec,size=7814026584K
scsi3: /dev/disk/by-id/wwn-0x5000c500e601c85b,size=7814026584K
scsihw: virtio-scsi-pci
smbios1: uuid=6d4faa8b-af3a-4f40-8321-f192cdcc0549
sockets: 1
vmgenid: ac42dea7-d3fc-4c73-9b8f-b0da859c9def

qm showcmd 102 --pretty:
Code:
/usr/bin/kvm \
  -id 102 \
  -name TrueNAS \
  -no-shutdown \
  -chardev 'socket,id=qmp,path=/var/run/qemu-server/102.qmp,server=on,wait=off' \
  -mon 'chardev=qmp,mode=control' \
  -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' \
  -mon 'chardev=qmp-event,mode=control' \
  -pidfile /var/run/qemu-server/102.pid \
  -daemonize \
  -smbios 'type=1,uuid=6d4faa8b-af3a-4f40-8321-f192cdcc0549' \
  -smp '6,sockets=1,cores=6,maxcpus=6' \
  -nodefaults \
  -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
  -vnc 'unix:/var/run/qemu-server/102.vnc,password=on' \
  -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
  -m 10240 \
  -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
  -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' \
  -device 'vmgenid,guid=ac42dea7-d3fc-4c73-9b8f-b0da859c9def' \
  -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' \
  -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' \
  -device 'VGA,id=vga,bus=pci.0,addr=0x2' \
  -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
  -iscsi 'initiator-name=iqn.1993-08.org.debian:01:6c6f19a981f' \
  -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
  -drive 'file=/dev/pve/vm-102-disk-0,if=none,id=drive-scsi0,format=raw,cache=none,aio=io_uring,detect-zeroes=on' \
  -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,rotation_rate=1,bootindex=100' \
  -drive 'file=/dev/disk/by-id/wwn-0x5000c500e60216a8,if=none,id=drive-scsi1,format=raw,cache=none,aio=io_uring,detect-zeroes=on' \
  -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1' \
  -drive 'file=/dev/disk/by-id/wwn-0x5000c500e60141ec,if=none,id=drive-scsi2,format=raw,cache=none,aio=io_uring,detect-zeroes=on' \
  -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=2,drive=drive-scsi2,id=scsi2' \
  -drive 'file=/dev/disk/by-id/wwn-0x5000c500e601c85b,if=none,id=drive-scsi3,format=raw,cache=none,aio=io_uring,detect-zeroes=on' \
  -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=3,drive=drive-scsi3,id=scsi3' \
  -netdev 'type=tap,id=net0,ifname=tap102i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' \
  -device 'virtio-net-pci,mac=0E:E0:98:3F:8D:39,netdev=net0,bus=pci.0,addr=0x12,id=net0' \
  -machine 'type=pc+pve0'

lvs:
Code:
LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz-- 339.33g             61.96  2.56                           
  root          pve -wi-ao----  96.00g                                                   
  swap          pve -wi-ao----   7.00g                                                   
  vm-101-disk-0 pve Vwi-a-tz--   5.00g data        99.46                                 
  vm-102-disk-0 pve Vwi-a-tz--  20.00g data        24.21                                 
  vm-103-disk-1 pve Vwi-aotz--  16.00g data        88.83                                 
  vm-104-disk-0 pve Vwi-a-tz--   8.00g data        97.17                                 
  vm-105-disk-0 pve Vwi-a-tz--   4.00m data        14.06                                 
  vm-105-disk-1 pve Vwi-a-tz-- 150.00g data        54.52                                 
  vm-105-disk-2 pve Vwi-a-tz--   4.00m data        1.56                                   
  vm-106-disk-0 pve Vwi-a-tz--  25.00g data        94.10                                 
  vm-108-disk-0 pve Vwi-a-tz--  50.00g data        99.17                                 
  vm-109-disk-0 pve Vwi-a-tz--  12.00g data        25.90                                 
  vm-110-disk-0 pve Vwi-a-tz--  40.00g data        44.66                                 
  vm-111-disk-0 pve Vwi-a-tz--   8.00g data        18.86                                 
  vm-200-disk-0 pve Vwi-a-tz--   8.00g data        13.26
 
Hi,
did the upgrade include a kernel upgrade? If yes, I'd try booting an older kernel and see if that helps.

Code:
kvm: -drive file=/dev/disk/by-id/wwn-0x5000c500e60216a8,if=none,id=drive-scsi1,format=raw,cache=none,aio=io_uring,detect-zeroes=on: Could not open '/dev/disk/by-id/wwn-0x5000c500e60216a8': No such file or directory
TASK ERROR: start failed: QEMU exited with code 1
The disk seems to be missing. Can you check which disks are there with lsblk and ls -l /dev/disk/by-id? Maybe there's a message in /var/log/syslog from boot about the missing disk.
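For reference, a quick way to compare what the kernel currently sees with the stable ID the VM config references (a sketch; the wwn value is taken from the error above):
Code:
lsblk -o NAME,SIZE,MODEL,SERIAL,WWN
ls -l /dev/disk/by-id/ | grep wwn-0x5000c500e60216a8
grep -iE 'error|fail' /var/log/syslog | grep -iE 'ata|scsi|sd[a-z]'
If the wwn link is missing from /dev/disk/by-id, the kernel never saw the disk at all, which points at cabling, power, or the controller rather than Proxmox itself.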
 
Hey Fiona, by following your advice and looking at "ls -l /dev/disk/by-id", I just noticed that one of my hard drives was not being detected by Proxmox. I solved that, and everything is running well now. Thank you very much and have a nice day ^^
 
Glad you were able to solve it :)
Can you share the solution for future users stumbling upon this thread with the same issue?
 
Hi Parcival, I am new to Linux and Proxmox. When I took over the work from the previous team, we found that we had the same problem as you. Could you help us to solve this problem? I would appreciate it. I will certainly do my best.
I tried to run the commands Fiona gave, but they showed some messages I couldn't understand. I don't know how to check whether the hard drives are being detected by Proxmox or not.
 

Attachments: code.png, error.png, proxmox.png, lvs.png
Hello @teddy106913,
first of all, I want to point out that the right command for viewing the syslog is:
Code:
nano /var/log/syslog

How many disks are in your server, and how many of them are assigned to VM 103?
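If the file is long, a pager or the journal may be more convenient (a sketch; journalctl -b -1 assumes a persistent journal that survived the reboot):
Code:
less /var/log/syslog
journalctl -b -1 -p err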
 
Hi,
in the output of lvs, it can be seen that the logical volume for the VM (and the data thin pool as a whole) have not been activated. What does running vgchange -ay tell you? If it complains about pve/data_tmeta being active, please have a look here. Otherwise, please share the output it produces.
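As a rough sketch of what checking and activating can look like (the VG name pve is an assumption based on the outputs in this thread):
Code:
# the 5th character of the Attr field is 'a' when a LV is active, the 6th is 'o' when it is open
lvs -o lv_name,vg_name,lv_attr
vgchange -ay pve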
 
Dear Parcival, thanks for your reply. I put my screenshots in the attached files.

Dear Fiona, thanks for your reply. Here is the output from running the command you gave:

Code:
root@pve:/# vgchange -ay
/dev/sdb: open failed: No medium found
Check of pool pve/data failed (status:1). Manual repair required!
2 logical volume(s) in volume group "pve" now active


Best regards,

I am also attaching the Task History for VM 103. We shut the system down on Mar. 3 because of electrical maintenance. When we wanted to restart it on Mar. 7, it did not work.
 

Attachments

  • vm-103-disk.png
    vm-103-disk.png
    44.6 KB · Views: 11
  • vm-103 Task History.png
    vm-103 Task History.png
    167.6 KB · Views: 8
That sounds like the thin pool might've been corrupted. Depending on what exactly happened, it might be recoverable, but you might want to make a copy of the whole disk before attempting to do so. See the Quick Examples here: https://github.com/jthornber/thin-provisioning-tools
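For orientation, a repair attempt along the lines of those Quick Examples might look roughly like this (a sketch only, assuming the default pve/data thin pool, and only after making a copy of the whole disk as mentioned above):
Code:
# deactivate the thin pool, then let LVM attempt a metadata repair
lvchange -an pve/data
lvconvert --repair pve/data
# reactivate and check the result
lvchange -ay pve/data
lvs -a pve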
Dear Fiona, I am new to Linux and Proxmox. I am afraid I will make mistakes when repairing this problem. Would you mind providing more detail about how to fix it, such as step-by-step instructions? Thank you very much for your help.

Best regards,

Teddy
 
Hey, I'm not an expert like @fiona, but I would like to describe the problem I had; maybe it helps:
I can't say with complete certainty what the problem was for me, but it seems I had a loose connection or a similar issue with my SATA cabling. It is always good to make sure that your connections are seated properly.
 
Dear Fiona, I am new to Linux and Proxmox. I am afraid I will make mistakes when repairing this problem. Would you mind providing more detail about how to fix it, such as step-by-step instructions? Thank you very much for your help.
Unfortunately, I haven't done such repairs myself either. There should be tutorials out there, and the first step is to make a backup/clone of the disk, so you can try again if you do make a mistake.
 
Dear fiona, I used ChatGPT to fix this problem. Thanks for your help.

I will provide the commands that helped me solve this problem. I hope this can help someone who runs into the same problem as me.

In my case, it is vm-103 that cannot be opened, so you may try the first command:

Code:
xfs_repair /dev/<volume_group>/<logical_volume>

<volume_group> is usually "pve"; you can use the command lvs to check. <logical_volume> is "vm-103-disk-0" in my case, so the command becomes:

Code:
xfs_repair /dev/pve/vm-103-disk-0

If it shows information like the following:

Code:
xfs_repair /dev/pve/vm-103-disk-0
/dev/pve/vm-103-disk-0: No such file or directory
/dev/pve/vm-103-disk-0: No such file or directory

fatal error -- couldn't initialize XFS library

you may try the device-mapper path for VM 103 instead:

Code:
xfs_repair /dev/mapper/pve-vm--103--disk--0

Before using this command, please check that the directory /dev/mapper/ contains the mapping for VM 103. If it does not, use vgchange -ay to activate it.

After repairing the filesystem, you can restart your VM.

Those are all my steps for fixing my problem. I hope this can help others who have the same problem.

Finally, I would like to share with everyone that I am a completely inexperienced user of Proxmox and not familiar with Linux. I hope everyone can remain hopeful when facing difficulties and actively discuss with others to gain different knowledge and experiences. In the end, I want to say that the changes brought by this new generation are truly astonishing. I sincerely appreciate the help from Fiona and Parcival.

Best regards,

Teddy
 
xfs_repair has nothing to do with LVM-thin... As you can see from the error messages, the commands didn't even do anything, because the file wasn't present. The real question is why vgchange -ay worked this time.
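For anyone landing here later: the state of the thin pool itself can be inspected like this (a sketch, assuming the default pve/data pool):
Code:
lvs -a -o lv_name,lv_attr,data_percent,metadata_percent pve
dmsetup status | grep -i thin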
 
Code:
swtpm_setup: Not overwriting existing state file.
kvm: -device vfio-pci,host=0000:04:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,multifunction=on: Failed to mmap 0000:04:00.0 BAR 3. Performance may be slow
kvm: ../hw/pci/pci.c:1562: pci_irq_handler: Assertion `0 <= irq_num && irq_num < PCI_NUM_PINS' failed.
stopping swtpm instance (pid 3748) due to QEMU startup error
TASK ERROR: start failed: QEMU exited with code 1
 
please open a new thread and provide (at least) the following details; a few example commands for collecting them are sketched after this list:

- pveversion -v
- VM config
- full task log
- system log covering the period of the start attempt
- hardware details (chipset, PCI hardware that you try to pass through, ..)
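A few example commands for collecting these (a sketch; <VMID> is a placeholder for your VM's ID):
Code:
pveversion -v
qm config <VMID>
journalctl --since "10 minutes ago"   # covering the failed start
lspci -nnk                            # PCI devices and the drivers bound to them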
 
Similar issue over here: newly installed Proxmox, added an Ubuntu ISO, but VM 100 won't start. Any help would be really appreciated. Thanks in advance:

Code:
end task UPID:pve:0000376A:00063941:64CE51E8:vncproxy:100:root@pam: Failed to run vncproxy.
TASK ERROR: start failed: QEMU exited with code 1

proxmox-ve: 7.3-1 (running kernel: 5.15.74-1-pve)
pve-manager: 7.3-3 (running version: 7.3-3/c3928077)
pve-kernel-5.15: 7.2-14
pve-kernel-helper: 7.2-14
pve-kernel-5.15.74-1-pve: 5.15.74-1
ceph: 17.2.6-pve1
ceph-fuse: 17.2.6-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-8
libpve-guest-common-perl: 4.2-3
libpve-http-server-perl: 4.1-5
libpve-storage-perl: 7.2-12
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.7-1
proxmox-backup-file-restore: 2.2.7-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.3
pve-cluster: 7.3-1
pve-container: 4.4-2
pve-docs: 7.3-1
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-7
pve-firmware: 3.5-6
pve-ha-manager: 3.5.1
pve-i18n: 2.8-1
pve-qemu-kvm: 7.1.0-4
pve-xtermjs: 4.16.0-1
qemu-server: 7.3-1
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+2
vncterm: 1.7-1
zfsutils-linux: 2.1.6-pve1

boot: order=scsi0;ide2;net0
cores: 1
ide2: local:iso/focal-server-cloudimg-amd64.img,media=cdrom,size=2252M
memory: 512
meta: creation-qemu=7.1.0,ctime=1691242956
name: core
net0: virtio=86:77:A5:EB:62:2B,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-lvm:vm-100-disk-0,iothread=1,size=5G
scsihw: virtio-scsi-single
smbios1: uuid=b9374f5f-aab9-40a2-a754-b1f556574520
sockets: 1
vmgenid: 3ec7b718-91c7-4dbc-8b64-49db404e5c1d

/usr/bin/kvm \
-id 100 \
-name 'core,debug-threads=on' \
-no-shutdown \
-chardev 'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server=on,wait=off' \
-mon 'chardev=qmp,mode=control' \
-chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' \
-mon 'chardev=qmp-event,mode=control' \
-pidfile /var/run/qemu-server/100.pid \
-daemonize \
-smbios 'type=1,uuid=b9374f5f-aab9-40a2-a754-b1f556574520' \
-smp '1,sockets=1,cores=1,maxcpus=1' \
-nodefaults \
-boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
-vnc 'unix:/var/run/qemu-server/100.vnc,password=on' \
-cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
-m 512 \
-object 'iothread,id=iothread-virtioscsi0' \
-device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
-device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' \
-device 'pci-bridge,id=pci.3,chassis_nr=3,bus=pci.0,addr=0x5' \
-device 'vmgenid,guid=3ec7b718-91c7-4dbc-8b64-49db404e5c1d' \
-device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' \
-device 'usb-tablet,id=tablet,bus=uhci.0,port=1' \
-device 'VGA,id=vga,bus=pci.0,addr=0x2' \
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:6351fc89b732' \
-drive 'file=/var/lib/vz/template/iso/focal-server-cloudimg-amd64.img,if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
-device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=101' \
-device 'virtio-scsi-pci,id=virtioscsi0,bus=pci.3,addr=0x1,iothread=iothread-virtioscsi0' \
-drive 'file=/dev/pve/vm-100-disk-0,if=none,id=drive-scsi0,format=raw,cache=none,aio=io_uring,detect-zeroes=on' \
-device 'scsi-hd,bus=virtioscsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' \
-netdev 'type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' \
-device 'virtio-net-pci,mac=86:77:A5:EB:62:2B,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=1024,bootindex=102' \
-machine 'type=pc+pve0'

LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
data          pve twi-aotz-- 428.50g             0.20   0.41
root          pve -wi-ao----  96.00g
swap          pve -wi-a-----   8.00g
vm-100-disk-0 pve Vwi-a-tz--   5.00g data        0.00
vm-101-disk-0 pve Vwi-a-tz--   8.00g data        10.62

Aug 5 09:04:45 pve pvedaemon[25694]: start VM 100: UPID:pve:0000645E:000DB386:64CE650D:qmstart:100:root@pam:
Aug 5 09:04:45 pve pvedaemon[8694]: <root@pam> starting task UPID:pve:0000645E:000DB386:64CE650D:qmstart:100:root@pam:
Aug 5 09:04:45 pve systemd[1]: Started 100.scope.
Aug 5 09:04:45 pve systemd-udevd[25711]: Using default interface naming scheme 'v247'.
Aug 5 09:04:45 pve systemd-udevd[25711]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Aug 5 09:04:46 pve pvedaemon[8695]: VM 100 qmp command failed - VM 100 not running
Aug 5 09:04:46 pve pvedaemon[8694]: VM 100 qmp command failed - VM 100 not running
Aug 5 09:04:46 pve systemd[1]: 100.scope: Succeeded.
Aug 5 09:04:46 pve systemd[1]: 100.scope: Consumed 1.217s CPU time.
Aug 5 09:04:46 pve pvedaemon[25694]: start failed: QEMU exited with code 1
Aug 5 09:04:46 pve pvedaemon[8694]: <root@pam> end task UPID:pve:0000645E:000DB386:64CE650D:qmstart:100:root@pam: start failed: QEMU exited with code 1
Aug 5 09:04:46 pve pvedaemon[8694]: <root@pam> starting task UPID:pve:00006477:000DB432:64CE650E:vncproxy:100:root@pam:
Aug 5 09:04:46 pve pvedaemon[25719]: starting vnc proxy UPID:pve:00006477:000DB432:64CE650E:vncproxy:100:root@pam:
Aug 5 09:04:46 pve pveproxy[13860]: proxy detected vanished client connection
Aug 5 09:04:56 pve pvedaemon[25719]: connection timed out
Aug 5 09:04:56 pve pvedaemon[8694]: <root@pam> end task UPID:pve:00006477:000DB432:64CE650E:vncproxy:100:root@pam: connection timed out
 
SOLVED:

Getting into the root console and running:

Bash:
root@pve:/var/log# qm start 100

Got:
Bash:
bridge 'vmbr0' does not exist
kvm: -netdev type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on: network script /var/lib/qemu-server/pve-bridge failed with status 512
start failed: QEMU exited with code 1

Creating vmbr0 with its default config solved it: https://pve.proxmox.com/wiki/Network_Configuration (section "Default Configuration using a Bridge")
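For reference, the default bridged setup from that wiki section looks roughly like this in /etc/network/interfaces (the NIC name and addresses are placeholders; use your own):
Code:
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2/24
        gateway 192.168.10.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
After editing, apply it with ifreload -a (from ifupdown2) or a reboot.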
 
