QEMU exited with code 1

milennium

Hello,

I had to shut down a 3-node cluster yesterday because of power outages during extreme weather conditions. I made a mistake by shutting down the iSCSI storage server before the 3-node cluster was down. It hosts only one virtual disk, for a Windows VM.


Since the end of the power outage, I get the same error every time I try to boot the Windows VM.
Code:
kvm: -device scsi-hd,bus=virtioscsi1.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1: unwanted /dev/sg*
TASK ERROR: start failed: QEMU exited with code 1

I checked whether the storage was still available and I can see nothing weird.
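
For the record, these are roughly the checks I mean (storage and disk names taken from my config below; adjust to yours):

Code:
# on the iSCSI storage server: pool health and the backing zvol
zpool status -x
zfs list -t volume | grep vm-450211-disk-1

# on the PVE node: is the storage active and does it list content?
pvesm status
pvesm list zfs-iscsi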

I ran journalctl -f during the VM boot, and it seems to show other errors, not related to storage:

Code:
May 23 11:02:32 pve pvedaemon[17437]: start VM 450211: UPID:pve:0000441D:0001243E:664F5A88:qmstart:450211:root@pam:
May 23 11:02:32 pve pvedaemon[3235]: <root@pam> starting task UPID:pve:0000441D:0001243E:664F5A88:qmstart:450211:root@pam:
May 23 11:02:33 pve systemd[1]: Started 450211.scope.
May 23 11:02:34 pve charon[2907]: 05[KNL] interface tap450211i0 activated
May 23 11:02:34 pve kernel: tap450211i0: entered promiscuous mode
May 23 11:02:34 pve charon[2907]: 07[KNL] interface fwbr450211i0 activated
May 23 11:02:34 pve charon[2907]: 11[KNL] interface fwln450211i0 activated
May 23 11:02:34 pve charon[2907]: 09[KNL] interface fwpr450211p0 activated
May 23 11:02:34 pve kernel: vmbr1: port 11(fwpr450211p0) entered blocking state
May 23 11:02:34 pve kernel: vmbr1: port 11(fwpr450211p0) entered disabled state
May 23 11:02:34 pve kernel: fwpr450211p0: entered allmulticast mode
May 23 11:02:34 pve kernel: fwpr450211p0: entered promiscuous mode
May 23 11:02:34 pve kernel: vmbr1: port 11(fwpr450211p0) entered blocking state
May 23 11:02:34 pve kernel: vmbr1: port 11(fwpr450211p0) entered forwarding state
May 23 11:02:34 pve kernel: fwbr450211i0: port 1(fwln450211i0) entered blocking state
May 23 11:02:34 pve kernel: fwbr450211i0: port 1(fwln450211i0) entered disabled state
May 23 11:02:34 pve kernel: fwln450211i0: entered allmulticast mode
May 23 11:02:34 pve kernel: fwln450211i0: entered promiscuous mode
May 23 11:02:34 pve kernel: fwbr450211i0: port 1(fwln450211i0) entered blocking state
May 23 11:02:34 pve kernel: fwbr450211i0: port 1(fwln450211i0) entered forwarding state
May 23 11:02:34 pve kernel: fwbr450211i0: port 2(tap450211i0) entered blocking state
May 23 11:02:34 pve kernel: fwbr450211i0: port 2(tap450211i0) entered disabled state
May 23 11:02:34 pve kernel: tap450211i0: entered allmulticast mode
May 23 11:02:34 pve kernel: fwbr450211i0: port 2(tap450211i0) entered blocking state
May 23 11:02:34 pve kernel: fwbr450211i0: port 2(tap450211i0) entered forwarding state
May 23 11:02:35 pve charon[2907]: 06[KNL] interface tap450211i0 deleted
May 23 11:02:35 pve kernel: tap450211i0: left allmulticast mode
May 23 11:02:35 pve kernel: fwbr450211i0: port 2(tap450211i0) entered disabled state
May 23 11:02:35 pve charon[2907]: 07[KNL] interface tap450211i0 activated
May 23 11:02:35 pve charon[2907]: 08[KNL] interface fwln450211i0 deactivated
May 23 11:02:35 pve charon[2907]: 10[KNL] interface fwpr450211p0 deactivated
May 23 11:02:35 pve kernel: fwbr450211i0: port 1(fwln450211i0) entered disabled state
May 23 11:02:35 pve kernel: vmbr1: port 11(fwpr450211p0) entered disabled state
May 23 11:02:35 pve charon[2907]: 15[KNL] interface fwln450211i0 deleted
May 23 11:02:35 pve kernel: fwln450211i0 (unregistering): left allmulticast mode
May 23 11:02:35 pve kernel: fwln450211i0 (unregistering): left promiscuous mode
May 23 11:02:35 pve kernel: fwbr450211i0: port 1(fwln450211i0) entered disabled state
May 23 11:02:35 pve bgpd[2594]: [VCGF0-X62M1][EC 100663301] INTERFACE_STATE: Cannot find IF fwln450211i0 in VRF 0
May 23 11:02:35 pve charon[2907]: 06[KNL] interface fwpr450211p0 deleted
May 23 11:02:35 pve kernel: fwpr450211p0 (unregistering): left allmulticast mode
May 23 11:02:35 pve kernel: fwpr450211p0 (unregistering): left promiscuous mode
May 23 11:02:35 pve kernel: vmbr1: port 11(fwpr450211p0) entered disabled state
May 23 11:02:35 pve bgpd[2594]: [VCGF0-X62M1][EC 100663301] INTERFACE_STATE: Cannot find IF fwpr450211p0 in VRF 0
May 23 11:02:35 pve charon[2907]: 12[KNL] interface fwbr450211i0 deactivated
May 23 11:02:35 pve charon[2907]: 10[KNL] interface fwbr450211i0 deleted
May 23 11:02:35 pve bgpd[2594]: [VCGF0-X62M1][EC 100663301] INTERFACE_STATE: Cannot find IF fwbr450211i0 in VRF 0
May 23 11:02:35 pve kernel:  zd736: p1 p2
May 23 11:02:35 pve charon[2907]: 15[KNL] interface tap450211i0 deactivated
May 23 11:02:35 pve charon[2907]: 14[KNL] interface tap450211i0 deleted
May 23 11:02:35 pve bgpd[2594]: [VCGF0-X62M1][EC 100663301] INTERFACE_STATE: Cannot find IF tap450211i0 in VRF 0
May 23 11:02:35 pve pvedaemon[3236]: VM 450211 qmp command failed - VM 450211 not running
May 23 11:02:35 pve systemd[1]: 450211.scope: Deactivated successfully.
May 23 11:02:35 pve systemd[1]: 450211.scope: Consumed 1.905s CPU time.
May 23 11:02:35 pve pvedaemon[17437]: start failed: QEMU exited with code 1
May 23 11:02:35 pve pvedaemon[3235]: <root@pam> end task UPID:pve:0000441D:0001243E:664F5A88:qmstart:450211:root@pam: start failed: QEMU exited with code 1


Code:
root@pve:~# pveversion -v

proxmox-ve: 8.2.0 (running kernel: 6.8.4-3-pve)
pve-manager: 8.2.2 (running version: 8.2.2/9355359cd7afbae4)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.4-3
proxmox-kernel-6.8.4-3-pve-signed: 6.8.4-3
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
proxmox-kernel-6.5.13-5-pve-signed: 6.5.13-5
proxmox-kernel-6.5: 6.5.13-5
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
dnsmasq: 2.89-1
frr-pythontools: 8.5.2-1+pve1
glusterfs-client: 10.3-5
ifupdown: residual config
ifupdown2: 3.2.0-1+pmx8
intel-microcode: 3.20231114.1~deb12u1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.6
libpve-cluster-perl: 8.0.6
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.1
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.2.1
libqb0: 1.0.5-1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
openvswitch-switch: residual config
proxmox-backup-client: 3.2.2-1
proxmox-backup-file-restore: 3.2.2-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.6
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.6
pve-container: 5.1.10
pve-docs: 8.2.2
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.0
pve-firewall: 5.0.7
pve-firmware: 3.11-1
pve-ha-manager: 4.0.4
pve-i18n: 3.2.2
pve-qemu-kvm: 8.1.5-6
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.3-pve2


Code:
root@pve:~# qm config 450211

agent: 1,fstrim_cloned_disks=1
balloon: 4096
boot: order=sata0
cores: 4
cpu: kvm64,flags=+aes
description: VEEAM windows server 2019%0A%0Aargs%3A -set device.scsi0.logical_block_size=4096 -set device.scsi0.physical_block_size=4096%0Aargs%3A -set device.scsi1.logical_block_size=4096 -set device.scsi1.physical_block_size=4096
hotplug: disk,network
machine: pc-q35-6.1
memory: 16000
meta: creation-qemu=6.1.0,ctime=1639538522
name: VWBCK01
net0: virtio=8A:3E:E3:58:3D:E0,bridge=vmbr1,firewall=1,tag=3102
numa: 1
ostype: win10
sata0: local-4k:vm-450211-disk-0,discard=on,format=raw,size=457863M,ssd=1
scsi0: datas:vm-450211-disk-0,discard=on,iothread=1,size=9000G
scsi1: zfs-iscsi:vm-450211-disk-1,discard=on,iothread=1,size=11000G
scsihw: virtio-scsi-single
smbios1: uuid=c7638b94-b5b2-4ec5-a2ea-9ee37fbbd385
sockets: 1
tablet: 0
vmgenid: 255aa8a0-d1e1-4f77-a4e3-fb19e7bc8c92


Code:
root@pve:~# kvm --version

QEMU emulator version 8.1.5 (pve-qemu-kvm_8.1.5-6)
Copyright (c) 2003-2023 Fabrice Bellard and the QEMU Project developers

I tried to find similar cases on the forum, but I have no idea.


Has anyone had the same error?
 
The error evolved to:

Code:
TASK ERROR: Could not find lu_name for zvol vm-450211-disk-1 at /usr/share/perl5/PVE/Storage/ZFSPlugin.pm line 112.

The root of the issue since the beginning: the iSCSI target service, rtslib-fb-targetctl, was starting before the zpool was imported.
In my case, I had to restore a known-good previous config file (from /etc/rtslib-fb-target/backup) and restart the rtslib-fb-targetctl service.
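
Roughly, the restore looked like this (the backup file name is a placeholder here; pick your most recent known-good one):

Code:
# list the configs targetcli saved over time
ls /etc/rtslib-fb-target/backup/
# put a known-good backup in place of the active config
cp /etc/rtslib-fb-target/backup/saveconfig-<timestamp>.json /etc/rtslib-fb-target/saveconfig.json
systemctl restart rtslib-fb-targetctl.service
# the LUN backing vm-450211-disk-1 should show up again
targetcli ls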

To make the target service wait for ZFS on future boots, I edited rtslib-fb-targetctl.service to add:

Code:
[Unit]
Requisite=zfs-mount.service
After=zfs-mount.service

I didn't have the chance to reboot and test it yet. I'll edit the post accordingly.
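
Side note: instead of editing the packaged unit file directly (a package update can overwrite it), the same two lines should also work as a systemd drop-in override, e.g.:

Code:
# creates /etc/systemd/system/rtslib-fb-targetctl.service.d/override.conf
systemctl edit rtslib-fb-targetctl.service
# paste the [Unit] lines above into the editor and save, then verify:
systemctl show rtslib-fb-targetctl.service -p After -p Requisite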
 
