VM "task error: start failed: QEMU exited with code 1"

Blake43

New Member
May 29, 2024
So, I had a fully working OMV VM. I restarted it and saw this error; that was about two weeks ago. I asked around and no one replied, so I just wiped everything and started again, and it worked great for over a week. Then I restarted my machine so the RAM cache would clear and to further prove the system was ready to be moved to a remote location, and now I cannot boot my VM again. Same error as before.
[Attached screenshot: Screenshot 2024-06-02 131621.png]

So far I've:
  1. Changed the display to SPICE (qxl), because I saw that suggested in another forum post.
  2. Restarted the entire node again.
I also saw this post, which talks about enabling AES-NI on the motherboard; I haven't checked yet whether it's enabled or not.
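(If it helps, assuming a Linux host, a quick way to check whether the CPU actually exposes AES is something like the following; the exact flag list varies by CPU:)
Code:
# either of these should print something if AES-NI is visible to the host
grep -m1 -o aes /proc/cpuinfo
lscpu | grep -i aes
If neither prints anything, the host CPU doesn't advertise AES, which could matter for a VM CPU type like x86-64-v2-AES.

I'm still a noob, but here are the logs: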

Jun 02 13:27:01 pve2 pvedaemon[1142]: <root@pam> starting task UPID pve2:00001570:0001D146:665CD595:qmstart:102:root@pam:
Jun 02 13:27:01 pve2 pvedaemon[5488]: start VM 102: UPID pve2:00001570:0001D146:665CD595:qmstart:102:root@pam:
Jun 02 13:27:01 pve2 systemd[1]: Started 102.scope.
Jun 02 13:27:02 pve2 kernel: tap102i0: entered promiscuous mode
Jun 02 13:27:02 pve2 kernel: vmbr0: port 2(fwpr102p0) entered blocking state
Jun 02 13:27:02 pve2 kernel: vmbr0: port 2(fwpr102p0) entered disabled state
Jun 02 13:27:02 pve2 kernel: fwpr102p0: entered allmulticast mode
Jun 02 13:27:02 pve2 kernel: fwpr102p0: entered promiscuous mode
Jun 02 13:27:02 pve2 kernel: vmbr0: port 2(fwpr102p0) entered blocking state
Jun 02 13:27:02 pve2 kernel: vmbr0: port 2(fwpr102p0) entered forwarding state
Jun 02 13:27:02 pve2 kernel: fwbr102i0: port 1(fwln102i0) entered blocking state
Jun 02 13:27:02 pve2 kernel: fwbr102i0: port 1(fwln102i0) entered disabled state
Jun 02 13:27:02 pve2 kernel: fwln102i0: entered allmulticast mode
Jun 02 13:27:02 pve2 kernel: fwln102i0: entered promiscuous mode
Jun 02 13:27:02 pve2 kernel: fwbr102i0: port 1(fwln102i0) entered blocking state
Jun 02 13:27:02 pve2 kernel: fwbr102i0: port 1(fwln102i0) entered forwarding state
Jun 02 13:27:02 pve2 kernel: fwbr102i0: port 2(tap102i0) entered blocking state
Jun 02 13:27:02 pve2 kernel: fwbr102i0: port 2(tap102i0) entered disabled state
Jun 02 13:27:02 pve2 kernel: tap102i0: entered allmulticast mode
Jun 02 13:27:02 pve2 kernel: fwbr102i0: port 2(tap102i0) entered blocking state
Jun 02 13:27:02 pve2 kernel: fwbr102i0: port 2(tap102i0) entered forwarding state
Jun 02 13:27:02 pve2 kernel: tap102i0: left allmulticast mode
Jun 02 13:27:02 pve2 kernel: fwbr102i0: port 2(tap102i0) entered disabled state
Jun 02 13:27:02 pve2 kernel: fwbr102i0: port 1(fwln102i0) entered disabled state
Jun 02 13:27:02 pve2 kernel: vmbr0: port 2(fwpr102p0) entered disabled state
Jun 02 13:27:02 pve2 kernel: fwln102i0 (unregistering): left allmulticast mode
Jun 02 13:27:02 pve2 kernel: fwln102i0 (unregistering): left promiscuous mode
Jun 02 13:27:02 pve2 kernel: fwbr102i0: port 1(fwln102i0) entered disabled state
Jun 02 13:27:02 pve2 kernel: fwpr102p0 (unregistering): left allmulticast mode
Jun 02 13:27:02 pve2 kernel: fwpr102p0 (unregistering): left promiscuous mode
Jun 02 13:27:02 pve2 kernel: vmbr0: port 2(fwpr102p0) entered disabled state
Jun 02 13:27:02 pve2 pvedaemon[1142]: VM 102 qmp command failed - VM 102 not running
Jun 02 13:27:02 pve2 systemd[1]: 102.scope: Deactivated successfully.
Jun 02 13:27:02 pve2 systemd[1]: 102.scope: Consumed 1.496s CPU time.
Jun 02 13:27:02 pve2 pvedaemon[5488]: start failed: QEMU exited with code 1
Jun 02 13:27:02 pve2 pvedaemon[1142]: <root@pam> end task UPID pve2:00001570:0001D146:665CD595:qmstart:102:root@pam: start failed: QEMU exited with code 1
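(The journal above only records the generic exit code. If it helps, starting the VM from a shell usually prints QEMU's actual error message, and qm showcmd shows the full command line it would run:)
Code:
qm start 102
qm showcmd 102 --pretty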


I could really use the help, haha. Thanks!
 
Hi,
please share the output of pveversion -v and qm config 102.
 
Hi,
please share the output of pveversion -v
root@pve2:~# pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.8.4-3-pve)
pve-manager: 8.2.2 (running version: 8.2.2/9355359cd7afbae4)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.4-3
proxmox-kernel-6.8.4-3-pve-signed: 6.8.4-3
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
ceph-fuse: 17.2.7-pve3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.6
libpve-cluster-perl: 8.0.6
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.2
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.2.1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.2-1
proxmox-backup-file-restore: 3.2.2-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.6
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.6
pve-container: 5.1.10
pve-docs: 8.2.2
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.0
pve-firewall: 5.0.7
pve-firmware: 3.11-1
pve-ha-manager: 4.0.4
pve-i18n: 3.2.2
pve-qemu-kvm: 8.1.5-6
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.3-pve2

qm config 102.
root@pve2:~# qm config 102
agent: 1
boot: order=scsi0;net0;ide2
cores: 4
cpu: x86-64-v2-AES
ide2: local:iso/openmediavault_7.0-32-amd64.iso,media=cdrom,size=936M
memory: 4096
meta: creation-qemu=8.1.5,ctime=1717043356
name: OMV-Backup
net0: virtio=BC:24:11:B9:C6:3F,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: local-lvm:vm-102-disk-0,size=32G
scsi1: HDD1:102/vm-102-disk-0.raw,size=14162654659K
scsi2: HDD2:102/vm-102-disk-0.raw,size=14162654659K
scsi3: HDD3:102/vm-102-disk-0.raw,size=14162654659K
smbios1: uuid=b2182ef5-4831-4d8d-a194-81f3fba1ad2f
sockets: 1
vmgenid: ada6edf1-5e71-4c70-a301-2a3f79898e9b
vmstatestorage: local-lvm
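(One thing that might be worth confirming with directory-backed raw disks like these: that HDD1, HDD2 and HDD3 are actually mounted and active when the VM starts, since QEMU exits with an error if it cannot open one of its backing files. Assuming they are ordinary PVE directory storages mounted under /mnt/pve, a quick check could look like:)
Code:
pvesm status
findmnt /mnt/pve/HDD1
findmnt /mnt/pve/HDD2
findmnt /mnt/pve/HDD3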
 
Please also share your storage configuration /etc/pve/storage.cfg. What is the output for the following:
Code:
pvesm path HDD1:102/vm-102-disk-0.raw
qemu-img info /path/you/got/from/the/previous/command/vm-102-disk-0.raw --output json
And similarly for the other two disks on HDD2 and HDD3.
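(If it's easier, all three can be checked in one go with a small loop that just chains the two commands above for each storage:)
Code:
for s in HDD1 HDD2 HDD3; do
    p=$(pvesm path "$s:102/vm-102-disk-0.raw")
    qemu-img info "$p" --output json
done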
 
Please also share your storage configuration /etc/pve/storage.cfg. What is the output for the following:
root@pve2:~# /etc/pve/storage.cfg
-bash: /etc/pve/storage.cfg: Permission denied
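(Side note: /etc/pve/storage.cfg is a plain text config file, so it needs to be read rather than executed, e.g.:)
Code:
cat /etc/pve/storage.cfg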

Code:
pvesm path HDD1:102/vm-102-disk-0.raw

qemu-img info /path/you/got/from/the/previous/command/vm-102-disk-0.raw --output json
And similarly for the other two disks on HDD2 and HDD3.
HDD1

root@pve2:~# pvesm path HDD1:102/vm-102-disk-0.raw
/mnt/pve/HDD1/images/102/vm-102-disk-0.raw

Code:
root@pve2:~# qemu-img info /mnt/pve/HDD1/images/102/vm-102-disk-0.raw --output json
{
    "children": [
        {
            "name": "file",
            "info": {
                "children": [
                ],
                "virtual-size": 14502558370816,
                "filename": "/mnt/pve/HDD1/images/102/vm-102-disk-0.raw",
                "format": "file",
                "actual-size": 7431314124800,
                "format-specific": {
                    "type": "file",
                    "data": {
                    }
                },
                "dirty-flag": false
            }
        }
    ],
    "virtual-size": 14502558370816,
    "filename": "/mnt/pve/HDD1/images/102/vm-102-disk-0.raw",
    "format": "raw",
    "actual-size": 7431314124800,
    "dirty-flag": false

HDD2

root@pve2:~# pvesm path HDD2:102/vm-102-disk-0.raw
/mnt/pve/HDD2/images/102/vm-102-disk-0.raw

Code:
root@pve2:~# qemu-img info /mnt/pve/HDD2/images/102/vm-102-disk-0.raw --output json
{
    "children": [
        {
            "name": "file",
            "info": {
                "children": [
                ],
                "virtual-size": 14502558370816,
                "filename": "/mnt/pve/HDD2/images/102/vm-102-disk-0.raw",
                "format": "file",
                "actual-size": 7431319175168,
                "format-specific": {
                    "type": "file",
                    "data": {
                    }
                },
                "dirty-flag": false
            }
        }
    ],
    "virtual-size": 14502558370816,
    "filename": "/mnt/pve/HDD2/images/102/vm-102-disk-0.raw",
    "format": "raw",
    "actual-size": 7431319175168,
    "dirty-flag": false

HDD3

root@pve2:~# pvesm path HDD3:102/vm-102-disk-0.raw
/mnt/pve/HDD3/images/102/vm-102-disk-0.raw

Code:
root@pve2:~# qemu-img info /mnt/pve/HDD3/images/102/vm-102-disk-0.raw --output json
{
    "children": [
        {
            "name": "file",
            "info": {
                "children": [
                ],
                "virtual-size": 14502558370816,
                "filename": "/mnt/pve/HDD3/images/102/vm-102-disk-0.raw",
                "format": "file",
                "actual-size": 7431319052288,
                "format-specific": {
                    "type": "file",
                    "data": {
                    }
                },
                "dirty-flag": false
            }
        }
    ],
    "virtual-size": 14502558370816,
    "filename": "/mnt/pve/HDD3/images/102/vm-102-disk-0.raw",
    "format": "raw",
    "actual-size": 7431319052288,
    "dirty-flag": false
}
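(For reference, the reported sizes line up with the VM config: size=14162654659K is 14162654659 × 1024 = 14502558370816 bytes, which is exactly the virtual-size qemu-img reports for all three images, so the image files themselves appear to be reachable and the expected size.)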
 
I managed to get a backup of my data off the old drive, so I ended up just wiping this whole setup and starting fresh.
 
