[SOLVED] Cannot start VM after upgrading to 7.0-11

budi

New Member
Oct 21, 2021
hi,

I need some help. After upgrading Proxmox from version 6 to 7, a newly created VM fails to start with the error below:

Code:
kvm: -drive file=/var/lib/vz/template/iso/FreePBX-32bit-10.13.66.iso,if=none,id=drive-ide2,media=cdrom,aio=io_uring: Unable to use io_uring: failed to init linux io_uring ring: Function not implemented
TASK ERROR: start failed: QEMU exited with code 1

The ISO file is in place (see the attached screenshot).

thanks in advance
 
hi Moayad,

thanks for your reply

this is the output of both commands:

Code:
root@jkta-svr-stg:~# pveversion -v
proxmox-ve: 7.0-2 (running kernel: 4.15.18-9-pve)
pve-manager: 7.0-11 (running version: 7.0-11/63d82f4e)
pve-kernel-5.11: 7.0-6
pve-kernel-helper: 7.0-6
pve-kernel-5.4: 6.4-5
pve-kernel-5.11.22-3-pve: 5.11.22-6
pve-kernel-5.4.128-1-pve: 5.4.128-1
pve-kernel-4.15: 5.4-19
pve-kernel-4.15.18-30-pve: 4.15.18-58
pve-kernel-4.15.18-9-pve: 4.15.18-30
ceph-fuse: 14.2.21-1
corosync: 3.1.2-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.21-pve1
libproxmox-acme-perl: 1.2.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-5
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-10
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.8-1
proxmox-backup-file-restore: 2.0.8-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.3-6
pve-cluster: 7.0-3
pve-container: 4.0-9
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-2
pve-firmware: 3.2-4
pve-ha-manager: 3.3-1
pve-i18n: 2.4-1
pve-qemu-kvm: 6.0.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-13
smartmontools: 7.2-pve2
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.5-pve1

Code:
root@jkta-svr-stg:~# qm config 102
boot: order=scsi0;ide2;net0
cores: 1
ide2: local:iso/FreePBX-32bit-10.13.66.iso,media=cdrom
memory: 1024
name: pbx1366
net0: virtio=D2:91:DC:71:72:FC,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-lvm:vm-102-disk-0,size=50G
scsihw: virtio-scsi-pci
smbios1: uuid=5602ce59-a2f2-40e0-a988-b1df3e1c9162
sockets: 1
vmgenid: dac2acf2-e9f8-4326-8af4-220089f20740
 
proxmox-ve: 7.0-2 (running kernel: 4.15.18-9-pve)
Why are you running your host on the old pve-kernel?

Could you please try running your PVE host on the latest pve-kernel version and see if the issue still occurs?
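
For reference, io_uring is only available on kernel 5.1 and newer, which is why QEMU fails with "Function not implemented" on the old 4.15 kernel. Below is a minimal sketch of how to check which kernel is running versus which pve-kernel packages are installed, plus a hypothetical stopgap if a reboot has to wait (the exact drive option syntax may differ on your qemu-server version, so check man qm first):

Code:
# Compare the running kernel with the installed pve-kernel packages
uname -r
dpkg -l 'pve-kernel-*' | grep '^ii'

# Reboot so the host boots the newer 5.11 kernel
reboot

# Hypothetical stopgap only: switch the CD-ROM drive's async IO backend
# away from io_uring for VM 102 (verify the option syntax with "man qm")
qm set 102 --ide2 local:iso/FreePBX-32bit-10.13.66.iso,media=cdrom,aio=native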
 
Over the weekend I also ran into this problem.

lvscan shows all of the VM logical volumes as inactive.

The /dev/vms directory is missing.
lvchange -ay /dev/vms activates all the disks and the machines start.
After rebooting the server everything is back to the same state: the disks are inactive.

The logs contain the following:
Code:
Oct 23 10:23:34 pve1 lvm[961]:   pvscan[961] PV /dev/sdc online, VG vms is complete.
Oct 23 10:23:34 pve1 lvm[961]:   pvscan[961] VG vms skip autoactivation.

This happened after upgrading from 7.0-11 to 7.0-13.
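
In case it helps others hitting the same symptom, here is a minimal sketch of the commands to inspect and re-activate the volume group manually (assuming the VG is called "vms" as in the log above):

Code:
# Confirm the VG is visible and see which LVs are inactive
vgs
lvscan

# Activate every LV in the VG (same effect as lvchange -ay /dev/vms)
vgchange -ay vms

# Check whether lvm.conf carries an activation filter that excludes the VG
grep -n 'auto_activation_volume_list\|global_filter' /etc/lvm/lvm.conf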
 
Why are you running your host on the old pve-kernel?

Could you please try running your PVE host on the latest pve-kernel version and see if the issue still occurs?
After rebooting the server I can now start the VM. As the version output shows, the kernel is now updated.



Thanks for your help, Moayad.

best regards,

budi
 
Hi,

Glad that you solved your issue :)

I'll go ahead and set your thread as [SOLVED] to help other people who have the same issue.