Ubuntu VM suddenly fails to boot with VirtIO SCSI controller

rtgy

New Member
May 14, 2023
I have a fully up-to-date Ubuntu 18.04 VM that is suddenly failing to boot because it can't find its logical volume.
I think this may have started when I updated to Proxmox 7.4-3, but I can't be sure as I didn't check the VM straight away.

If I change the SCSI controller to a non-VirtIO one, it boots OK, but then it can't find its network interfaces (VirtIO).
Again, if I change those to E1000, they start working again.

[Screenshot attached: VM boot failure]

Any ideas? I have the qemu guest agent installed but it seems like the kernel is somehow missing the required drivers.
 
Hi,
that does sound strange. Please share the output of pveversion -v and qm config <ID>, with <ID> being your VM's ID. Are you using a special kernel inside the VM? What does uname -a in the VM tell you?

You could try downgrading to an older version of QEMU to see if that makes a difference. Check /var/log/apt/history.log (and the rotated logs) to see which version you upgraded from, then run apt install pve-qemu-kvm=X.Y.Z-W, replacing the placeholders with the previously installed version.
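Finding the old version in the apt history log can be scripted. A minimal sketch — the log excerpt below is made up for illustration; on a real host you would search /var/log/apt/history.log itself (plus the rotated history.log.*.gz copies, e.g. via zgrep):

```shell
# Illustrative apt history entry; a real host reads /var/log/apt/history.log.
cat > /tmp/history.log <<'EOF'
Start-Date: 2023-05-10  09:12:01
Commandline: apt full-upgrade
Upgrade: pve-qemu-kvm:amd64 (7.1.0-4, 7.2.0-8)
End-Date: 2023-05-10  09:13:10
EOF

# Extract the old version: the first value inside the parentheses.
sed -n 's/.*pve-qemu-kvm:amd64 (\([^,]*\),.*/\1/p' /tmp/history.log
# prints 7.1.0-4
```

The printed string is then what goes into apt install pve-qemu-kvm=7.1.0-4.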

The guest agent should not be relevant here.
 
This is the current info after changing the network and storage type.
I'll try rolling back qemu :)

Versions
Code:
# pveversion -v
proxmox-ve: 7.4-1 (running kernel: 5.15.107-1-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-5.15: 7.4-2
pve-kernel-5.13: 7.1-9
pve-kernel-5.4: 6.4-11
pve-kernel-5.15.107-1-pve: 5.15.107-1
pve-kernel-5.15.83-1-pve: 5.15.83-1
pve-kernel-5.15.12-1-pve: 5.15.12-3
pve-kernel-5.15.5-1-pve: 5.15.5-1
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-3-pve: 5.13.19-7
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.4.157-1-pve: 5.4.157-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph-fuse: 14.2.21-1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-4
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.5
libpve-storage-perl: 7.4-2
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.1-1
proxmox-backup-file-restore: 2.4.1-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.1-1
proxmox-widget-toolkit: 3.6.5
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-2
pve-firewall: 4.3-1
pve-firmware: 3.6-5
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.4-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.11-pve1

VM Config
Code:
# qm config 110
agent: 1
bios: ovmf
boot: order=scsi0;scsi1
cores: 4
cpu: host
efidisk0: nvme-gen4:vm-110-disk-0,size=4M
machine: pc-q35-7.0
memory: 10240
name: docker2
net0: e1000=46:92:35:0F:89:9B,bridge=vmbr0
net1: e1000=DA:E8:0F:3A:EB:49,bridge=vmbr1
numa: 0
onboot: 1
ostype: l26
scsi0: nvme-gen4:vm-110-disk-1,size=42G
scsi1: nvme-gen4:vm-110-disk-2,size=100G
scsihw: pvscsi
smbios1: uuid=d7395f10-2693-4eac-8416-e42b7f4fbd6e
sockets: 1
vmgenid: 560d28e0-c7be-4815-b039-36ba857908b4

Kernel
Code:
~ ❯ uname -a
Linux docker2 4.15.0-210-generic #221-Ubuntu SMP Tue Apr 18 08:32:52 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
 
Previous version of pve-qemu-kvm was 7.1.0-4, I rolled it back and tried VirtIO again and it's still failing to boot.
The config with VirtIO:
Code:
# qm config 110
agent: 1
bios: ovmf
boot: order=scsi0;scsi1
cores: 4
cpu: host
efidisk0: nvme-gen4:vm-110-disk-0,size=4M
machine: pc-q35-7.0
memory: 10240
name: docker2
net0: e1000=46:92:35:0F:89:9B,bridge=vmbr0
net1: e1000=DA:E8:0F:3A:EB:49,bridge=vmbr1
numa: 0
onboot: 1
ostype: l26
scsi0: nvme-gen4:vm-110-disk-1,size=42G
scsi1: nvme-gen4:vm-110-disk-2,size=100G
scsihw: virtio-scsi-pci
smbios1: uuid=d7395f10-2693-4eac-8416-e42b7f4fbd6e
sockets: 1
vmgenid: 560d28e0-c7be-4815-b039-36ba857908b4
 
Previous version of pve-qemu-kvm was 7.1.0-4, I rolled it back and tried VirtIO again and it's still failing to boot.
So it's not because of that upgrade. Did you upgrade the kernel inside the VM recently or change some settings there? Can you try booting with an older kernel inside the VM?
 
So it's not because of that upgrade. Did you upgrade the kernel inside the VM recently or change some settings there? Can you try booting with an older kernel inside the VM?
No I didn't; it hadn't been updated in a while. Once I got it booted again I did a dist-upgrade and tried VirtIO again, but that didn't help.
I tried the previous two kernels that were still installed, and they all behave the same.
If it helps: at the initramfs prompt I can't see any disks under /dev at all.
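For anyone debugging a similar case, a few checks at the initramfs prompt can narrow down whether the virtio driver is even loaded (only runnable at that prompt, so this is just a sketch):

```shell
ls /dev/sd* /dev/vd* 2>/dev/null   # are any disks visible at all?
lsmod | grep -i virtio             # is virtio_scsi / virtio_pci loaded?
modprobe virtio_scsi               # try loading the SCSI driver manually
```

If the module loads but no disks appear, the problem is more likely on the device side (firmware/controller) than in the guest kernel.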
 
Since you are using EFI, can you check if downgrading pve-edk2-firmware makes a difference?
 
That worked!
I rolled back from pve-edk2-firmware_3.20230228-2 to pve-edk2-firmware_3.20220526-1 and the VM boots with the VirtIO controller and the network interfaces work as well :)
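For anyone else hitting this, the downgrade plus a hold to stop apt re-upgrading the package would look roughly like the following (version strings taken from this thread; the exact versions available depend on your repository):

```shell
apt install pve-edk2-firmware=3.20220526-1
apt-mark hold pve-edk2-firmware    # keep apt from upgrading it again

# Once a fixed build is released:
# apt-mark unhold pve-edk2-firmware && apt full-upgrade
```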
 
I reproduced the issue now, and it seems that using CPU type kvm64 instead of host for the VM is another workaround. Then you don't have to keep the package downgraded.
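If you'd rather change the CPU type than pin the firmware package, that can be done from the host CLI (VM ID 110 as in this thread; the VM needs a full stop/start afterwards):

```shell
qm set 110 --cpu kvm64   # switch from 'host' to the generic kvm64 model
```

Note that kvm64 exposes far fewer CPU features to the guest than host, which may matter for some workloads.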
 