[SOLVED] Oracle Linux 9 VM Cannot Use VirtIO SCSI Controller

zl1952

New Member
May 1, 2024
I am currently testing the native import feature for a PoC of migrating our standard VMs from ESXi to Proxmox. For the Windows VMs, it has been very seamless so far. However, I am running into a problem after importing an Oracle Linux 9 virtual machine. When the VM is set to use the VMware PVSCSI controller, it boots and works as expected. When the VirtIO SCSI or VirtIO SCSI single controller is set instead, the VM fails to boot: it reaches the GRUB screen with the kernels listed, then after a kernel is selected there is a grey screen for a few minutes before the VM drops into emergency mode.
Simply stopping the VM, changing the SCSI controller back to VMware PVSCSI, and starting it again allows it to boot and function as expected.
If I understand correctly, VirtIO offers better performance for Proxmox VMs and would therefore be preferable to use.

The network device on the VM is using the virtio adapter type successfully.

Here is the relevant information about the OL9 VM:
Linux kernel = 5.15.0-204.147.6.2.el9uek.x86_64
BIOS = OVMF (UEFI)

NAME="Oracle Linux Server"
VERSION="9.3"
ID="ol"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="9.3"
PLATFORM_ID="platform:el9"
PRETTY_NAME="Oracle Linux Server 9.3"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:oracle:linux:9:3:server"
HOME_URL="https://linux.oracle.com/"
BUG_REPORT_URL="https://github.com/oracle/oracle-linux"

ORACLE_BUGZILLA_PRODUCT="Oracle Linux 9"
ORACLE_BUGZILLA_PRODUCT_VERSION=9.3
ORACLE_SUPPORT_PRODUCT="Oracle Linux"
ORACLE_SUPPORT_PRODUCT_VERSION=9.3


Here is the information about the Proxmox host:
proxmox-ve: 8.2.0 (running kernel: 6.8.4-2-pve)
pve-manager: 8.2.2 (running version: 8.2.2/9355359cd7afbae4)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.4-2
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
ceph-fuse: 17.2.7-pve3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.6
libpve-cluster-perl: 8.0.6
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.1
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.2.1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.0-1
proxmox-backup-file-restore: 3.2.0-1
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.6
proxmox-widget-toolkit: 4.2.1
pve-cluster: 8.0.6
pve-container: 5.0.10
pve-docs: 8.2.1
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.0
pve-firewall: 5.0.5
pve-firmware: 3.11-1
pve-ha-manager: 4.0.4
pve-i18n: 3.2.2
pve-qemu-kvm: 8.1.5-5
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.3-pve2
 
It sounds like the default Oracle kernel doesn't have the virtio disk module baked in; you may need to recompile it.
I'm currently downloading OL R9-U3 and will try a test install.
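
To narrow down whether it is the kernel or only the initramfs that lacks the drivers, something like this can be run inside the OL9 guest while it still boots via PVSCSI (a sketch; paths assume the stock dracut/UEK layout):
Code:
# Inspect the current initramfs for virtio drivers
lsinitrd /boot/initramfs-$(uname -r).img | grep -i virtio

# Check whether the virtio drivers are available for the running kernel at all
modinfo virtio_scsi virtio_blk | grep -E '^(name|filename):'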
 
If rescue mode isn't available, detach the disks and reattach them as IDE; you can usually get the kernel booted after playing with the controller and disk settings.
Then add a small VirtIO Block disk (1 GB or so) just so the driver gets loaded after booting, run dracut -f, and afterwards you can delete that 1 GB disk and change the real ones to VirtIO Block.
You may need to do this twice to change the controller and the disk separately.
I had multiple VMs (CentOS 6/7, Debian, Ubuntu, SUSE 15, RHEL 8, etc.), and it always worked somehow to get the VirtIO Block and VirtIO SCSI single controller working. Some didn't have rescue mode, because some had a GRUB timeout of 0 xD

There is no kernel without virtio drivers; I've never seen one.
There are just initramfs images without those modules, so you always need to regenerate the initramfs somehow :)
Cheers
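
As a rough sketch of that workaround on the Proxmox side (assuming VM ID 127 and the storage name from the config posted further down; the guest command runs inside the VM while it still boots on the old controller):
Code:
# On the Proxmox host: attach a small temporary VirtIO Block disk (assumed VMID 127, storage "zfs2nvme")
qm set 127 --virtio1 zfs2nvme:1

# Inside the guest, booted via the old controller: rebuild the initramfs so the
# now-loaded virtio drivers are included
dracut -f

# Back on the host, after shutting the guest down: detach the temporary disk
# (it becomes an unused disk that can then be removed) and switch the real disk to VirtIO
qm set 127 --delete virtio1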
 
Yep, a fresh install boots just fine on virtio disks.

Here is the vm config for reference:
Code:
agent: 1,fstrim_cloned_disks=1
balloon: 2048
bios: ovmf
boot: order=virtio0;ide2
cores: 2
cpu: x86-64-v2-AES,flags=+aes
efidisk0: zfs2nvme:vm-127-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
ide2: dir1:iso/OracleLinux-R9-U3-x86_64-dvd.iso,media=cdrom,size=10523M
machine: q35
memory: 4096
meta: creation-qemu=8.2.2,ctime=1714601689
name: oracle9-virtio-test
net0: virtio=BC:24:11:1D:83:06,bridge=vmbr0
numa: 0
ostype: l26
scsihw: virtio-scsi-single
smbios1: uuid=b1b998c9-1b3d-4f48-872b-b05c2ff463e3
sockets: 1
vga: virtio
virtio0: ztosh10:vm-127-disk-0,backup=0,cache=writeback,discard=on,iothread=1,size=32G
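
For reference, this kind of configuration dump can be pulled straight from the Proxmox host (a sketch, assuming VM ID 127 as in the config above):
Code:
# Print the current VM configuration
qm config 127
# The same data lives in the cluster filesystem
cat /etc/pve/qemu-server/127.conf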
 
Boot to "rescue" kernel, and use dracut to regenerate initramfs for OL VM.
Thank you very much! This fixed the issue and the VM could then boot with the VirtIO SCSI controller.

As somewhat of a Linux noob, I did have to Google this but wanted to share the procedure for other noobs like me.

1. Mount the Oracle Linux installation .iso file to the Proxmox VM.
2. Change the boot order of the VM so that it boots from the .iso.
3. Select the Troubleshooting > Boot to Recovery mode option.
4. In Recovery mode, enter 1 for Continue, then press ENTER to get a shell.
5. Enter chroot /mnt/sysimage
6. Enter dracut --regenerate-all -f && grub2-mkconfig -o /boot/grub2/grub.cfg (takes a few minutes to complete and should output "done" if successful; the commands are collected in the sketch after this list).
7. Shut down the VM and change the boot order back to the OS drive. It should now boot as expected!
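
For convenience, here are the same rescue-shell steps in one block (a sketch; the paths match the stock OL9 rescue environment, where the installed system is mounted at /mnt/sysimage):
Code:
# From the rescue shell of the OL9 install ISO:
chroot /mnt/sysimage                      # switch into the installed system
dracut --regenerate-all -f                # rebuild the initramfs for all installed kernels (pulls in the virtio modules)
grub2-mkconfig -o /boot/grub2/grub.cfg    # refresh the GRUB configuration
exit                                      # leave the chroot, then shut the VM down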

Thanks for providing this alternate option as well: adding a dummy VirtIO disk similar to the Windows procedure, then running dracut -f and changing the real disk to VirtIO Block.
Thanks also for the explanation that it is just the initramfs that is missing those modules, so the initramfs always needs to be regenerated somehow - that made it clearer to me that it's not a problem with the Linux kernel/OS but rather with the initramfs. Proxmox will surely increase my Linux knowledge, especially with the help of everyone here on the forums!

Dumb question from my side: what could be the cause in our environment for the VM not booting with the VirtIO SCSI controller after migrating, without regenerating the initramfs first? Or is it expected in some cases that this needs to be done after an ESXi > Proxmox migration? I did test a barebones Debian 12 migration and it worked with VirtIO SCSI by default, without having to regenerate the initramfs.
 

Attachments

  • regenerate initramfs.png (28.9 KB)
I'm not sure; I had some Linux VMs that I migrated that worked with virtio directly, without regenerating the initramfs, but I regenerated it anyway.
Even some CentOS ones booted without me needing to do anything.
I think the answer depends on how the hardware was configured on the ESXi side beforehand, but I had no interest in finding that out at the time I migrated. I just always regenerated the initramfs and made the Linux VMs work with virtio everything without thinking about it.
And it always worked somehow, one way or another xD
But I never actually needed to boot any ISO to make the Linux VMs work; I just changed the VM settings in Proxmox to everything possible (controller and drive IDE/SATA...) and at some point the VM booted, and once it booted I regenerated xD
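
If you want to make sure the drivers end up in the initramfs before (or right after) migrating, a minimal sketch for a dracut-based guest would be forcing them into every initramfs build (the file name here is arbitrary):
Code:
# Inside the guest: always include the virtio storage/network drivers in the initramfs
cat > /etc/dracut.conf.d/virtio.conf <<'EOF'
add_drivers+=" virtio_blk virtio_scsi virtio_pci virtio_net "
EOF
dracut --regenerate-all -f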
I remember I had to set one VM to i440fx instead of q35 to make it boot, but that's a rare case.

Some very old VMs with very old kernels couldn't boot with cpu=host, so I used x86-64-v4 instead.
It's a Genoa 9374F server, so on older CPUs you should use x86-64-v3 or even x86-64-v2-AES instead.
That's for all VMs below kernel 5; I had some with kernel 2.6/4 xD
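
For reference, the CPU type can be changed per VM on the host like this (a sketch, assuming VM ID 127):
Code:
# Use a baseline CPU model instead of "host" for guests with old kernels
qm set 127 --cpu x86-64-v2-AES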

Cheers
 
