Kernel 5.4 - Error when attempting to start a VM with passthrough

kriansa

Member
Mar 24, 2020
On the new 5.4 kernel, launching an instance with a PCI passthrough device fails with the following error:

Code:
kvm: -device vfio-pci,host=0000:05:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: VFIO_MAP_DMA: -22
kvm: -device vfio-pci,host=0000:05:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio_dma_map(0x7f48cd667c80, 0x0, 0x80000000, 0x7f4645400000) = -22 (Invalid argument)
kvm: -device vfio-pci,host=0000:05:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio 0000:05:00.0: failed to setup container for group 1: memory listener initialization failed for container: Invalid argument
TASK ERROR: start failed: QEMU exited with code 1
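
For reference, the same rejection should also show up in the kernel log. Something along these lines can surface it (a generic sketch, not specific to my setup):

Code:
# kernel messages from the current boot, filtered for IOMMU / VFIO noise
journalctl -k -b | grep -iE 'dmar|iommu|vfio'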

This is exactly the same scenario that this user is facing.

My hardware specs:
- HP Microserver Gen8
- Processor Xeon E3 1265L v2
- Fujitsu D2607 (LSI SAS 2008) SAS adapter flashed in IT mode

Currently, due to RMRR issues with this HP motherboard, I need to recompile the kernel using the patch that removes the RMRR check.
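
In case it helps anyone with the same board, this is roughly how I check that the adapter is affected by the RMRR restriction (the PCI address is from my setup, adjust to yours):

Code:
# firmware-reported RMRR regions, from the boot log
dmesg | grep -i rmrr
# IOMMU group the SAS adapter ended up in
readlink /sys/bus/pci/devices/0000:05:00.0/iommu_group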

I have a VM passing through the SAS adapter device, and using kernel 5.3 it works fine. However, with the newer 5.4 release, I'm unable to start it.
 
Thanks for opening this as a new thread over here. Can you please also post your VM's config (qm config <vmid>)?

And if you reboot into the 5.3.18 kernel it just works?
 
Sure,

Code:
boot: c
bootdisk: ide0
cores: 8
cpu: host
hostpci0: 05:00.0
ide0: local-lvm:vm-100-disk-0,size=8G
memory: 10240
name: FreeNAS
net0: virtio=CA:1C:4D:69:1F:0E,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: other
scsihw: virtio-scsi-pci
smbios1: uuid=f67b4b42-110a-4c07-9fb5-c101ea37a230
sockets: 1
vmgenid: 5f912b4f-21ea-4eda-b8ed-b53a50662886

If I reboot into 5.3 (as I am right now), it just works.
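
To avoid picking the kernel by hand on every boot, I pin 5.3 as the GRUB default. Rough sketch - the exact menu entry title depends on your installed kernel, so check /boot/grub/grub.cfg first:

Code:
# in /etc/default/grub (entry title below is an example)
GRUB_DEFAULT="Advanced options for Proxmox VE GNU/Linux>Proxmox VE GNU/Linux, with Linux 5.3.18-3-pve"

Then run update-grub and reboot.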
 
Got the same issue here!
No luck so far, so I have to stick with 5.3 for the moment.
Did you find a solution?
 
@alexxedo

Yes, unfortunately you either have to upgrade your hardware or give up on Proxmox with 5.4+ kernels. I even ended up automating the build process for 5.3 so you wouldn't need to recompile the kernel manually every time - but then 5.4 came and I had to give up on Proxmox.

Here's the link if you still wanna check it out: https://github.com/kriansa/pve-kernel-builder
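
The gist of what it automates, very simplified (see the repo for the actual pipeline; the patch filename below is just illustrative):

Code:
# fetch the Proxmox kernel sources
git clone git://git.proxmox.com/git/pve-kernel.git
cd pve-kernel
# add the RMRR patch to the patch series (filename is illustrative)
cp /path/to/remove-rmrr-check.patch patches/kernel/
# build the kernel .deb packages
make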

EDIT: Just to be clear -- this has nothing to do with Proxmox itself, but with the kernel and the buggy hardware we use.
 
On another note: Why did you try to pass through the SAS adapter at all?
If it can be avoided, I'd always refrain from passing through real HW; there are just too many ways it can go wrong, and most of them are HW related. Apart from GPU passthrough, the benefits are often also rather marginal.
 
I used to virtualize an instance of FreeNAS, and passing through a SAS adapter is the only way to give it access to the raw hard drives: ZFS wants direct control of the disks and doesn't work well on top of a virtual disk layer or hardware RAID.

Now I just installed FreeNAS on bare metal and virtualized my appliances with bhyve. Not perfect, but it works.
 
