> Unfortunately that's not working on the MicroServer Gen8. Tried that before recompiling, and I was unable to pass through.
It's working for me, but I'm not using ZFS...
Oh great! Using it for now mainly for testing, mainly because it's a derivative of Debian/Ubuntu. Most of the included packages are things I'd be using anyway, and it's polished pretty damn well. Also, because the patch will be portable to pretty much anything based on Debian Buster without needing to recompile, I should be fine switching between pretty much whatever I want.
> It's working for me, but I'm not using ZFS...
On a 5.4 kernel?
> Oh great! If I were to run Pop!_OS, Debian, or Ubuntu in a VM, would I be able to use that to compile a Proxmox kernel? (As I type that it sounds like an obvious NO lol.) If not, can I run Proxmox in a Proxmox VM for the purpose of this particular task only? I have my VMs on a separate SSD which has more space and is less likely to run out of disk space when compiling...
Sorry, not sure I understand your question.
> On a 5.4 kernel?
Pretty sure he meant to tag the other Alex that popped into this thread within the last week xD
I don't think there's a link between a passthrough error and zfs
> I just tried the 5.4 kernel with the working 5.3 passthrough configuration and no luck, same error. If I boot with the 5.3 kernel everything's OK. Guess I'll be stuck with a 5.3 kernel, and that's not fun.
Did you try to dump the ROM?
> On a 5.4 kernel?
No, sorry, not on 5.4.
I have tried that already; it doesn't work. I think the issue is that what has changed/moved around since 5.3 has not been documented by the Proxmox team, other than saying some drivers are now built into the kernel. There must be a way of getting it working... Do you have a thread where I can learn how to dump the ROM?
Note that on the 5.3 kernel the passthrough works only if I uncheck rombar on the VM. The LSI card I pass through has already been initialized by the host during its BIOS POST.
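For reference, a minimal sketch of dumping a PCI device's option ROM via sysfs, plus the command-line equivalent of unchecking "ROM-Bar" on a Proxmox VM. The PCI address 0000:07:00.0 and VM ID 100 are placeholders, not values taken from this thread.
[CODE]
# Dump a PCI device's option ROM via sysfs (run as root on the host).
# 0000:07:00.0 is a placeholder; find your card's address with `lspci -nn`.
cd /sys/bus/pci/devices/0000:07:00.0
echo 1 > rom              # make the ROM readable
cat rom > /root/card.rom  # save a copy of the option ROM
echo 0 > rom              # lock it again

# Alternatively, skip the ROM entirely by disabling the ROM BAR for the
# passed-through device; this is what unchecking "ROM-Bar" in the GUI does.
qm set 100 -hostpci0 0000:07:00.0,rombar=0
[/CODE]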
> Sorry, not sure I understand your question.
Yes, I meant compiling a patched kernel in a VM to then copy over the .deb files to my main Proxmox host to install/use. Does this RMRR patch need to be created/compiled in Proxmox, or can I use any Debian-based distro to do it?
If the question is "can you compile a kernel from a VM", then yes, absolutely. A kernel is just a very advanced/complex program. If you wanted to, you could compile the same kernel IBM uses for PowerPC from a Raspberry Pi, etc.
If you're asking whether you can then USE the patch to pass things through from within a VM (for nested virtualization), then no. You would apply the patch to the hypervisor (the OS on bare metal), then pass the hardware in question through to the VM. The VM would then be able to do things as needed. If you wanted to do nested virtualization, you would then want the host to allow the guest to use virtualization instructions. The wiki is your friend: https://pve.proxmox.com/wiki/Nested_Virtualization
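To make the "compile it in a VM" idea concrete, here is a rough sketch of building the Proxmox kernel packages inside a Debian/Ubuntu VM. The dependency list, the patch file name, and the patches/kernel/ location are assumptions based on the pve-kernel repository layout; check the branch and README you actually build from.
[CODE]
# Inside the build VM (Debian/Ubuntu):
apt-get update
apt-get install -y git build-essential devscripts fakeroot

git clone git://git.proxmox.com/git/pve-kernel.git
cd pve-kernel

# Drop the RMRR patch into the kernel patch queue so the build applies it
# (file name and destination are assumptions).
cp /path/to/relax-rmrr.patch patches/kernel/

# Build; the pve-kernel-*.deb packages end up in the working directory.
make
[/CODE]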
> I have tried that already; it doesn't work. I think the issue is that what has changed/moved around since 5.3 has not been documented by the Proxmox team, other than saying some drivers are now built into the kernel. There must be a way of getting it working...
I don't think tweaking the kernel by disabling part of it is a good long-term solution.
I think I'll stick with ESXi for the moment, as it does not have this issue, until I can afford to change my hardware.
> Yes, I meant compiling a patched kernel in a VM to then copy over the .deb files to my main Proxmox host to install/use. Does this RMRR patch need to be created/compiled in Proxmox or can I use any Debian-based distro to do it?
This patch works on the Debian IOMMU driver; it can be created on any Debian-based system (although with a different tutorial).
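As a small illustration of that workflow, a sketch of moving kernel packages built in a VM over to the Proxmox host and installing them there; the host address and .deb glob are placeholders.
[CODE]
# From the build VM: push the freshly built packages to the Proxmox host.
scp pve-kernel-*.deb root@192.168.1.10:/root/

# On the Proxmox host: install them and reboot into the new kernel.
dpkg -i /root/pve-kernel-*.deb
reboot
[/CODE]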
> It's actually a terrible solution, as described in the QEMU bug report you linked (https://bugs.launchpad.net/qemu/+bug/1869006/comments/18). We can only hope that iLO doesn't tinker with these regions and just reads them... but if it writes to them... well, that may lead to a catastrophe if the checks are worked around.
This bug report is pretty scary for 5.4. So far I haven't seen instability, but I'll monitor for issues.
> This bug report is pretty scary for 5.4. So far I haven't seen instability, but I'll monitor for issues.
The thing is, it was always this way. I looked around the changes in 5.4, and they essentially just added checks to more places than just the IOMMU API.
I tested the packages thoroughly and prepared a complete rundown of the issue with all the possible fixes & technical reasons.
https://github.com/kiler129/relax-intel-rmrr
Anyone interested can either download precompiled debs or build them from sources. After installation flipping a kernel switch will activate the patch.
Enjoy. Open source FTW
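For anyone wondering what "flipping a kernel switch" looks like in practice, here is a minimal sketch for a GRUB-booted Proxmox host. The relax_rmrr option name is quoted from memory and should be treated as an assumption; the repository's README has the exact flag.
[CODE]
# Add the option to the kernel command line (GRUB-based boot):
nano /etc/default/grub
#   e.g. GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on,relax_rmrr"
update-grub
reboot

# Hosts booting via systemd-boot (e.g. ZFS-on-root on UEFI) keep the command
# line in /etc/kernel/cmdline instead; edit it, then run:
#   pve-efiboot-tool refresh
[/CODE]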
So for the non-technical (myself), are you saying you have passthrough working on the latest kernels?
Yes, it is running Proxmox 6.2-4 with Linux v5.4.65-1. For the ultimate test, I was actually stress-testing the solution with the worst-case scenario: passthrough of a RAID/HBA soldered onto the motherboard of my HPE MicroServer Gen8, which is also tied into one IOMMU group with some LPC controller (it handles RS232 and such):
[Attachment 20733]
Works perfectly after installation of the debs + adding the boot option. For a normal user it's literally a 5-minute fix.
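To check this kind of grouping on your own host, a common generic snippet (not specific to this patch) walks the sysfs iommu_groups tree and lists which devices share a group:
[CODE]
# List every IOMMU group and the PCI devices it contains. Useful for seeing
# which devices share a group with the card you want to pass through.
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for dev in "$group"/devices/*; do
        lspci -nns "${dev##*/}"
    done
done
[/CODE]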