I noticed this patch was also removed from the 3.10 kernel branch:
pve-kernel-3.10.0 (3.10.0-40) unstable; urgency=low
* remove override_for_missing_acs_capabilities.patch
What is the motivation behind this removal? This change breaks PCIe passthrough for me.
Well, in my setup the pci_stub_ids in /etc/initramfs-tools/modules are NOT needed, so I think you can safely remove them; maybe they interfere with the rest.
It should work, see my post below.
Also, looking closely at your GRUB_CMDLINE_LINUX_DEFAULT, I noticed that scsi_mod doesn't have any argument, so maybe the pcie_acs_override doesn't get parsed correctly. Maybe you have to change the scsi_mod entry to: scsi_mod.scan=sync
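For reference, the whole line in /etc/default/grub would then look roughly like this (just a sketch; I'm assuming an Intel board with intel_iommu=on, adjust it to whatever other options you already have):
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on pcie_acs_override=downstream scsi_mod.scan=sync"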
Then run update-grub and reboot.
Good news! I was finally able to test the new kernel that spirit and dietmar uploaded to the repositories, and it works!
Motherboard: Supermicro X10SL7-F
# uname -r
4.2.6-1-pve
# dpkg -l |grep pve-kernel-4
ii pve-kernel-4.2.6-1-pve 4.2.6-33 amd64 The Proxmox...
Well, I have the exact same motherboard in a setup here, and the funny thing is, PCIe passthrough (IOMMU splitting) actually works with the PVE 3.4 kernel (kernel 3.10) using pcie_acs_override=downstream. So the hardware IS capable of splitting the IOMMU groups.
Sorry to have caused some confusion; here's why I (prematurely) concluded this:
When I run PVE 3.4 WITHOUT pcie_acs_override=downstream, the IOMMU groups are NOT split.
When I run PVE 3.4 WITH pcie_acs_override=downstream, the IOMMU groups are split.
When I run PVE 4.1 WITH or WITHOUT...
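For anyone who wants to compare the kernels themselves, the grouping can be listed on each boot with, for example:
# find /sys/kernel/iommu_groups/ -type l
With the override active you should see the devices spread over noticeably more groups.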
I have the exact same problem; it seems the 4.2 kernel used in Proxmox doesn't include the pcie_acs_override patches.
In PVE 3.4 (kernel 3.10) the pcie_acs_override option DOES work as expected.
I'm willing to (try to) implement this feature if someone points me in the right direction: which files to touch and a rough indication of what has to be done.
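My first guess (just a sketch; I'm assuming the 3.10 kernel repo is named pve-kernel-3.10.0.git like the package, and that the 4.x build applies its patches from the Makefile; as far as I know the patch itself touches drivers/pci/quirks.c to add the pcie_acs_override boot parameter) would be to pull the old patch out of the git history and rebase it onto the 4.2 sources:
# git clone git://git.proxmox.com/git/pve-kernel-3.10.0.git
# cd pve-kernel-3.10.0
# git log --all --oneline -- override_for_missing_acs_capabilities.patch
# git show <commit-before-removal>:override_for_missing_acs_capabilities.patch > acs.patch
Then try to rebase acs.patch onto the 4.2 sources in the pve-kernel repo and add it to the patch list there.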
Hi all,
I am trying to get PCIe passthrough of a single NIC working on PVE 3.4. Passthrough is working, but it always passes through 2 NICs instead of one.
I use a Supermicro mainboard with 4 Intel gigabit ports and would like to pass through one of them.
lspci:
04:00.0 Ethernet controller: Intel...
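In case it helps, the second port probably sits in the same IOMMU group, and (if I remember correctly) specifying the device without a function number in the VM config passes through all of its functions. A quick way to check (just a sketch, adjust the address and VMID to yours):
# ls /sys/bus/pci/devices/0000:04:00.0/iommu_group/devices/
# grep hostpci /etc/pve/qemu-server/<vmid>.conf
If the config says hostpci0: 04:00, try hostpci0: 04:00.0 to pass through only that single function.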
Is such a thing even possible with KVM? By such a thing I mean a feature comparable to VMware Fault Tolerance (where a shadow VM runs in memory sync on another host).
Kind regards,
Caspar
Hi, I just registered on masteringproxmox.com and the registration control question should be updated.
The question "What is the current version of Proxmox VE?" currently accepts 3.3 as the correct answer, but it should be 3.4 ;-)
Tom,
I tried with a fresh PVE 3.3 install and hotplug works fine.
I tried with a fresh PVE 3.4 install and hotplug doesn't work.
What version of Ceph are you using? I'm using the latest Firefly (0.80.8).
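In case it helps to compare: running ceph --version on a node prints the exact installed release.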
Hi Spirit,
Thank you for implementing this great feature! Hopefully you will be able to test this with HA and submit a patch so that all HA-enabled VMs can be migrated too. Thanks again!
I noticed that in PVE 3.4, live migrating a VM fails when there is a SPICE client (in my case Remote Viewer on Ubuntu) connected to the VM.
Live migration works if the console is closed or when using the noVNC console; it only fails with the SPICE console open.
Here's the log including the error...
Yes, absolutely sure!
With commit https://git.proxmox.com/?p=qemu-server.git;a=commitdiff;h=8ead5ec7dc342e991f2e8cef3e6b6afcba549250 applied, it gives the error message.
After reverting the commit it works.
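For anyone who wants to reproduce the test, this is roughly what I did (a sketch; I'm assuming the usual Proxmox git URL and the Makefile's deb target):
# git clone git://git.proxmox.com/git/qemu-server.git
# cd qemu-server
# git revert 8ead5ec7dc342e991f2e8cef3e6b6afcba549250
# make deb
Then install the resulting .deb, restart pvedaemon and retry the migration.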