Proxmox 4.0, PCI passthrough broken in several ways

twistero

Member
Oct 27, 2015
I've been using PCI passthrough on a Proxmox 3.4 host, and it has always worked flawlessly. Tried to upgrade to Proxmox 4.0, and there were numerous problems, so I rolled back and installed a fresh 4.0 separately to test.

Code:
~# pveversion -v
proxmox-ve: 4.0-16 (running kernel: 4.2.2-1-pve)
pve-manager: 4.0-50 (running version: 4.0-50/d3a6b7e5)
pve-kernel-4.2.2-1-pve: 4.2.2-16
lvm2: 2.02.116-pve1
corosync-pve: 2.3.5-1
libqb0: 0.17.2-1
pve-cluster: 4.0-23
qemu-server: 4.0-31
pve-firmware: 1.1-7
libpve-common-perl: 4.0-32
libpve-access-control: 4.0-9
libpve-storage-perl: 4.0-27
pve-libspice-server1: 0.12.5-1
vncterm: 1.2-1
pve-qemu-kvm: 2.4-10
pve-container: 1.0-10
pve-firewall: 2.0-12
pve-ha-manager: 1.0-10
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.3-1
lxcfs: 0.9-pve2
cgmanager: 0.37-pve2
criu: 1.6.0-1
zfsutils: 0.6.5-pve4~jessie

Symptom 1: guests boot extremely slow
On 4.0, if a guest uses PCI passthrough, the boot process becomes very slow. A pfSense VM that boots in under 2 minutes without passthrough takes more than 3 minutes just to load the kernel when passthrough is enabled. Once the kernel is loaded, the rest of the boot process proceeds normally.
Similar symptoms occur when I pass through a SAS HBA card to my FreeNAS VM.

Symptom 2: IOMMU group includes multiple PCI devices
On 3.4 I passed through one port of an Intel Pro/1000 VT quad-port NIC to a FreeNAS VM and used the adjacent port on the host. On 4.0, when I pass through the same port, the adjacent port disappears from the host. It looks like this is because the two ports are in the same IOMMU group (there was no concept of IOMMU groups with the 2.6 kernel).

Code:
# lspci
<snip>
05:00.0 PCI bridge: Integrated Device Technology, Inc. [IDT] PES12N3A PCI Express Switch (rev 0e)
06:02.0 PCI bridge: Integrated Device Technology, Inc. [IDT] PES12N3A PCI Express Switch (rev 0e)
06:04.0 PCI bridge: Integrated Device Technology, Inc. [IDT] PES12N3A PCI Express Switch (rev 0e)
07:00.0 Ethernet controller: Intel Corporation 82575GB Gigabit Network Connection (rev 02)
07:00.1 Ethernet controller: Intel Corporation 82575GB Gigabit Network Connection (rev 02)
08:00.0 Ethernet controller: Intel Corporation 82575GB Gigabit Network Connection (rev 02)
08:00.1 Ethernet controller: Intel Corporation 82575GB Gigabit Network Connection (rev 02)

# find /sys/kernel/iommu_groups/ -type l
<snip>
/sys/kernel/iommu_groups/13/devices/0000:05:00.0
/sys/kernel/iommu_groups/14/devices/0000:06:02.0
/sys/kernel/iommu_groups/14/devices/0000:07:00.0
/sys/kernel/iommu_groups/14/devices/0000:07:00.1
/sys/kernel/iommu_groups/15/devices/0000:06:04.0
/sys/kernel/iommu_groups/15/devices/0000:08:00.0
/sys/kernel/iommu_groups/15/devices/0000:08:00.1
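
The grouping above can be made easier to read by printing each group with the lspci description of every device in it. A minimal sketch (run on the host; assumes pciutils is installed):

```shell
# Print every IOMMU group and the lspci description of each device in it.
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        # Strip the path down to the PCI address and look it up.
        echo "  $(lspci -nns "${d##*/}")"
    done
done
```

All devices in a group can only be assigned to a guest together, which is why the adjacent port vanishes from the host as soon as one port is passed through.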


Discussions online seem to point to the 4.2 kernel as the cause of the slow boot (bbs.archlinux.org/viewtopic.php?id=203240). Running Debian's own 3.16 kernel on the new Proxmox 4.0 installation does indeed fix the slow boot, although the IOMMU group issue remains (the post linked above also refers to a patched kernel that fixes the IOMMU grouping).
 

twistero

OK, so this is indeed a kernel bug; see Bug 107561: bugzilla.kernel.org/show_bug.cgi?id=107561
Hopefully it gets fixed soon.
 
Jan 9, 2012
Hi,

I have exactly the same problem: boot is very slow, and other strange things happen.
For example, without passthrough a Linux VM boots very quickly and without any messages; with passthrough it first hangs in the BIOS, and then messages about the video mode appear (see attached image).
On Proxmox 3 everything worked perfectly.

What is going on here?

Code:
pveversion -v
proxmox-ve: 4.0-21 (running kernel: 4.2.3-2-pve)
pve-manager: 4.0-57 (running version: 4.0-57/cc7c2b53)
pve-kernel-4.2.2-1-pve: 4.2.2-16
pve-kernel-4.2.3-2-pve: 4.2.3-21
lvm2: 2.02.116-pve1
corosync-pve: 2.3.5-1
libqb0: 0.17.2-1
pve-cluster: 4.0-24
qemu-server: 4.0-35
pve-firmware: 1.1-7
libpve-common-perl: 4.0-36
libpve-access-control: 4.0-9
libpve-storage-perl: 4.0-29
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.4-12
pve-container: 1.0-21
pve-firewall: 2.0-13
pve-ha-manager: 1.0-13
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.4-3
lxcfs: 0.10-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve6~jessie
 
Jan 9, 2012
From what I've read, it works with 4.1.x kernels.
Can I install/use a 4.1 kernel in PVE 4, or does that have other drawbacks?

If yes, how can I install it? And which one is the latest?
 

SwampRabbit

New Member
Dec 4, 2015
I can confirm the same issues with PCI passthrough; I noticed the bug when VMs with 8 GB of RAM hung.

I downgraded and everything has been working well so far; I believe I switched to kernel 4.1.3-7. No slowness, and passthrough for USB and a Radeon 270X is working like a champ.

Because the server I did this on was not online, I pulled the kernel package down manually and used dpkg to install it. I think it was under one of the betas for version 4.

You could always use apt-get as well if your server is online: /wiki/Package_repositories

Sorry I can't be more specific; I'm not at the server and don't have my install notes.
 

shawly

Member
Nov 8, 2015
Running proxmox-ve: 4.1-26 (running kernel: 4.2.6-1-pve)

It still has the IOMMU grouping problem: my onboard SAS controller is in the same IOMMU group as my GPU, which means I can't pass through the GPU once the SAS controller has already been passed through.

/sys/kernel/iommu_groups/1/devices/0000:00:01.0 <- PCI bridge - 8086:0c01
/sys/kernel/iommu_groups/1/devices/0000:00:01.1 <- PCI bridge - 8086:0c05
/sys/kernel/iommu_groups/1/devices/0000:01:00.0 <- AMD Radeon HD5450 - 1002:68f9
/sys/kernel/iommu_groups/1/devices/0000:01:00.1 <- AMD Radeon Audio - 1002:aa68
/sys/kernel/iommu_groups/1/devices/0000:02:00.0 <- LSI Logic / Symbios Logic SAS2308 - 1000:0086

Edit: I'd really like to buy a subscription so my posts finally won't be ignored anymore, but if this problem doesn't get fixed soon I probably won't stay with Proxmox, and there is no point in paying 60 bucks if I don't use it anymore.
 

casparsmit

Active Member
Feb 24, 2015
Running proxmox-ve: 4.1-26 (running kernel: 4.2.6-1-pve)

It still has the IOMMU grouping problem: my onboard SAS controller is in the same IOMMU group as my GPU, which means I can't pass through the GPU once the SAS controller has already been passed through.

/sys/kernel/iommu_groups/1/devices/0000:00:01.0 <- PCI bridge - 8086:0c01
/sys/kernel/iommu_groups/1/devices/0000:00:01.1 <- PCI bridge - 8086:0c05
/sys/kernel/iommu_groups/1/devices/0000:01:00.0 <- AMD Radeon HD5450 - 1002:68f9
/sys/kernel/iommu_groups/1/devices/0000:01:00.1 <- AMD Radeon Audio - 1002:aa68
/sys/kernel/iommu_groups/1/devices/0000:02:00.0 <- LSI Logic / Symbios Logic SAS2308 - 1000:0086

I have the exact same problem; it seems the 4.2 kernel used in Proxmox doesn't include the pcie_acs_override patches.

In PVE 3.4 (kernel 3.10), pcie_acs_override DOES work as expected.
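
For reference, the override is a kernel command-line parameter that only has an effect on kernels carrying the out-of-tree ACS override patch. A sketch of how it is typically enabled on a GRUB-based Proxmox host (the cmdline contents here are example values):

```shell
# Put the override (plus the IOMMU switch) on the kernel command line.
# pcie_acs_override=downstream is a no-op on unpatched kernels.
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT=.*/GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on pcie_acs_override=downstream"/' /etc/default/grub
update-grub   # regenerates /boot/grub/grub.cfg
# After a reboot, verify the parameter is active:
grep -o 'pcie_acs_override=[a-z,]*' /proc/cmdline
```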
 

shawly

Member
Nov 8, 2015
I have the exact same problem; it seems the 4.2 kernel used in Proxmox doesn't include the pcie_acs_override patches.

Oh really? I thought it did; that explains a lot! I'm currently trying to compile the latest kernel myself, though I'm running into many dependency errors. But if I can get it to build and apply the ACS override patch myself, it might work!

Edit: Haven't installed the kernel yet, but you can apply the patch this way, super easy!

Edit2: Well, I knew it was too good to be true. I applied the patch, but it didn't fix the grouping issue...
 

casparsmit

Active Member
Feb 24, 2015
Edit2: Well, I knew it was too good to be true. I applied the patch, but it didn't fix the grouping issue...

Sorry to have caused some confusion; here's why I (prematurely) concluded this:

When I run PVE 3.4 WITHOUT pcie_acs_override=downstream, the IOMMU groups are NOT split.
When I run PVE 3.4 WITH pcie_acs_override=downstream, the IOMMU groups are split.

When I run PVE 4.1 WITH or WITHOUT pcie_acs_override=downstream, the IOMMU groups are NOT split (and the IOMMU grouping is exactly the same as on PVE 3.4 WITHOUT pcie_acs_override).

This is why I thought the patches were not included; again, sorry for the confusion.
 

shawly

Member
Nov 8, 2015
Sorry to have caused some confusion; here's why I (prematurely) concluded this:

When I run PVE 3.4 WITHOUT pcie_acs_override=downstream, the IOMMU groups are NOT split.
When I run PVE 3.4 WITH pcie_acs_override=downstream, the IOMMU groups are split.

When I run PVE 4.1 WITH or WITHOUT pcie_acs_override=downstream, the IOMMU groups are NOT split (and the IOMMU grouping is exactly the same as on PVE 3.4 WITHOUT pcie_acs_override).

This is why I thought the patches were not included; again, sorry for the confusion.

No problem, that's a completely valid point and it was worth a try! In fact, the Makefile had the ACS override patch excluded, so it could have been the solution.
It seems there is a problem with the 4.2 kernel. I just found this: the guy in post #2368 has the exact same motherboard and the exact same problem as me. He fixed it with the acs_override patch, but he was on kernel 3.15. So it seems that the Ubuntu Wily kernel Proxmox uses has issues with the ACS override patch.
I'm going to set up a temporary Arch install with the linux-vfio kernel from the AUR and see whether the problem persists.

Edit: Okay, the issue is still there under Arch with LTS kernel 4.1.15 and the ACS override patches...
 

SwampRabbit

New Member
Dec 4, 2015
I am still using pve-kernel 4.1.3 without passthrough issues, at least none I have noticed.
With this kernel I think you still have to specify "driver=vfio" in the guest .conf, though.
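
For anyone unsure where that option goes: a minimal sketch of a guest config line with it (the VM ID and PCI address below are placeholder examples, not taken from the posts in this thread):

```
# /etc/pve/qemu-server/100.conf  -- 100 and 02:00.0 are example values
hostpci0: 02:00.0,driver=vfio
```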

Not sure if the grouping is messed up or not, but I am passing through two USB devices, an SSD on onboard SATA, and a Radeon 270X to a Windows 7 machine to run Steam.

I am running the following off PCI-e slots in this node:
- IBM controller
- Radeon 270X
- Intel Quad NIC
- Broadcom NIC
 
Jan 9, 2012
I have had a very curious problem since switching to Proxmox 4.
One of my VMs is a yaVDR server to which I pass through a DVB-S2 PCIe card.
On Proxmox 3 it ran without any problems.

But when I now start this VM on Proxmox 4, the boot process hangs for ~10 seconds at the message "Booting from Hard Disk...". Then I get two error messages and it hangs again for ~10 seconds (see attached screenshot). After that, the machine boots up and, as far as I can see, works as it should.

In a test without passing through the PCIe device, the boot process takes only a few seconds (SSD datastore), without any interruption and without any error messages, just as before under Proxmox 3.

Any ideas?
 

Attachments

  • boot.JPG
