PCIe passthrough problems on HPE MicroServer Gen10

herrJones

Hi all,

I have a problem passing my PCIe DVB-S card through to a VM.
When I boot the VM, the host's network connections are disabled, and the only option I have is to reboot the server (an HPE MicroServer Gen10 with an X3421 APU and 16 GB RAM).

To set this up, I followed the guidelines in the PCI passthrough wiki article.
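For reference, the host-side part of that article boils down to roughly this (the IOMMU flags are the ones visible in my kernel command line below; the vfio module list is just the standard one from the wiki, nothing specific to this box):
Code:
# /etc/default/grub: enable the AMD IOMMU, then run update-grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"

# /etc/modules: load the vfio modules at boot, then run update-initramfs -u
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd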

Output of dmesg | grep -e AMD -e amd:
Code:
[    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-4.15.17-1-pve root=/dev/mapper/pve-root ro quiet amd_iommu=on iommu=pt
[    0.000000]   AMD AuthenticAMD
[    0.000000] RAMDISK: [mem 0x33ab3000-0x35d50fff]
[    0.000000] ACPI: ASF! 0x00000000DD5B5080 0000D6 (v32 AMD    SB700ASF 00000001 TFSM 000F4240)
[    0.000000] ACPI: IVRS 0x00000000DD5B51A0 0000D0 (v02 AMD    AGESA    00000001 AMD  00000000)
[    0.000000] ACPI: SSDT 0x00000000DD5B5270 000854 (v01 AMD    AGESA    00000001 AMD  00000001)
[    0.000000] ACPI: SSDT 0x00000000DD5B5AC8 00888F (v02 AMD    AGESA    00000002 MSFT 04000000)
[    0.000000] ACPI: CRAT 0x00000000DD5BE358 000550 (v01 AMD    AGESA    00000001 AMD  00000001)
[    0.000000] ACPI: SSDT 0x00000000DD5BE8A8 001492 (v01 AMD    CPMDFIGP 00000001 INTL 20120913)
[    0.000000] ACPI: SSDT 0x00000000DD5BFD40 00165E (v01 AMD    CPMCMN   00000001 INTL 20120913)
[    0.000000] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-4.15.17-1-pve root=/dev/mapper/pve-root ro quiet amd_iommu=on iommu=pt
[    0.046222] Spectre V2 : Mitigation: Full AMD retpoline
[    0.060000] smpboot: CPU0: AMD Opteron(tm) X3421 APU (family: 0x15, model: 0x60, stepping: 0x1)
[    0.060000] Performance Events: Fam15h core perfctr, AMD PMU driver.
[    1.421279] AMD-Vi: IOMMU performance counters supported
[    1.424145] AMD-Vi: Found IOMMU at 0000:00:00.2 cap 0x40
[    1.424147] AMD-Vi: Extended features (0x37ef22294ada):
[    1.424156] AMD-Vi: Interrupt remapping enabled
[    1.424156] AMD-Vi: virtual APIC enabled
[    1.424457] AMD-Vi: Lazy IO/TLB flushing enabled
[    1.424558] amd_uncore: AMD NB counters detected
[    1.424816] perf: AMD IBS detected (0x000007ff)
[    1.424825] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).
[    3.584263] AMD0020:00: ttyS4 at MMIO 0xfedc6000 (irq = 10, base_baud = 3000000) is a 16550A
[    3.663014] [drm] amdgpu kernel modesetting enabled.
[    4.133859] AMD IOMMUv2 driver by Joerg Roedel <jroedel@suse.de>

Config of VM 102:
Code:
usb0: host=2040:8268
hostpci0: host=02:00,pcie=1
agent: 1
autostart: 0
balloon: 512
bootdisk: scsi0
cores: 2
ide2: local:iso/install-amd64-minimal-20180116T214503Z.iso,media=cdrom,size=303M
machine: q35
memory: 4096
name: htsGentoo
net0: virtio=8A:3A:AC:B2:8A:40,bridge=vmbr0
numa: 0
onboot: 0
ostype: l26
scsi0: local-lvm:vm-100-disk-1,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=70a9fbcb-8fb8-4f78-81c2-4f485b12d8c8
sockets: 1

Relevant output from lspci -v -n:
Code:
02:00.0 0400: 18c3:0720 (rev 01)
        Subsystem: 18c3:dd00
        Physical Slot: 2
        Flags: bus master, fast devsel, latency 0, IRQ 5
        Memory at fe810000 (32-bit, non-prefetchable) [size=64K]
        Memory at fe800000 (64-bit, non-prefetchable) [size=64K]
        Capabilities: [40] Power Management version 2
        Capabilities: [48] MSI: Enable- Count=1/1 Maskable- 64bit+
        Capabilities: [58] Express Endpoint, MSI 00
        Capabilities: [100] Device Serial Number 00-00-00-00-00-00-00-00
        Capabilities: [400] Virtual Channel
        Kernel driver in use: vfio-pci
        Kernel modules: ngene
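(Side note: the card is already bound to vfio-pci here. Per the wiki, the usual way to make that binding stick at boot instead of the ngene driver is something like the snippet below, using the vendor:device IDs from the lspci -n output above; whether that is even needed with Proxmox doing its own binding at VM start, I am not sure.)
Code:
# /etc/modprobe.d/vfio.conf: claim the DVB card (18c3:0720) for vfio-pci at boot
# and make sure vfio-pci is loaded before ngene (run update-initramfs -u afterwards)
options vfio-pci ids=18c3:0720
softdep ngene pre: vfio-pci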

When booting the VM, all network connections are disabled (the web GUI and SSH sessions suddenly become unreachable).
From the kernel log:
Code:
May 22 22:23:36 pve qm[31267]: <root@pam> starting task UPID:pve:00007A24:000E5968:5B047C48:qmstart:102:root@pam:
May 22 22:23:37 pve kernel: [ 9403.865997] vmbr0: port 1(enp3s0f0) entered disabled state
May 22 22:23:37 pve kernel: [ 9403.866190] device enp3s0f0 left promiscuous mode
May 22 22:23:37 pve kernel: [ 9403.866198] vmbr0: port 1(enp3s0f0) entered disabled state
May 22 22:23:37 pve kernel: [ 9403.930155] ata9.00: disabled
May 22 22:23:38 pve kernel: [ 9404.998106] device tap102i0 entered promiscuous mode
May 22 22:23:38 pve kernel: [ 9405.024104] vmbr0: port 1(tap102i0) entered blocking state
May 22 22:23:38 pve kernel: [ 9405.024108] vmbr0: port 1(tap102i0) entered disabled state
May 22 22:23:38 pve kernel: [ 9405.024290] vmbr0: port 1(tap102i0) entered blocking state
May 22 22:23:38 pve kernel: [ 9405.024292] vmbr0: port 1(tap102i0) entered forwarding state
May 22 22:23:43 pve qm[31267]: <root@pam> end task UPID:pve:00007A24:000E5968:5B047C48:qmstart:102:root@pam: OK

From the syslog:
Code:
May 22 22:17:22 pve pveproxy[2076]: starting 1 worker(s)
May 22 22:17:22 pve pveproxy[2076]: worker 30615 started
May 22 22:18:00 pve systemd[1]: Starting Proxmox VE replication runner...
May 22 22:18:00 pve systemd[1]: Started Proxmox VE replication runner.
May 22 22:19:00 pve systemd[1]: Starting Proxmox VE replication runner...
May 22 22:19:00 pve systemd[1]: Started Proxmox VE replication runner.
May 22 22:20:00 pve systemd[1]: Starting Proxmox VE replication runner...
May 22 22:20:00 pve systemd[1]: Started Proxmox VE replication runner.
May 22 22:21:00 pve systemd[1]: Starting Proxmox VE replication runner...
May 22 22:21:00 pve systemd[1]: Started Proxmox VE replication runner.
May 22 22:21:30 pve systemd[1]: Started Session 6 of user root.
May 22 22:21:32 pve systemd[1]: Started Getty on tty2.
May 22 22:21:42 pve systemd[1]: Started Session 7 of user root.
May 22 22:21:45 pve systemd[1]: Started Getty on tty3.
May 22 22:21:55 pve systemd[1]: Started Session 8 of user root.
May 22 22:22:00 pve systemd[1]: Starting Proxmox VE replication runner...
May 22 22:22:00 pve systemd[1]: Started Proxmox VE replication runner.
May 22 22:22:29 pve systemd[1]: Started Getty on tty4.
May 22 22:23:00 pve systemd[1]: Starting Proxmox VE replication runner...
May 22 22:23:00 pve systemd[1]: Started Proxmox VE replication runner.
May 22 22:23:36 pve qm[31267]: <root@pam> starting task UPID:pve:00007A24:000E5968:5B047C48:qmstart:102:root@pam:
May 22 22:23:36 pve qm[31268]: start VM 102: UPID:pve:00007A24:000E5968:5B047C48:qmstart:102:root@pam:
May 22 22:23:37 pve kernel: [ 9403.865997] vmbr0: port 1(enp3s0f0) entered disabled state
May 22 22:23:37 pve kernel: [ 9403.866190] device enp3s0f0 left promiscuous mode
May 22 22:23:37 pve kernel: [ 9403.866198] vmbr0: port 1(enp3s0f0) entered disabled state
May 22 22:23:37 pve kernel: [ 9403.930155] ata9.00: disabled
May 22 22:23:37 pve systemd[1]: Started 102.scope.
May 22 22:23:37 pve systemd-udevd[31309]: Could not generate persistent MAC address for tap102i0: No such file or directory
May 22 22:23:38 pve kernel: [ 9404.998106] device tap102i0 entered promiscuous mode
May 22 22:23:38 pve kernel: [ 9405.024104] vmbr0: port 1(tap102i0) entered blocking state
May 22 22:23:38 pve kernel: [ 9405.024108] vmbr0: port 1(tap102i0) entered disabled state
May 22 22:23:38 pve kernel: [ 9405.024290] vmbr0: port 1(tap102i0) entered blocking state
May 22 22:23:38 pve kernel: [ 9405.024292] vmbr0: port 1(tap102i0) entered forwarding state
May 22 22:23:43 pve qm[31267]: <root@pam> end task UPID:pve:00007A24:000E5968:5B047C48:qmstart:102:root@pam: OK
May 22 22:24:00 pve systemd[1]: Starting Proxmox VE replication runner...
May 22 22:24:00 pve systemd[1]: Started Proxmox VE replication runner.
May 22 22:25:00 pve systemd[1]: Starting Proxmox VE replication runner...
May 22 22:25:00 pve systemd[1]: Started Proxmox VE replication runner.
May 22 22:26:00 pve systemd[1]: Starting Proxmox VE replication runner...
May 22 22:26:00 pve systemd[1]: Started Proxmox VE replication runner.
May 22 22:27:00 pve systemd[1]: Starting Proxmox VE replication runner...
May 22 22:27:00 pve systemd[1]: Started Proxmox VE replication runner.

It's a standalone node, so replication is probably not needed (in fact, I have no idea).

Any suggestions?

Thanks,

Jan
 
Wild guess (I haven't used PCI passthrough on Proxmox yet):
can you try

hostpci0: host=02:00.0,pcie=1

instead of 02:00?

Perhaps 02:00 gives more devices to the VM than just the DVB card?
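To see what actually gets handed over together, it may also be worth dumping the IOMMU groups; if the DVB card shares a group with the NICs or the SATA controller, that alone would explain the network (and ata9) dropping out. Something along these lines (plain sysfs, nothing Proxmox-specific):
Code:
#!/bin/bash
# print every IOMMU group and the devices it contains
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        lspci -nns "${d##*/}"
    done
done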
 
Hi all,

Still, whenever I try to boot my VM with these parameters, the host's network stops working entirely, so I have to go to the console in the attic.

If anyone has a suggestion for getting the network back up without rebooting the server, that alone would already save heaps of time. It may be a MicroServer, but it takes 'normal' server time to boot :-)
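Would something along these lines even be safe to try once the VM is shut down, or is a reboot really the only option? (Completely untested on my side; 0000:03:00.0 is just the address of my first NIC, and the driver juggling is guesswork.)
Code:
# hand the NIC back to the host after the VM that grabbed its group has stopped
echo 0000:03:00.0 > /sys/bus/pci/drivers/vfio-pci/unbind   # only if vfio-pci holds it now
echo 0000:03:00.0 > /sys/bus/pci/drivers_probe             # let the normal driver reclaim it
systemctl restart networking                               # bring vmbr0 back up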

-- Jan
 
I know IOMMU is reported found, but is it actually enabled in the BIOS?
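A quick way to double-check from the running host (standard sysfs, nothing distro-specific); if the first command prints nothing, the IOMMU isn't actually active, no matter what the ACPI tables advertise:
Code:
find /sys/kernel/iommu_groups/ -mindepth 1 -maxdepth 1 -type d   # one directory per IOMMU group
dmesg | grep -i 'AMD-Vi'                                         # should report the IOMMU and its features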
 
After starting a VM with passthrough, it seems I lose my physical network adapters. Output of ip link before starting the VM:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp3s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP mode DEFAULT group default qlen 1000
link/ether 98:f2:b3:e6:3c:f4 brd ff:ff:ff:ff:ff:ff
3: enp3s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr1 state UP mode DEFAULT group default qlen 1000
link/ether 98:f2:b3:e6:3c:f5 brd ff:ff:ff:ff:ff:ff
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 98:f2:b3:e6:3c:f4 brd ff:ff:ff:ff:ff:ff
5: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 98:f2:b3:e6:3c:f5 brd ff:ff:ff:ff:ff:ff

and after starting it:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
4: vmbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
5: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 5e:11:88:05:a2:84 brd ff:ff:ff:ff:ff:ff
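I guess the next thing to check, right after starting the VM, is which driver owns the NICs at that point (03:00 is where enp3s0f0/enp3s0f1 sit on this box); if they turn up bound to vfio-pci, that would confirm they are being dragged along into the passthrough:
Code:
lspci -nnk -s 03:00    # the 'Kernel driver in use:' lines show who holds the two ports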
 
Hello,

Sorry to bring this old thread back. I've recently bought an HPE MicroServer Gen10 and I'm trying to pass the monitor, keyboard and mouse through to a VM so I can work in it directly, with some other VMs and containers running in the background. But after following the steps here, https://pve.proxmox.com/wiki/Pci_passthrough, like herrJones mentioned, my network adapters go down after starting the VM and there's really nothing I can do about it.

Did anyone manage to get this working?
I'm trying to use the onboard GPU, so I do not have any GPU in a PCIe slot.
Do I need an additional PCIe GPU?

Thank you
 
If you are trying to run the MicroServer as a Windows desktop with virtual machines/containers, you would probably find it easier to install Windows and run Hyper-V for VMs and Docker for Windows for containers.
 
Hi @bobmc, thank you very much for your fast response.
I'm trying to run the MicroServer as a Debian 11 desktop plus virtual machines/containers.
I do not plan to run any Windows VMs.
 
I've not done it myself, so take this as just a suggestion:

Install the Debian desktop and then install Proxmox on top of it, as per the instructions here:

https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye
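Looking at that article, the core of it is roughly the steps below; double-check the repository and key names against the wiki (and note it also wants the hostname resolvable via /etc/hosts and the Proxmox kernel installed first), since the details change between releases:
Code:
# add the Proxmox VE no-subscription repository and its signing key (Bullseye names)
echo "deb [arch=amd64] http://download.proxmox.com/debian/pve bullseye pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-install-repo.list
wget https://enterprise.proxmox.com/debian/proxmox-release-bullseye.gpg \
    -O /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg
apt update && apt full-upgrade
# install the Proxmox VE packages on top of the running Debian
apt install proxmox-ve postfix open-iscsi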
 
Hi @bobmc, I was having a look at this and I will give it a go.
I'm just a bit confused: if I install Debian 11 and then Proxmox on top, does that mean I'm replacing Debian's desktop functionality and turning it into a hypervisor only?
But I will find that out tonight.
I'll get back a bit later, hopefully, after the installation.

Thank you
 
I would like to confirm that everything seems to be working fine.
I'm able to boot Debian as the primary desktop, and Proxmox is running in the background without interfering with the desktop system.

Thank you very much for the suggestion. Now I just need to take care of configuring the networking and the rest of the stuff, but so far so good.