DELL R710 (4 built-in Ethernet cards) - multiple NIC bonding/passthrough

jaceqp

Well-Known Member
May 28, 2018
Hi there.
Since I'm running 3 VMs on Proxmox, I'm trying to use more than just one bridged ("vmbr'ed") Ethernet device for network traffic.

The DELL R710 has 4x 1 Gigabit NICs. However, Proxmox shows them paired up in the same PCIe/IOMMU groups. So far, forcing passthrough has been a failure.

# lspci | grep Ethernet
01:00.0 Ethernet controller: Broadcom Limited NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
01:00.1 Ethernet controller: Broadcom Limited NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
02:00.0 Ethernet controller: Broadcom Limited NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
02:00.1 Ethernet controller: Broadcom Limited NetXtreme II BCM5709 Gigabit Ethernet (rev 20)


Any ideas on a successful PCIe passthrough?
I get "vfio: failed to set iommu for container: Operation not permitted" and the VM fails to start whenever I try any PCIe passthrough (even when passing through the entire group).
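In case it helps anyone hitting the same error: the usual first check is that the IOMMU is actually enabled on the kernel command line. This is the standard GRUB setup for an Intel host (a sketch; paths differ on systemd-boot installs):

```shell
# /etc/default/grub -- enable the Intel IOMMU (these Xeons are Intel)
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

# then regenerate the grub config and reboot:
#   update-grub
#   reboot

# after reboot, verify the IOMMU came up:
#   dmesg | grep -e DMAR -e IOMMU
```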

On the other hand... bonding the NICs into one interface might also do the job (at least for now).
I'm thinking of using the 1st Broadcom port for Proxmox management and bonding the other 3 for the VMs.
BTW: The network has a single router/gateway with DHCP on it.
It appears I can't define the same gateway on more than one Proxmox host interface. If so, how can I allow internet access for both eno1 (the Proxmox host) and the bond0 interface?
So far I have:
eno1 = static IP proxmox host ethernet interface
bond0 = bond interface with eno2, eno3 and eno4
vmbr0 = static IP bridge with bond0 attached

It seems that even when my VMs have DHCP enabled and get an IP from the DHCP server, I have no internet connection (just the LAN is available/browseable). I've tested connections on a Win2016 VM. Network diagnostics report that the default gateway is inaccessible (or something similar), while pinging LAN devices succeeds. The funny thing is... when I loop a ping to, say, 8.8.8.8, I sometimes get a few packet responses.


/etc/network/interfaces:
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet static
address 192.168.1.193
netmask 255.255.255.0

auto eno2
iface eno2 inet manual

auto eno3
iface eno3 inet manual

auto eno4
iface eno4 inet manual

auto bond0
iface bond0 inet manual
slaves eno2 eno3 eno4
bond_miimon 100
bond_mode balance-rr

auto vmbr0
iface vmbr0 inet static
address 192.168.1.192
netmask 255.255.255.0
gateway 192.168.1.252
bridge_ports bond0
bridge_stp off
bridge_fd 0


So even with the network settings shown above I'm (usually) unable to get any internet on the VMs, no matter whether they use DHCP or a static IP setup. I can ping the LAN, but get no response when pinging the router/gateway.
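A note on the config above: the intermittent 8.8.8.8 replies are consistent with balance-rr, which sprays packets round-robin across all slaves. That only works if the switch is configured for it (e.g. a static EtherChannel/port group); on an unmanaged switch, frames from the same MAC arrive on three different ports and many get dropped or reordered. A bond mode that needs no switch support at all is active-backup; a minimal sketch of just the changed stanza (option spellings per Debian ifupdown/ifenslave, which also accepts the underscore variants used above):

```shell
# /etc/network/interfaces -- bond stanza using active-backup instead of
# balance-rr; only one slave carries traffic at a time, the others are
# hot standbys, so no special switch configuration is required
auto bond0
iface bond0 inet manual
    slaves eno2 eno3 eno4
    bond_miimon 100
    bond_mode active-backup
```

If the switch supports LACP, 802.3ad would be the other option that aggregates bandwidth properly.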

PS:
# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether d4:ae:52:be:63:58 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.193/24 brd 192.168.1.255 scope global eno1
valid_lft forever preferred_lft forever
inet6 fe80::d6ae:52ff:febe:6358/64 scope link
valid_lft forever preferred_lft forever
3: eno2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether d4:ae:52:be:63:5a brd ff:ff:ff:ff:ff:ff
4: eno3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether d4:ae:52:be:63:5a brd ff:ff:ff:ff:ff:ff
5: eno4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether d4:ae:52:be:63:5a brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether d4:ae:52:be:63:5a brd ff:ff:ff:ff:ff:ff
10: tap101i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
link/ether ea:1a:b3:08:43:b4 brd ff:ff:ff:ff:ff:ff
11: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether d4:ae:52:be:63:5a brd ff:ff:ff:ff:ff:ff
inet 192.168.1.192/24 brd 192.168.1.255 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fe80::d6ae:52ff:febe:635a/64 scope link
valid_lft forever preferred_lft forever
 
Hi,
Any ideas on successful PCIE passthrough ?
You must check whether the IOMMU groups are OK and the devices can be passed through.

see https://pve.proxmox.com/wiki/Pci_passthrough#Verify_IOMMU_isolation
It seems even if my VMs have DHCP enabled and VM get IP from DHCP server I have no internet connection (just LAN available/browseable). I've tested connections on Win2016 VM. With network diagnostics I get the info: default gateway is inaccessible (or something), while successfully pinging lan devices. Funny thing is... when loop a ping to lets say 8.8.8.8 I sometimes get some packet responds.
This sounds like a routing problem outside PVE.
In your configuration, PVE does not touch the network traffic from the VM.
The gateway on vmbr0 is only for the PVE host itself.
 
This sounds like a routing problem outside PVE.
In your configuration, PVE does not touch the network traffic from the VM.
The gateway on vmbr0 is only for the PVE host itself.

OK, I've made some progress setting up the network.
eno1 is now my main Proxmox interface (with a manual IP/netmask/gateway set).
Next I created 3 vmbrs (each with a single eno attached, IP/netmask/gateway left blank), then attached each vmbr to a single VM. Not sure if it's an optimal setup, but it seems to work. I gave up on the link aggregation setup though...
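For reference, the one-bridge-per-NIC layout described above would look roughly like this in /etc/network/interfaces (bridge names are a guess at the setup, not copied from the actual config):

```shell
# one bridge per physical NIC, no IP on the host side --
# each VM is attached to exactly one of these bridges
auto vmbr1
iface vmbr1 inet manual
    bridge_ports eno2
    bridge_stp off
    bridge_fd 0

auto vmbr2
iface vmbr2 inet manual
    bridge_ports eno3
    bridge_stp off
    bridge_fd 0

auto vmbr3
iface vmbr3 inet manual
    bridge_ports eno4
    bridge_stp off
    bridge_fd 0
```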

Now, my iommus:

root@*****:~# find /sys/kernel/iommu_groups/ -type l
/sys/kernel/iommu_groups/17/devices/0000:fe:02.5
/sys/kernel/iommu_groups/17/devices/0000:fe:02.3
/sys/kernel/iommu_groups/17/devices/0000:fe:02.1
/sys/kernel/iommu_groups/17/devices/0000:fe:02.4
/sys/kernel/iommu_groups/17/devices/0000:fe:02.2
/sys/kernel/iommu_groups/17/devices/0000:fe:02.0
/sys/kernel/iommu_groups/7/devices/0000:00:09.0
/sys/kernel/iommu_groups/25/devices/0000:ff:04.0
/sys/kernel/iommu_groups/25/devices/0000:ff:04.3
/sys/kernel/iommu_groups/25/devices/0000:ff:04.1
/sys/kernel/iommu_groups/25/devices/0000:ff:04.2
/sys/kernel/iommu_groups/15/devices/0000:03:00.0
/sys/kernel/iommu_groups/5/devices/0000:00:06.0
/sys/kernel/iommu_groups/23/devices/0000:ff:02.3
/sys/kernel/iommu_groups/23/devices/0000:ff:02.1
/sys/kernel/iommu_groups/23/devices/0000:ff:02.4
/sys/kernel/iommu_groups/23/devices/0000:ff:02.2
/sys/kernel/iommu_groups/23/devices/0000:ff:02.0
/sys/kernel/iommu_groups/23/devices/0000:ff:02.5
/sys/kernel/iommu_groups/13/devices/0000:01:00.1
/sys/kernel/iommu_groups/13/devices/0000:01:00.0
/sys/kernel/iommu_groups/3/devices/0000:00:04.0
/sys/kernel/iommu_groups/21/devices/0000:fe:06.1
/sys/kernel/iommu_groups/21/devices/0000:fe:06.2
/sys/kernel/iommu_groups/21/devices/0000:fe:06.0
/sys/kernel/iommu_groups/21/devices/0000:fe:06.3
/sys/kernel/iommu_groups/11/devices/0000:00:1e.0
/sys/kernel/iommu_groups/11/devices/0000:08:03.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/18/devices/0000:fe:03.4
/sys/kernel/iommu_groups/18/devices/0000:fe:03.2
/sys/kernel/iommu_groups/18/devices/0000:fe:03.0
/sys/kernel/iommu_groups/18/devices/0000:fe:03.1
/sys/kernel/iommu_groups/8/devices/0000:00:14.2
/sys/kernel/iommu_groups/8/devices/0000:00:14.0
/sys/kernel/iommu_groups/8/devices/0000:00:14.1
/sys/kernel/iommu_groups/26/devices/0000:ff:05.3
/sys/kernel/iommu_groups/26/devices/0000:ff:05.1
/sys/kernel/iommu_groups/26/devices/0000:ff:05.2
/sys/kernel/iommu_groups/26/devices/0000:ff:05.0
/sys/kernel/iommu_groups/16/devices/0000:fe:00.0
/sys/kernel/iommu_groups/16/devices/0000:fe:00.1
/sys/kernel/iommu_groups/6/devices/0000:00:07.0
/sys/kernel/iommu_groups/24/devices/0000:ff:03.4
/sys/kernel/iommu_groups/24/devices/0000:ff:03.2
/sys/kernel/iommu_groups/24/devices/0000:ff:03.0
/sys/kernel/iommu_groups/24/devices/0000:ff:03.1
/sys/kernel/iommu_groups/14/devices/0000:02:00.1
/sys/kernel/iommu_groups/14/devices/0000:02:00.0
/sys/kernel/iommu_groups/4/devices/0000:00:05.0
/sys/kernel/iommu_groups/22/devices/0000:ff:00.0
/sys/kernel/iommu_groups/22/devices/0000:ff:00.1
/sys/kernel/iommu_groups/12/devices/0000:00:1f.0
/sys/kernel/iommu_groups/2/devices/0000:00:03.0
/sys/kernel/iommu_groups/20/devices/0000:fe:05.3
/sys/kernel/iommu_groups/20/devices/0000:fe:05.1
/sys/kernel/iommu_groups/20/devices/0000:fe:05.2
/sys/kernel/iommu_groups/20/devices/0000:fe:05.0
/sys/kernel/iommu_groups/10/devices/0000:00:1d.0
/sys/kernel/iommu_groups/10/devices/0000:00:1d.7
/sys/kernel/iommu_groups/10/devices/0000:00:1d.1
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/19/devices/0000:fe:04.2
/sys/kernel/iommu_groups/19/devices/0000:fe:04.0
/sys/kernel/iommu_groups/19/devices/0000:fe:04.3
/sys/kernel/iommu_groups/19/devices/0000:fe:04.1
/sys/kernel/iommu_groups/9/devices/0000:00:1a.0
/sys/kernel/iommu_groups/9/devices/0000:00:1a.7
/sys/kernel/iommu_groups/9/devices/0000:00:1a.1
/sys/kernel/iommu_groups/27/devices/0000:ff:06.2
/sys/kernel/iommu_groups/27/devices/0000:ff:06.0
/sys/kernel/iommu_groups/27/devices/0000:ff:06.3
/sys/kernel/iommu_groups/27/devices/0000:ff:06.1


My CPUs are: 2x Intel Xeon X5670 - those should fully support passthrough setup.
 
My CPUs are: 2x Intel Xeon X5670 - those should fully support passthrough setup.
You can only pass through the whole IOMMU group, not a single member of the group.
If the device is not isolated in its own IOMMU group, your CPU may support the IOMMU, but that is worth nothing.
The IOMMU groups are determined by the mainboard vendor (BIOS and hardware layout).

The output you have sent is not useful.
Please use this script to get a sorted and meaningful output.

Code:
#!/bin/bash
shopt -s nullglob
for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=${d#*/iommu_groups/*}; n=${n%%/*}
    printf 'IOMMU Group %s ' "$n"
    lspci -nns "${d##*/}"
done;
 
Here's script output:
Code:
root@*****:/home/tools# ./iommu.sh
IOMMU Group 0 00:00.0 Host bridge [0600]: Intel Corporation 5520 I/O Hub to ESI Port [8086:3406] (rev 13)
IOMMU Group 10 00:1d.0 USB controller [0c03]: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 [8086:2934] (rev 02)
IOMMU Group 10 00:1d.1 USB controller [0c03]: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 [8086:2935] (rev 02)
IOMMU Group 10 00:1d.7 USB controller [0c03]: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 [8086:293a] (rev 02)
IOMMU Group 11 00:1e.0 PCI bridge [0604]: Intel Corporation 82801 PCI Bridge [8086:244e] (rev 92)
IOMMU Group 11 08:03.0 VGA compatible controller [0300]: Matrox Electronics Systems Ltd. MGA G200eW WPCM450 [102b:0532] (rev 0a)
IOMMU Group 12 00:1f.0 ISA bridge [0601]: Intel Corporation 82801IB (ICH9) LPC Interface Controller [8086:2918] (rev 02)
IOMMU Group 13 01:00.0 Ethernet controller [0200]: Broadcom Limited NetXtreme II BCM5709 Gigabit Ethernet [14e4:1639] (rev 20)
IOMMU Group 13 01:00.1 Ethernet controller [0200]: Broadcom Limited NetXtreme II BCM5709 Gigabit Ethernet [14e4:1639] (rev 20)
IOMMU Group 14 02:00.0 Ethernet controller [0200]: Broadcom Limited NetXtreme II BCM5709 Gigabit Ethernet [14e4:1639] (rev 20)
IOMMU Group 14 02:00.1 Ethernet controller [0200]: Broadcom Limited NetXtreme II BCM5709 Gigabit Ethernet [14e4:1639] (rev 20)
IOMMU Group 15 03:00.0 RAID bus controller [0104]: LSI Logic / Symbios Logic MegaRAID SAS 2108 [Liberator] [1000:0079] (rev 05)
IOMMU Group 16 fe:00.0 Host bridge [0600]: Intel Corporation Xeon 5600 Series QuickPath Architecture Generic Non-core Registers [8086:2c70] (rev 02)
IOMMU Group 16 fe:00.1 Host bridge [0600]: Intel Corporation Xeon 5600 Series QuickPath Architecture System Address Decoder [8086:2d81] (rev 02)
IOMMU Group 17 fe:02.0 Host bridge [0600]: Intel Corporation Xeon 5600 Series QPI Link 0 [8086:2d90] (rev 02)
IOMMU Group 17 fe:02.1 Host bridge [0600]: Intel Corporation Xeon 5600 Series QPI Physical 0 [8086:2d91] (rev 02)
IOMMU Group 17 fe:02.2 Host bridge [0600]: Intel Corporation Xeon 5600 Series Mirror Port Link 0 [8086:2d92] (rev 02)
IOMMU Group 17 fe:02.3 Host bridge [0600]: Intel Corporation Xeon 5600 Series Mirror Port Link 1 [8086:2d93] (rev 02)
IOMMU Group 17 fe:02.4 Host bridge [0600]: Intel Corporation Xeon 5600 Series QPI Link 1 [8086:2d94] (rev 02)
IOMMU Group 17 fe:02.5 Host bridge [0600]: Intel Corporation Xeon 5600 Series QPI Physical 1 [8086:2d95] (rev 02)
IOMMU Group 18 fe:03.0 Host bridge [0600]: Intel Corporation Xeon 5600 Series Integrated Memory Controller Registers [8086:2d98] (rev 02)
IOMMU Group 18 fe:03.1 Host bridge [0600]: Intel Corporation Xeon 5600 Series Integrated Memory Controller Target Address Decoder [8086:2d99] (rev 02)
IOMMU Group 18 fe:03.2 Host bridge [0600]: Intel Corporation Xeon 5600 Series Integrated Memory Controller RAS Registers [8086:2d9a] (rev 02)
IOMMU Group 18 fe:03.4 Host bridge [0600]: Intel Corporation Xeon 5600 Series Integrated Memory Controller Test Registers [8086:2d9c] (rev 02)
IOMMU Group 19 fe:04.0 Host bridge [0600]: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Control [8086:2da0] (rev 02)
IOMMU Group 19 fe:04.1 Host bridge [0600]: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Address [8086:2da1] (rev 02)
IOMMU Group 19 fe:04.2 Host bridge [0600]: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Rank [8086:2da2] (rev 02)
IOMMU Group 19 fe:04.3 Host bridge [0600]: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Thermal Control [8086:2da3] (rev 02)
IOMMU Group 1 00:01.0 PCI bridge [0604]: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 1 [8086:3408] (rev 13)
IOMMU Group 20 fe:05.0 Host bridge [0600]: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Control [8086:2da8] (rev 02)
IOMMU Group 20 fe:05.1 Host bridge [0600]: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Address [8086:2da9] (rev 02)
IOMMU Group 20 fe:05.2 Host bridge [0600]: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Rank [8086:2daa] (rev 02)
IOMMU Group 20 fe:05.3 Host bridge [0600]: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Thermal Control [8086:2dab] (rev 02)
IOMMU Group 21 fe:06.0 Host bridge [0600]: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Control [8086:2db0] (rev 02)
IOMMU Group 21 fe:06.1 Host bridge [0600]: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Address [8086:2db1] (rev 02)
IOMMU Group 21 fe:06.2 Host bridge [0600]: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Rank [8086:2db2] (rev 02)
IOMMU Group 21 fe:06.3 Host bridge [0600]: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Thermal Control [8086:2db3] (rev 02)
IOMMU Group 22 ff:00.0 Host bridge [0600]: Intel Corporation Xeon 5600 Series QuickPath Architecture Generic Non-core Registers [8086:2c70] (rev 02)
IOMMU Group 22 ff:00.1 Host bridge [0600]: Intel Corporation Xeon 5600 Series QuickPath Architecture System Address Decoder [8086:2d81] (rev 02)
IOMMU Group 23 ff:02.0 Host bridge [0600]: Intel Corporation Xeon 5600 Series QPI Link 0 [8086:2d90] (rev 02)
IOMMU Group 23 ff:02.1 Host bridge [0600]: Intel Corporation Xeon 5600 Series QPI Physical 0 [8086:2d91] (rev 02)
IOMMU Group 23 ff:02.2 Host bridge [0600]: Intel Corporation Xeon 5600 Series Mirror Port Link 0 [8086:2d92] (rev 02)
IOMMU Group 23 ff:02.3 Host bridge [0600]: Intel Corporation Xeon 5600 Series Mirror Port Link 1 [8086:2d93] (rev 02)
IOMMU Group 23 ff:02.4 Host bridge [0600]: Intel Corporation Xeon 5600 Series QPI Link 1 [8086:2d94] (rev 02)
IOMMU Group 23 ff:02.5 Host bridge [0600]: Intel Corporation Xeon 5600 Series QPI Physical 1 [8086:2d95] (rev 02)
IOMMU Group 24 ff:03.0 Host bridge [0600]: Intel Corporation Xeon 5600 Series Integrated Memory Controller Registers [8086:2d98] (rev 02)
IOMMU Group 24 ff:03.1 Host bridge [0600]: Intel Corporation Xeon 5600 Series Integrated Memory Controller Target Address Decoder [8086:2d99] (rev 02)
IOMMU Group 24 ff:03.2 Host bridge [0600]: Intel Corporation Xeon 5600 Series Integrated Memory Controller RAS Registers [8086:2d9a] (rev 02)
IOMMU Group 24 ff:03.4 Host bridge [0600]: Intel Corporation Xeon 5600 Series Integrated Memory Controller Test Registers [8086:2d9c] (rev 02)
IOMMU Group 25 ff:04.0 Host bridge [0600]: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Control [8086:2da0] (rev 02)
IOMMU Group 25 ff:04.1 Host bridge [0600]: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Address [8086:2da1] (rev 02)
IOMMU Group 25 ff:04.2 Host bridge [0600]: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Rank [8086:2da2] (rev 02)
IOMMU Group 25 ff:04.3 Host bridge [0600]: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Thermal Control [8086:2da3] (rev 02)
IOMMU Group 26 ff:05.0 Host bridge [0600]: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Control [8086:2da8] (rev 02)
IOMMU Group 26 ff:05.1 Host bridge [0600]: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Address [8086:2da9] (rev 02)
IOMMU Group 26 ff:05.2 Host bridge [0600]: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Rank [8086:2daa] (rev 02)
IOMMU Group 26 ff:05.3 Host bridge [0600]: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Thermal Control [8086:2dab] (rev 02)
IOMMU Group 27 ff:06.0 Host bridge [0600]: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Control [8086:2db0] (rev 02)
IOMMU Group 27 ff:06.1 Host bridge [0600]: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Address [8086:2db1] (rev 02)
IOMMU Group 27 ff:06.2 Host bridge [0600]: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Rank [8086:2db2] (rev 02)
IOMMU Group 27 ff:06.3 Host bridge [0600]: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Thermal Control [8086:2db3] (rev 02)
IOMMU Group 2 00:03.0 PCI bridge [0604]: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 3 [8086:340a] (rev 13)
IOMMU Group 3 00:04.0 PCI bridge [0604]: Intel Corporation 5520/X58 I/O Hub PCI Express Root Port 4 [8086:340b] (rev 13)
IOMMU Group 4 00:05.0 PCI bridge [0604]: Intel Corporation 5520/X58 I/O Hub PCI Express Root Port 5 [8086:340c] (rev 13)
IOMMU Group 5 00:06.0 PCI bridge [0604]: Intel Corporation 5520/X58 I/O Hub PCI Express Root Port 6 [8086:340d] (rev 13)
IOMMU Group 6 00:07.0 PCI bridge [0604]: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 7 [8086:340e] (rev 13)
IOMMU Group 7 00:09.0 PCI bridge [0604]: Intel Corporation 7500/5520/5500/X58 I/O Hub PCI Express Root Port 9 [8086:3410] (rev 13)
IOMMU Group 8 00:14.0 PIC [0800]: Intel Corporation 7500/5520/5500/X58 I/O Hub System Management Registers [8086:342e] (rev 13)
IOMMU Group 8 00:14.1 PIC [0800]: Intel Corporation 7500/5520/5500/X58 I/O Hub GPIO and Scratch Pad Registers [8086:3422] (rev 13)
IOMMU Group 8 00:14.2 PIC [0800]: Intel Corporation 7500/5520/5500/X58 I/O Hub Control Status and RAS Registers [8086:3423] (rev 13)
IOMMU Group 9 00:1a.0 USB controller [0c03]: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 [8086:2937] (rev 02)
IOMMU Group 9 00:1a.1 USB controller [0c03]: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 [8086:2938] (rev 02)
IOMMU Group 9 00:1a.7 USB controller [0c03]: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 [8086:293c] (rev 02)

BTW: If multiple VMs share the same virtual bridge (and all VMs are in the same network), what transfer speed can I get between them? The network adapters are set to the VirtIO model, and the virtual adapters inside the VMs show a 10 Gbps link speed. So is network traffic between VMs (inside the host itself) limited by the physical network (say, a 100 Mbps or 1000 Mbps LAN infrastructure)?
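(Editor's note, hedged: traffic between two VMs on the same bridge never leaves the host, so it is limited by CPU and memory bandwidth rather than by the physical NIC; the 10 Gbps shown by VirtIO is just a nominal link speed. A quick way to measure the actual rate is iperf3 between two VMs; the IP address below is a placeholder, not from this thread.)

```shell
# on VM 1 (server side):
iperf3 -s

# on VM 2 (client side), pointing at VM 1's address:
iperf3 -c 192.168.1.101 -t 10
```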
 
