ambad4u

greetings to all,

I am currently evaluating Proxmox and I'm able to create "regular" VMs without problems.

Going past creating regular VMs, I'm trying to pass through some Intel Ethernet cards, and I run into problems if:
-- two physical Intel Ethernet cards are passed through to a single guest
-- a single physical Intel Ethernet card is passed through to one guest and another card to a second guest with the same config

note:
there are no problems if only one physical device is passed through to a single VM and nothing else is passed through.

my hardware:
E3-1220L v2
ASRock P67 Extreme4

other info:
intel_iommu=on is set on the kernel command line
vfio is loaded on boot
added an entry to /etc/modprobe.d/vfio.conf with "options vfio-pci ids=8086:109a,8086:109a,8086:109a" (all three 82573L NICs share the same 8086:109a ID, so listing it once would suffice)
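For reference, the usual way to set this up on a stock GRUB-based Proxmox install looks roughly like this (a sketch following the PCI passthrough wiki; adjust the files to your system):
Code:
# /etc/default/grub -- enable the IOMMU on the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

# /etc/modules -- load the vfio modules at boot
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

# then apply the changes and reboot
update-grub
update-initramfs -u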

lspci ethernet info
Code:
01:00.0 Ethernet controller: Intel Corporation 82573L Gigabit Ethernet Controller (rev 03)
02:00.0 Ethernet controller: Intel Corporation 82573L Gigabit Ethernet Controller
03:00.0 Ethernet controller: Intel Corporation 82573L Gigabit Ethernet Controller (rev 03)
0d:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 06)



scenario 1: single VM with 2 physical NICs passed through
guest config:
Code:
boot: d
cores: 2
cpu: host
hostpci0: 01:00.0,pcie=1
hostpci1: 02:00.0,pcie=1
ide2: local:iso/asg-9.314-13.1.iso,media=cdrom,size=743448K
machine: q35
memory: 4096
name: sophos
net0: virtio=62:33:35:66:36:63,bridge=vmbr0
numa: 0
ostype: l26
smbios1: uuid=1dc7bc8d-b6d0-4eb8-aa58-5531a2e8e5ab
sockets: 1
startup: order=1,up=180
virtio0: lvm_50g:vm-100-disk-1,discard=on,backup=no,iothread=on,size=32G



scenario 2: a single physical NIC passed through per guest VM
(2x individual physical Intel NICs passed through, one per guest)
1st guest config (same as above but modified):
Code:
boot: d
cores: 2
cpu: host
hostpci0: 01:00.0,pcie=1
ide2: local:iso/asg-9.314-13.1.iso,media=cdrom,size=743448K
machine: q35
memory: 4096
name: sophos
net0: virtio=62:33:35:66:36:63,bridge=vmbr0
numa: 0
ostype: l26
smbios1: uuid=1dc7bc8d-b6d0-4eb8-aa58-5531a2e8e5ab
sockets: 1
startup: order=1,up=180
virtio0: lvm_50g:vm-100-disk-1,discard=on,backup=no,iothread=on,size=32G

2nd guest VM config:
Code:
bootdisk: ide0
cores: 1
ide0: local:101/vm-101-disk-1.qcow2,size=1G
ide2: local:iso/asg-9.314-13.1.iso,media=cdrom
memory: 1024
name: test2
net0: e1000=32:31:39:65:33:63,bridge=vmbr0
numa: 0
ostype: l26
smbios1: uuid=ec3ec39e-4d9c-469f-9655-d0fe4352bc6c
sockets: 1
machine: q35
hostpci0: 03:00.0,pcie=1



both scenarios give the same output in dmesg:
Code:
vfio-pci 0000:01:00.0: enabling device (0000 -> 0003)
...
vfio-pci 0000:03:00.0: enabling device (0000 -> 0003)
...
...
vfio_bar_restore: 0000:03:00.0 reset recovery - restoring bars


The VMs will work fine, BUT it seems the vfio-pci assignments land in the same place/spot, and therefore the second Ethernet card to come up will not work even though it is detected without problems.
(the 2nd card can be seen, but there is no connectivity inside the guest, as its assignment seems to be hindered by the first card)

any insights?
if you need further info, just let me know.
thanks in advance!
 
This will most likely not answer your question, but it might be helpful:

- Why are you trying to pass physical NICs to the VM? (I can understand this for a WLAN controller.) An example use case might make this easier to comprehend.
- If it's for performance reasons (vs. native Linux bridging), have you tried Open vSwitch? If not, have a look here: https://pve.proxmox.com/wiki/Open_vSwitch
- Are your NICs in the same IOMMU group?
Code:
 find /sys/kernel/iommu_groups/ -type l
if so, check this: http://vfio.blogspot.de/2014/08/iommu-groups-inside-and-out.html
 
hello Q-wulf

the reasons I'm trying to pass through 'devices' to VMs are:
-- trying to experiment
-- to give the VM exclusive rights to a device
-- and the NICs specifically are passed through for use by a UTM appliance and by 1 VM that serves diskless workstations

I know some will argue that passing NICs to VMs will only give 'slight' improvements?

as for OVS, I'll try that later, once these NICs/devices are properly passed through

and lastly, regarding IOMMU groups, I somewhat overlooked these :(. Two of the NICs do indeed share the same group :(, so I'll try to move them around.
Code:
/sys/kernel/iommu_groups/1/devices/0000:01:00.0
/sys/kernel/iommu_groups/1/devices/0000:02:00.0
...
/sys/kernel/iommu_groups/10/devices/0000:03:00.0

but even scenario #2, where the NICs are in different IOMMU groups, still shows these quirks.
 
-- to give the VM exclusive rights to a device
-- and the NICs specifically are passed through for use by a UTM appliance and by 1 VM that serves diskless workstations

I'd do this using OVS-based vmbrX's.
eth0 -> Bond0 -> Vmbr0
eth1 -> Bond0
eth2 -> Vmbr1
eth3 -> Vmbr2

Then on the VMs you do this:
VM1: vNic1 -> Vmbr1 (your UTM appliance)
VM2: vNic1 -> Vmbr2 (your Diskless Workstation Provider)
Other Vm: vNic1 -> Vmbr0
Other Vm: vNic2 -> Vmbr0

on the Proxmox node you do this:
ProxmoxNode (vNic1) -> OVS_intPort1 -> Vmbr0
ProxmoxNode (vNic2) -> OVS_intPort2 -> Vmbr0

And you have exclusive access to the NIC from said VM. No need to fiddle with passthrough.
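A minimal /etc/network/interfaces sketch of that layout, following the syntax from the Open_vSwitch wiki page linked above (only vmbr0/bond0 and vmbr1/eth2 shown, vmbr2/eth3 is analogous; the balance-slb mode is just an assumption):
Code:
auto vmbr0
allow-ovs vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond0

allow-vmbr0 bond0
iface bond0 inet manual
    ovs_bridge vmbr0
    ovs_type OVSBond
    ovs_bonds eth0 eth1
    ovs_options bond_mode=balance-slb

auto vmbr1
allow-ovs vmbr1
iface vmbr1 inet manual
    ovs_type OVSBridge
    ovs_ports eth2

allow-vmbr1 eth2
iface eth2 inet manual
    ovs_bridge vmbr1
    ovs_type OVSPort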

We still use this on a couple of Proxmox/Ceph nodes, where we gave a virtual openMediaVault exclusive access to 1x40G (vmbr3), while the Proxmox node runs Ceph via 2x40G (vmbr1, public) and 1x40G (vmbr2, cluster) and Proxmox itself runs via 2x10G (vmbr0).

Normally though (since moving to Software-Defined Networking, SDN), we just bond all NICs (e.g. 4x40G) into a single OVS bond attached to a single OVS bridge (vmbr0) and have an SDN controller take care of fine-grained QoS (rate limits + bursts) directly on the bridge (vmbr0). A lot more efficient that way.
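As an illustration, a per-VM rate limit of that kind can be set with standard ovs-vsctl ingress policing; the tap name below is just an example (Proxmox names the first vNIC of VM 100 tap100i0):
Code:
# cap the interface at ~1 Gbit/s with a 100 Mbit burst (values are in kbps)
ovs-vsctl set interface tap100i0 ingress_policing_rate=1000000
ovs-vsctl set interface tap100i0 ingress_policing_burst=100000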

I know some will argue that passing NICs to VMs will only give 'slight' improvements?
AFAIK that "improvement" is negligible compared to using Open vSwitch; if I had the choice, I'd rather not deal with passing through a NIC. This is especially true if you do your QoS not via "links and separate switches" but rather via a "Layer 3 switch" or an "SDN controller". But then I am not a network guy.
 
thanks for the quick response sir!

anyway, what you have suggested will be my fallback approach in case I give up on NIC passthrough.

also, off topic (I might need to open a new thread for this): since you have experience with bonding, have you tried bonding NICs on a switch without LACP support? And does that increase 'performance' a bit?
 
I cannot say that I have tried that. We run work, lab, and my homelab using OVS-based LACP (balance-rr) and Layer 3 switches. There was never a question about this, ever.

If I remember correctly, and did not completely snooze through my network refresher course, you should generally be able to use active-backup or balance-slb without any special switch requirements; balance-tcp, on the other hand, does need LACP support on the switch (as in port grouping, called LAG, EtherChannel, or trunk group depending on the vendor).

But do not nail me on that.
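For example, switching the bond0 definition sketched earlier to a mode that needs no switch-side LAG would just be (same assumed eth0/eth1 naming as before):
Code:
allow-vmbr0 bond0
iface bond0 inet manual
    ovs_bridge vmbr0
    ovs_type OVSBond
    ovs_bonds eth0 eth1
    # active-backup: one link carries traffic, the other takes over on
    # failure; no LAG/EtherChannel needed on the switch
    ovs_options bond_mode=active-backup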

Edit: the OVS approach will also make your VM easily migratable (live, or via HA without user input), should you have multiple Proxmox nodes in one cluster, as opposed to the passthrough approach. A big bonus in my book.

Edit2: Bonding (depending on the mode) lets you use all your links more efficiently. The fewest OVS bridges you can get away with (depending on how you do QoS) in the end provides the most efficiency with regard to your available bandwidth capacity.
 
I think I added a second edit as you typed this.
Edit2: Bonding (depending on the mode) lets you use all your links more efficiently. The fewest OVS bridges you can get away with (depending on how you do QoS) in the end provides the most efficiency with regard to your available bandwidth capacity.

anyways, good luck with passthrough :)
 
