Hi,

Context:
I have a bare metal server hosted by OVH.
This server has Proxmox VE 8.1.10 installed.
This server has 4 NICs.
Code:
# lspci -nnk | grep -A 5 "Ethernet"
03:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Connection X552/X557-AT 10GBASE-T [8086:15ad]
    DeviceName:  Intel X557-AT2 Ethernet #1
    Subsystem: Super Micro Computer Inc Ethernet Connection X552/X557-AT 10GBASE-T [15d9:15ad]
    Kernel driver in use: ixgbe
    Kernel modules: ixgbe
03:00.1 Ethernet controller [0200]: Intel Corporation Ethernet Connection X552/X557-AT 10GBASE-T [8086:15ad]
    DeviceName:  Intel X557-AT2 Ethernet #2
    Subsystem: Super Micro Computer Inc Ethernet Connection X552/X557-AT 10GBASE-T [15d9:15ad]
    Kernel driver in use: vfio-pci
    Kernel modules: ixgbe
07:00.0 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)
    DeviceName:  Intel i350 Ethernet #1
    Subsystem: Super Micro Computer Inc I350 Gigabit Network Connection [15d9:1521]
    Kernel driver in use: igb
    Kernel modules: igb
07:00.1 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)
    DeviceName:  Intel i350 Ethernet #2
    Subsystem: Super Micro Computer Inc I350 Gigabit Network Connection [15d9:1521]
    Kernel driver in use: igb
    Kernel modules: igb
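
(For reference, this is how I match the interface names to those PCI addresses; just the standard sysfs listing, nothing OVH-specific:)
Code:
# each symlink target ends in .../<pci-address>/net/<interface>
ls -l /sys/class/net/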

I have two IP addresses for this server. I will refer to them as <IP_MGNT> and <IP_PROXY>.
<IP_MGNT> is my management IP address, directly attached to the server; it is used to manage my Proxmox host.
<IP_PROXY> is the IP address that I want to bind to an HAProxy VM to expose all my services to the internet through a reverse proxy.
I have followed instructions to configure IOMMU on my server.
What I have done so far:
I modified /etc/default/grub to enable the IOMMU by adding intel_iommu=on:
Code:
GRUB_CMDLINE_LINUX="nomodeset iommu=pt console=tty0 console=ttyS1,115200n8 intel_iommu=on"

I executed update-grub
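
(For context, this is the check I use after a reboot to confirm the parameter actually reached the kernel command line:)
Code:
# intel_iommu=on and iommu=pt should both appear in the output
cat /proc/cmdline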
I modified /etc/modules with the following content
Code:
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
# Parameters can be specified after the module name.
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

I rebooted my server
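
(After the reboot, the vfio modules can be confirmed loaded; standard check, shown only for completeness:)
Code:
# vfio, vfio_iommu_type1 and vfio_pci should all be listed
lsmod | grep vfio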
I ran the command dmesg | grep -e DMAR -e IOMMU to check if IOMMU was enabled
Code:
[    0.147433] DMAR: IOMMU enabled

I checked, and my NICs are in different IOMMU groups.
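
(For reference, I listed the groups with the usual sysfs one-liner:)
Code:
# one symlink per device, under /sys/kernel/iommu_groups/<group>/devices/<pci-address>
find /sys/kernel/iommu_groups/ -type l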
I modified /etc/modprobe.d/vfio.conf with the following content
Code:
options vfio-pci ids=03:00.1

I modified /etc/network/interfaces to remove config related to this NIC
Code:
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!
auto lo
iface lo inet loopback
auto eno1
iface eno1 inet static
    address <IP_MGNT>/24
    gateway <GATEWAY>
#Management
#iface eno2 inet manual <= removed
iface eno3 inet manual
iface eno4 inet manual
auto vmbr0
iface vmbr0 inet static
    address 10.10.10.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
#Default

I rebooted my server
I added the NIC to the VM through the UI
In /etc/pve/qemu-server/<VM_ID>.conf I have the line:
Code:
hostpci0: 0000:03:00.1
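
(I assume the UI step is equivalent to the following qm command; I did it through the web UI, so this is only for reference:)
Code:
qm set <VM_ID> -hostpci0 0000:03:00.1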

In my VM (Ubuntu 22.04), I ran ip link show
Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether bc:24:11:74:fa:38 brd ff:ff:ff:ff:ff:ff
    altname enp0s18
3: ens16: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
    link/ether 0c:c4:7a:7b:5c:63 brd ff:ff:ff:ff:ff:ff
    altname enp0s16

I modified my netplan configuration, applied it, and rebooted my VM
YAML:
# This is the network config written by 'subiquity'
network:
  renderer: networkd
  ethernets:
    ens18:
      addresses:
        - 10.10.10.2/24
    ens16:
      addresses:
        - <IP_PROXY>/24
      nameservers:
        addresses:
          - 4.2.2.2
          - 8.8.8.8
      routes:
        - to: default
          via: <GATEWAY>
  version: 2
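
(The apply step was the standard one, listed here only for completeness:)
Code:
sudo netplan apply
sudo reboot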

Result:
I can see on my host that the card shows Kernel driver in use: vfio-pci, but the network interface stays down in the VM.
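
(If it helps, these are the extra checks I can run inside the VM; commands only, I have not captured their output here:)
Code:
# bring the interface up explicitly, then look at carrier/link state
sudo ip link set ens16 up
ip -br link show ens16
sudo ethtool ens16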
What did I do wrong?