SR-IOV VF on LXC containers?

gretz

New Member
Jan 28, 2022
I was wondering if this would be possible with containers?

My goal is to use a network card VF inside a container
 
Nope, the virtual functions do not appear as network cards to the host, only as PCIe devices:

Code:
ip link show enp1s0
3: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether a3:36:9f:54:3g:10 brd ff:ff:ff:ff:ff:ff
    vf 0     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off, query_rss off
    vf 1     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off, query_rss off
    vf 2     link/ether 26:cd:62:48:94:0d brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off, query_rss off
    vf 3     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off, query_rss off
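If the VFs show up on the PCI bus but never get a network interface on the host, the VF driver is usually missing or blacklisted. A quick check, assuming one VF sits at PCI address 01:10.0 (a placeholder, take the real address from lspci):

Code:
# list the virtual functions the NIC exposes on the PCI bus
lspci -nn | grep -i "virtual function"
# show which kernel driver, if any, is bound to one of them
lspci -ks 01:10.0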

Since we can't pass through PCIe devices to containers in the Proxmox web UI, I was wondering if it is possible to do it manually by editing the container config?
 
I have not touched SR-IOV in a few years, but I am confident that, when properly configured, the VFs are seen as eth/enp/whatever devices by the OS; that's the point.

The steps and functionality _heavily_ depend on your specific NIC; some vendors have terrible implementations and it just doesn't work.
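A quick sanity check before going further: the standard sysfs entries below tell you whether the PF driver supports SR-IOV at all (enp1s0 is a placeholder, substitute your own PF):

Code:
# maximum number of VFs the device/driver pair supports (absent or 0 = no SR-IOV)
cat /sys/class/net/enp1s0/device/sriov_totalvfs
# number of VFs currently enabled
cat /sys/class/net/enp1s0/device/sriov_numvfs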

Here is a boot script I created back in the day; I hope it helps you.

Code:
#!/bin/bash -x
#sleep 20
SDS=01   # octet embedded in the generated VF MAC addresses below
# Enable $2 VFs on interface $1 and report how many actually came up.
set_numfs()
{
    local dev=$1
    local num=$2

    echo "$num" > "/sys/class/net/${dev}/device/sriov_numvfs"
    sleep 1
    num=$(<"/sys/class/net/${dev}/device/sriov_numvfs")
    echo "$dev: $num"

    return 0
}

# Disable all VFs on interface $1.
clear_numfs()
{
    local dev=$1
    echo 0 > "/sys/class/net/${dev}/device/sriov_numvfs"
}

# Set the MAC address of a host interface.
set_mac()
{
    local dev=$1
    local mac=$2
    local ret

    ip link set "$dev" address "$mac"
    ret=$?
    if [[ $ret -ne 0 ]]; then
        >&2 echo "$dev: ip link set address failed: $ret"
        return $ret
    fi

    return 0
}

# Set the MAC address of VF $2 through its parent interface $1.
set_vmac()
{
    local dev=$1
    local vf=$2
    local mac=$3
    local ret

    ip link set "$dev" vf "$vf" mac "$mac"
    ret=$?
    if [[ $ret -ne 0 ]]; then
        >&2 echo "$dev: ip link set vf $vf mac failed: $ret"
        return $ret
    fi

    return 0
}

echo "Enable 5 VFs on each port"
set_numfs em1 5
set_numfs em2 5
ip link

#echo set MAC on VF interfaces
#set_mac em1_0  02:BB:00:04:01:00
#set_mac em2_0  02:BB:00:04:02:00
#ip link

#echo set MAC on vf functions
set_vmac em1 0 02:BB:00:$SDS:01:00
set_vmac em2 0 02:BB:00:$SDS:02:00
set_vmac em1 1 02:BB:00:$SDS:01:01
set_vmac em2 1 02:BB:00:$SDS:02:01
set_vmac em1 2 02:BB:00:$SDS:01:02
set_vmac em2 2 02:BB:00:$SDS:02:02
set_vmac em1 3 02:BB:00:$SDS:01:03
set_vmac em2 3 02:BB:00:$SDS:02:03
set_vmac em1 4 02:BB:00:$SDS:01:04
set_vmac em2 4 02:BB:00:$SDS:02:04
ip link

sleep 2
ifup be0    # bring up be0 as defined in /etc/network/interfaces
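
To run a script like this at boot you can wrap it in a oneshot systemd unit; a minimal sketch, assuming the script is saved as /usr/local/sbin/sriov-up.sh (path and unit name are mine, not anything Proxmox ships):

Code:
# /etc/systemd/system/sriov-vfs.service
[Unit]
Description=Create SR-IOV VFs and assign their MACs
Before=network.target

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/sriov-up.sh

[Install]
WantedBy=multi-user.target

Enable it once with systemctl enable sriov-vfs.service.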



 
Thanks for the script, but I thought the point of SR-IOV was to pass the device through without the host interfering with it, so it did not surprise me not to see the network card.
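For what it's worth, both views fit together: the physical function (PF) always stays under host control, because the PF driver is what creates and manages the VFs, but each individual VF is its own PCI function that can be detached from the host and handed to a VM. A sketch of rebinding one VF to vfio-pci, assuming its address is 0000:01:10.0 (a placeholder, check lspci):

Code:
VF=0000:01:10.0
# detach the VF from its host network driver
echo "$VF" > "/sys/bus/pci/devices/$VF/driver/unbind"
# tell the kernel to hand it to vfio-pci on the next probe
echo vfio-pci > "/sys/bus/pci/devices/$VF/driver_override"
echo "$VF" > /sys/bus/pci/drivers_probe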
 
I think this is only possible with LXD or KVM, not with LXC, since you can't pass a PCIe device to an LXC container. For Docker there is pipework. Correct me if I'm wrong.
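That said, while PCIe passthrough itself is VM-only, a VF that has a network interface on the host can be moved into a container's network namespace with LXC's phys network type, which Proxmox accepts as raw lxc keys in the container config. A sketch, assuming CT 101 and a VF netdev called enp1s0v0 (both placeholders, VF naming depends on the driver):

Code:
# /etc/pve/lxc/101.conf
lxc.net.1.type: phys
lxc.net.1.link: enp1s0v0
lxc.net.1.name: eth1
lxc.net.1.flags: up

With type phys the interface vanishes from the host while the container runs and is handed back when it stops.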