Physical NIC assignment for LXC containers in Proxmox 7.2

Big Hornet

New Member
May 24, 2022
Hello all, thanks in advance for your help.

I've just joined the forum and am looking forward to interacting here.

I'd really like to be able to connect one of my containers to a physical NIC on the host (without bridging), using standard LXC config keys:

/etc/pve/nodes/pve/lxc/103.conf

arch: amd64
cores: 1
features: nesting=1
hostname: test
memory: 512
ostype: alpine
rootfs: local-lvm:vm-103-disk-0,size=1G
swap: 512
unprivileged: 1
lxc.net.1.link: enp4s0
lxc.net.1.type: phys
lxc.net.1.flags: up
lxc.net.1.name: eth1




Task viewer: CT 103 - start


netdev_configure_server_phys: 1163 No such file or directory - No link for physical interface specified
lxc_create_network_priv: 3413 No such file or directory - Failed to create network device
lxc_spawn: 1843 Failed to create the network
__lxc_start: 2074 Failed to spawn container "103"
TASK ERROR: startup for container '103' failed
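
One hedged note on this error: the lxc.container.conf man page states that lxc.net.[i].type must be specified before any other option for that network device, and in the config above lxc.net.1.link comes before lxc.net.1.type. If the link value is dropped because of that ordering, LXC is left with an empty link for the phys device and reports exactly this "No link for physical interface specified" message. A minimal sketch of the same raw keys with the type first (same names as above, just reordered, untested on this host):

# assumption: lxc.net.[i].type has to come before the other lxc.net.[i].* keys
lxc.net.1.type: phys
lxc.net.1.link: enp4s0
lxc.net.1.flags: up
lxc.net.1.name: eth1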




CPU(s): 4 x Intel(R) Celeron(R) J4125 CPU @ 2.00GHz (1 Socket)
Kernel Version: Linux 5.15.35-1-pve #1 SMP PVE 5.15.35-3 (Wed, 11 May 2022 07:57:51 +0200)
PVE Manager Version: pve-manager/7.2-4/ca9d43c


 

bobmc

Well-Known Member
May 17, 2018
Just out of curiosity... what are your reasons for not using the bridges?
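
For reference, the bridged setup being asked about would normally be a Proxmox-managed netX entry in 103.conf rather than raw lxc.net.* keys; a minimal sketch, with placeholder IP settings that are not taken from this thread:

net0: name=eth1,bridge=vmbr0,ip=dhcp,firewall=1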
 

Big Hornet

New Member
May 24, 2022
Hi, can you post the output of ip link?
root@pve:~# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
4: enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether c4:83:4f:12:04:1e brd ff:ff:ff:ff:ff:ff
5: enp5s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq master vmbr0 state DOWN mode DEFAULT group default qlen 1000
link/ether c4:83:4f:12:04:1f brd ff:ff:ff:ff:ff:ff
6: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether c4:83:4f:12:04:1f brd ff:ff:ff:ff:ff:ff
8: tap101i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN mode DEFAULT group default qlen 1000
link/ether 3e:22:50:c4:3e:81 brd ff:ff:ff:ff:ff:ff
10: veth106i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000
link/ether fe:40:d0:41:06:d6 brd ff:ff:ff:ff:ff:ff link-netnsid 1
51: veth102i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000
link/ether fe:45:17:56:c7:0b brd ff:ff:ff:ff:ff:ff link-netnsid 3
52: veth108i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000
link/ether fe:f8:dc:61:94:f3 brd ff:ff:ff:ff:ff:ff link-netnsid 0
54: tap100i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN mode DEFAULT group default qlen 1000
link/ether 26:f9:a9:75:54:31 brd ff:ff:ff:ff:ff:ff
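
For what it's worth, the output shows enp4s0 present and UP on the host and not enslaved to vmbr0 (the bridge port is enp5s0), so the name in lxc.net.1.link does match a real device. A hedged troubleshooting sketch, assuming the standard Proxmox 7.x tooling and paths:

# inspect the LXC config that Proxmox generates for the container at start time
cat /var/lib/lxc/103/config

# start the container in the foreground with debug logging to see where network setup fails
lxc-start -n 103 -F -l DEBUG -o /tmp/lxc-103.log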
 
