Adding a new veth with ipv4 on LXC containers does not add it in network/interfaces (Proxmox 6)

zorrobiwan
Jun 10, 2020
Hi,
I have a cluster of five Proxmox 6 hosts (I know, I need to upgrade them to v7).

For years on these hosts, I have been able to add a new veth to a container using the GUI and get the IP up and running in the container right away.
Today, when I add a veth (say eth1, with its MAC address, an IPv4 /32 and the default gateway), the IP does not come up and there is no config for eth1 in /etc/network/interfaces, even though the config is there when running pct config xxx:

Code:
>>pct config 99999
arch: amd64
cpulimit: 1
description:  Mounting fuse (for snap squashfs)%0A Mount cgroup in rw to get snaps working%0A
features: nesting=1,fuse=1
hostname: ***
memory: 2048
net0: name=eth0,bridge=vmbr0,firewall=1,gw=xx.xx.67.254,hwaddr=02:00:00:81:1f:43,ip=yy.yy.42.18/32,type=veth
net1: name=eth1,bridge=vmbr0,firewall=1,gw=xx.xx.67.254,hwaddr=02:00:00:bf:cf:0f,ip=zz.zz.18.80/32,type=veth
onboot: 0
ostype: debian
parent: autohourly211013130512
rootfs: zfs:subvol-99999-disk-0,size=50G
swap: 0
unprivileged: 1
lxc.mount.entry: /dev/fuse dev/fuse none bind,create=file,optional
lxc.mount.auto: cgroup:rw
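For reference, the GUI action should be equivalent to the following CLI call (a sketch only, using the same values as the net1 line above; 99999 is the container ID from this example):

```shell
# Hot-plug a second veth into container 99999 from the host.
# Adjust VMID, MAC, IP and gateway to your own setup.
pct set 99999 -net1 name=eth1,bridge=vmbr0,firewall=1,gw=xx.xx.67.254,hwaddr=02:00:00:bf:cf:0f,ip=zz.zz.18.80/32,type=veth
```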


Code:
>>cat /etc/network/interfaces
auto lo
iface lo inet loopback


auto eth0
iface eth0 inet static
        address yy.yy.42.18/32
# --- BEGIN PVE ---
        post-up ip route add xx.xx.67.254 dev eth0
        post-up ip route add default via xx.xx.67.254 dev eth0
        pre-down ip route del default via xx.xx.67.254 dev eth0
        pre-down ip route del xx.xx.67.254 dev eth0
# --- END PVE ---


Code:
>>ip a && ip r
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if200: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 02:00:00:81:1f:43 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet yy.yy.42.18/32 brd yy.yy.42.18 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 xxx scope link
       valid_lft forever preferred_lft forever
204: eth1@if205: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 02:00:00:bf:cf:0f brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 xxx scope link
       valid_lft forever preferred_lft forever
default via xx.xx.67.254 dev eth0
xx.xx.67.254 dev eth0 scope link



Then, if I reboot the container:

Code:
>>ip a && ip r
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if209: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 02:00:00:81:1f:43 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet yy.yy.42.18/32 brd yy.yy.42.18 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 xxx scope link
       valid_lft forever preferred_lft forever
3: eth1@if213: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 02:00:00:bf:cf:0f brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet zz.zz.18.80/32 brd zz.zz.18.80 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 xxx scope link
       valid_lft forever preferred_lft forever
default via xx.xx.67.254 dev eth0
xx.xx.67.254 dev eth0 scope link

The IP is now there, up and running.


Code:
>>cat /etc/network/interfaces
auto lo
iface lo inet loopback


auto eth0
iface eth0 inet static
    address yy.yy.42.18/32
# --- BEGIN PVE ---
    post-up ip route add xx.xx.67.254 dev eth0
    post-up ip route add default via xx.xx.67.254 dev eth0
    pre-down ip route del default via xx.xx.67.254 dev eth0
    pre-down ip route del xx.xx.67.254 dev eth0
# --- END PVE ---


auto eth1
iface eth1 inet static
    address zz.zz.18.80/32
# --- BEGIN PVE ---
    post-up ip route add xx.xx.67.254 dev eth1
    post-up ip route add default via xx.xx.67.254 dev eth1
    pre-down ip route del default via xx.xx.67.254 dev eth1
    pre-down ip route del xx.xx.67.254 dev eth1
# --- END PVE ---



After removing eth1 using the GUI:

Code:
>>ip a && ip r
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if209: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 02:00:00:81:1f:43 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet yy.yy.42.18/32 brd yy.yy.42.18 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 xxx scope link
       valid_lft forever preferred_lft forever
default via xx.xx.67.254 dev eth0
xx.xx.67.254 dev eth0 scope link


eth1 is gone from the container, but its stanza remains in the file:

Code:
>>cat /etc/network/interfaces
auto lo
iface lo inet loopback


auto eth0
iface eth0 inet static
    address yy.yy.42.18/32
# --- BEGIN PVE ---
    post-up ip route add xx.xx.67.254 dev eth0
    post-up ip route add default via xx.xx.67.254 dev eth0
    pre-down ip route del default via xx.xx.67.254 dev eth0
    pre-down ip route del xx.xx.67.254 dev eth0
# --- END PVE ---


auto eth1
iface eth1 inet static
    address zz.zz.18.80/32
# --- BEGIN PVE ---
    post-up ip route add xx.xx.67.254 dev eth1
    post-up ip route add default via xx.xx.67.254 dev eth1
    pre-down ip route del default via xx.xx.67.254 dev eth1
    pre-down ip route del xx.xx.67.254 dev eth1
# --- END PVE ---

Even if I reboot, eth1 is still in /etc/network/interfaces and I have to remove it by hand.
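Removing it by hand means deleting the whole eth1 stanza, including its PVE marker block. A minimal sketch, assuming the stanza looks exactly like the one shown above (work on a copy and review it before overwriting the real file):

```shell
# Delete the stale eth1 stanza, from the "auto eth1" line through its
# closing "# --- END PVE ---" marker, writing the result to a copy.
sed '/^auto eth1$/,/^# --- END PVE ---$/d' /etc/network/interfaces > /tmp/interfaces.cleaned
# Inspect /tmp/interfaces.cleaned, then move it into place and restart networking.
```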

I see the same behaviour on all my hosts:

Code:
pveversion --verbose
proxmox-ve: 6.4-1 (running kernel: 5.4.119-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
pve-kernel-5.4: 6.4-6
pve-kernel-helper: 6.4-6
pve-kernel-5.4.140-1-pve: 5.4.140-1
pve-kernel-5.4.128-1-pve: 5.4.128-2
pve-kernel-5.4.124-1-pve: 5.4.124-2
pve-kernel-5.4.119-1-pve: 5.4.119-1
pve-kernel-5.4.114-1-pve: 5.4.114-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-3
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.13-2
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-1
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.3-1
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
pve-zsync: 2.2
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.5-pve1~bpo10+1

Thanks for your help
 