Persistent interface naming via systemd breaks VLAN tagging

I have a new Proxmox 4.2 cluster where the network interfaces are detected out of order. Following the usual guidance, I created systemd .link files to resolve this:
/etc/systemd/network/10-eth0.link:
Code:
[Match]
MACAddress=00:1e:67:fd:06:bc

[Link]
Name=eth0
/etc/systemd/network/10-eth1.link:
Code:
[Match]
MACAddress=00:1e:67:fd:06:bd

[Link]
Name=eth1
/etc/systemd/network/10-eth2.link:
Code:
[Match]
MACAddress=00:1e:67:9b:f1:38

[Link]
Name=eth2
/etc/systemd/network/10-eth3.link:
Code:
[Match]
MACAddress=00:1e:67:9b:f1:39

[Link]
Name=eth3

As recommended, I also removed /etc/udev/rules.d/70-persistent-net.rules, since it conflicts with the .link files above. My network interfaces are now named correctly, and the following network configuration yields the desired result:
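A quick sanity check of the resulting name-to-MAC mapping (nothing Proxmox-specific, just reading sysfs):
Code:
grep . /sys/class/net/eth*/address
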
/etc/network/interfaces
Code:
auto lo
iface lo inet loopback

auto bond0
iface bond0 inet manual
        slaves eth0,eth1
        bond_miimon 100
        bond_mode 802.3ad
        bond_lacp_rate 1
        mtu 9216

auto bond1
iface bond1 inet static
        address 10.254.1.2
        netmask  255.255.255.0
        slaves eth2,eth3
        bond_miimon 100
        bond_mode 802.3ad
        bond_lacp_rate 1
        mtu 9216

auto eth0
iface eth0 inet manual
        bond-master bond0
        mtu 9216

auto eth1
iface eth1 inet manual
        bond-master bond0
        mtu 9216

auto eth2
iface eth2 inet manual
        bond-master bond1
        mtu 9216

auto eth3
iface eth3 inet manual
        bond-master bond1
        mtu 9216

auto vmbr0
iface vmbr0 inet static
        address 198.19.17.22
        netmask 255.255.255.240
        gateway 198.19.17.17
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
        mtu 9216

bond1 carries Ceph storage replication, while virtual machines attach to vmbr0, which runs on top of bond0.

I can successfully start virtual machines when no VLAN tag is specified, but the moment I associate a virtual network interface with a VLAN tag, as in the following configuration, the VM fails to start:
/etc/pve/nodes/kvm5a/qemu-server/101.conf:
Code:
bootdisk: virtio0
cores: 2
ide2: none,media=cdrom
memory: 2048
name: test
net0: virtio=22:72:AD:78:FE:60,bridge=vmbr0,tag=44
numa: 1
ostype: l26
scsihw: virtio-scsi-pci
smbios1: uuid=be2813f1-5c27-4e4e-be5f-71ec2ab3e943
sockets: 1
virtio0: virtuals:vm-101-disk-1,size=80G

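With tag=44 on a traditional (non-VLAN-aware) Linux bridge, Proxmox creates a VLAN sub-interface on the bridge port plus a per-VLAN bridge for the tap device. The exact steps live in /var/lib/qemu-server/pve-bridge; the following is only a rough hand-written approximation of what it attempts here:
Code:
# approximate equivalent of pve-bridge for bridge=vmbr0,tag=44
ip link add link bond0 name bond0.44 type vlan id 44
brctl addbr vmbr0v44
brctl addif vmbr0v44 bond0.44
ip link set bond0.44 up
ip link set vmbr0v44 up
# tap101i0 is then attached to vmbr0v44 instead of vmbr0
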
Watching 'journalctl -f' while starting the virtual machine:
Code:
Jul 30 21:57:26 kvm5a pvedaemon[5942]: start VM 101: UPID:kvm5a:00001736:0000D3EA:579D06A6:qmstart:101:davidh@pam:
Jul 30 21:57:26 kvm5a kernel: Key type ceph registered
Jul 30 21:57:26 kvm5a kernel: libceph: loaded (mon/osd proto 15/24)
Jul 30 21:57:26 kvm5a kernel: rbd: loaded (major 250)
Jul 30 21:57:26 kvm5a kernel: libceph: client54101 fsid a3f1c21f-f883-48e0-9bd2-4f869c72b17d
Jul 30 21:57:26 kvm5a kernel: libceph: mon1 10.254.1.3:6789 session established
Jul 30 21:57:26 kvm5a kernel: rbd: rbd0: added with size 0x1400000000
Jul 30 21:57:26 kvm5a systemd[1]: Failed to reset devices.list on /system.slice: Invalid argument
Jul 30 21:57:26 kvm5a kernel: device tap101i0 entered promiscuous mode
Jul 30 21:57:26 kvm5a kernel: 8021q: 802.1Q VLAN Support v1.8
Jul 30 21:57:26 kvm5a kernel: 8021q: adding VLAN 0 to HW filter on device eth2
Jul 30 21:57:26 kvm5a kernel: 8021q: adding VLAN 0 to HW filter on device eth3
Jul 30 21:57:26 kvm5a kernel: 8021q: adding VLAN 0 to HW filter on device eth0
Jul 30 21:57:26 kvm5a kernel: 8021q: adding VLAN 0 to HW filter on device eth1
Jul 30 21:57:26 kvm5a kernel: 8021q: adding VLAN 0 to HW filter on device bond0
Jul 30 21:57:26 kvm5a kernel: 8021q: adding VLAN 0 to HW filter on device bond1
Jul 30 21:57:26 kvm5a kernel: rename11: renamed from bond0.44
Jul 30 21:57:26 kvm5a systemd-udevd[5954]: renamed network interface bond0.44 to rename11
Jul 30 21:57:27 kvm5a pvedaemon[5942]: start failed: command '/usr/bin/kvm -id 101 -chardev 'socket,id=qmp,path=/var/run/qemu-server/101.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -pidfile /var/run/qemu-server/101.pid -daemonize -smbios 'type=1,uuid=be2813f1-5c27-4e4e-be5f-71ec2ab3e943' -name test -smp '2,sockets=1,cores=2,maxcpus=2' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000' -vga cirrus -vnc unix:/var/run/qemu-server/101.vnc,x509,password -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 2048 -object 'memory-backend-ram,id=ram-node0,size=2048M' -numa 'node,nodeid=0,cpus=0-1,memdev=ram-node0' -k en-us -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:f8bb29916' -drive 'if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -drive 'file=/dev/rbd/rbd/vm-101-disk-1,if=none,id=drive-virtio0,format=raw,cache=none,aio=native,detect-zeroes=on' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap101i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=22:72:AD:78:FE:60,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300'' failed: exit code 1
 
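The relevant lines are the two renames just before the failure: bond0.44 inherits bond0's MAC address (which in turn is eth0's), so it matches the MACAddress-only [Match] in 10-eth0.link and systemd-udevd tries to rename it. The name eth0 is already taken, the device ends up as rename11, pve-bridge can no longer find bond0.44, and the VM start fails. This is easy to reproduce outside of Proxmox (assuming the .link files above are in place):
Code:
# create the VLAN device by hand and see what udev does to it
ip link add link bond0 name bond0.44 type vlan id 44
ip -o link show | grep '@bond0'   # shows up renamed, e.g. rename11@bond0
ip link del dev rename11          # clean up, using whatever name it was actually given
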
I worked around this by enabling the kernel boot option 'net.ifnames=1', making the [Match] sections for the physical interfaces more specific (matching on the driver as well as the MAC address), and then adding an extra .link file that tells VLAN interfaces to keep their kernel-assigned names:

/etc/default/grub
Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet net.ifnames=1"
NB: This required 'update-grub' and a system restart
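After the reboot the option can be confirmed on the running kernel:
Code:
cat /proc/cmdline   # should now include net.ifnames=1
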

/etc/systemd/network/10-eth0.link
Code:
[Match]
MACAddress=00:1e:67:fd:06:bc
Driver=ixgbe

[Link]
Name=eth0

/etc/systemd/network/10-vlan.link
Code:
[Match]
Type=vlan

[Link]
NamePolicy=kernel
 
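With these two changes in place, repeating the manual test from above shows the VLAN device keeping its name, and starting the tagged VM should now succeed:
Code:
ip link add link bond0 name bond0.44 type vlan id 44
ip -o link show | grep 'bond0\.44'   # no longer renamed
ip link del dev bond0.44
qm start 101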
