VLAN stop working after upgrade proxmox 5 to 6

achirkov

Hi, today I upgraded two servers from Proxmox 5 to 6.
After the reboot, VLANs stopped working. When I try ifup:
Code:
sudo ifup enp5s0.4000
RTNETLINK answers: File exists
ifup: failed to bring up enp5s0
ifup: could not bring up parent interface enp5s0
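"RTNETLINK answers: File exists" usually means the object ifup is trying to create already exists in the kernel, often a stale VLAN link left behind by a previous (partial) ifup run. A possible cleanup to try before digging deeper, assuming the interface name from this post:

Code:
# remove any stale VLAN link, then bring it up again
ip link delete enp5s0.4000
ifup enp5s0.4000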
The enp5s0 interface itself is working.
My config:
Code:
auto lo
iface lo inet loopback
iface lo inet6 loopback

auto enp5s0
iface enp5s0 inet static
  address ip
  netmask 255.255.255.248
  gateway gateway_ip
  up route add -net my_ip netmask 255.255.255.248 gw gw_ip dev enp5s0
  post-up echo 1 > /proc/sys/net/ipv4/ip_forward
  post-up echo 1 > /proc/sys/net/ipv4/conf/eno1/proxy_arp

auto vmbr0
iface vmbr0 inet static
        address  10.2.1.1
        netmask  255.255.255.0
        bridge-ports none
        bridge-stp off
        bridge-fd 0
auto enp5s0.4000
iface enp5s0.4000 inet static
        address  10.2.0.1
        netmask  255.0.0.0
        mtu 1400
        vlan-raw-device enp5s0
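As an aside, the proxy_arp post-up line above references eno1, which does not appear anywhere else in the config; that may be a leftover from an older NIC name. If the issue is related to how the VLAN is declared: with classic ifupdown, an alternative spelling uses an explicit vlanNNNN name instead of the dot suffix, which sometimes avoids name conflicts. A sketch only, assuming the vlan package is installed:

Code:
auto vlan4000
iface vlan4000 inet static
        address  10.2.0.1
        netmask  255.0.0.0
        mtu 1400
        vlan-raw-device enp5s0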
What could be the problem with the VLAN?
 
Hi,

Have you rebooted the server after upgrading?

Could you please post the output of the following commands:

Bash:
~ pveversion -v
~ ip a
~ systemctl status networking.service
 
Yes, the server has been rebooted.
$ pveversion -v
Code:
proxmox-ve: 6.3-1 (running kernel: 5.4.78-2-pve)
pve-manager: 6.3-3 (running version: 6.3-3/eee5f901)
pve-kernel-5.4: 6.3-3
pve-kernel-helper: 6.3-3
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-4.15: 5.4-19
pve-kernel-4.15.18-30-pve: 4.15.18-58
pve-kernel-4.15.18-24-pve: 4.15.18-52
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-2
libpve-guest-common-perl: 3.1-4
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-4
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.6-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-3
pve-cluster: 6.2-1
pve-container: 3.3-2
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-8
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-3
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1
ip a
Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 70:85:c2:fd:06:03 brd ff:ff:ff:ff:ff:ff
    inet XX.XX.XX.XX/29 brd XX.XX.XX.XX scope global enp5s0
       valid_lft forever preferred_lft forever
    inet6 address/64 scope link
       valid_lft forever preferred_lft forever
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 22:63:de:3c:4d:a9 brd ff:ff:ff:ff:ff:ff
    inet 10.2.1.1/24 brd 10.2.1.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 address/64 scope link
       valid_lft forever preferred_lft forever
4: veth100i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr100i0 state UP group default qlen 1000
    link/ether fe:96:b2:4b:6a:22 brd ff:ff:ff:ff:ff:ff link-netnsid 0
5: fwbr100i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 7e:99:4b:38:df:cc brd ff:ff:ff:ff:ff:ff
6: fwpr100p0@fwln100i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 22:63:de:3c:4d:a9 brd ff:ff:ff:ff:ff:ff
7: fwln100i0@fwpr100p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr100i0 state UP group default qlen 1000
    link/ether 7e:99:4b:38:df:cc brd ff:ff:ff:ff:ff:ff
8: veth101i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:1a:e9:09:b7:cf brd ff:ff:ff:ff:ff:ff link-netnsid 1
9: veth102i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:8f:95:9f:5c:8b brd ff:ff:ff:ff:ff:ff link-netnsid 2
10: veth103i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr103i0 state UP group default qlen 1000
    link/ether fe:74:ab:8d:06:31 brd ff:ff:ff:ff:ff:ff link-netnsid 3
11: fwbr103i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 86:2f:22:52:56:b7 brd ff:ff:ff:ff:ff:ff
12: fwpr103p0@fwln103i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 82:cb:05:57:ff:9e brd ff:ff:ff:ff:ff:ff
13: fwln103i0@fwpr103p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr103i0 state UP group default qlen 1000
    link/ether 86:2f:22:52:56:b7 brd ff:ff:ff:ff:ff:ff
14: tap1001i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether 4a:aa:58:f3:fc:68 brd ff:ff:ff:ff:ff:ff
status
Code:
$ systemctl status networking.service
● networking.service - Raise network interfaces
   Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Wed 2021-01-20 08:41:11 CET; 1s ago
     Docs: man:interfaces(5)
  Process: 21498 ExecStart=/sbin/ifup -a --read-environment (code=exited, status=1/FAILURE)
 Main PID: 21498 (code=exited, status=1/FAILURE)
 
New information: I recreated the VLAN from the web interface and restarted the server.
VLAN config:
Code:
auto enp5s0.2000
iface enp5s0.2000 inet static
        address 10.2.0.1/8
        mtu 1400
The interface comes up, and there are no errors when restarting the networking service or running ifup/ifdown, but I cannot ping any host on the network.
On server start I get a strange message:
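When a VLAN interface is up but nothing answers, a few checks can narrow it down. Note also that the /8 on enp5s0.2000 overlaps the /24 on vmbr0, so it is worth confirming which interface the kernel actually routes through. A command sketch, with the interface name taken from this post and 10.2.0.2 as a hypothetical peer address:

Code:
ip -d link show enp5s0.2000      # should show "vlan ... id 2000" and state UP
ip route get 10.2.0.2            # should resolve via enp5s0.2000, not vmbr0
tcpdump -ni enp5s0 vlan 2000     # watch for tagged frames on the parent NIC

If tagged frames leave the host but nothing comes back, the switch port most likely does not carry VLAN 2000.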
Code:
IPv6: ADDRCONF(NETDEV_CHANGE): enp5s0.2000: link becomes ready
But why IPv6?