Proxmox 6.0 renames network card and it cannot be used

masgo

New Member
Jun 24, 2019
I just installed a clean Proxmox 6.0 on a machine and the network card no longer functions properly. It worked with Proxmox 5.4.

Hardware: HP server with 2x gigabit on-board card + HP NC364T PCIe (4x gigabit card, with 2x Intel 82571EB chips).

The problem is that 2 of the 4 ports get renamed on boot AND they no longer function in the GUI. "Network Type" is shown as "Unknown" for these ports. In Proxmox 5.4 these ports showed up as ens0f0 and ens0f1.

What can I do about it?

Code:
# dmesg | grep  -i eth
[    1.465004] tg3 0000:03:00.0 eth0: Tigon3 [partno(N/A) rev 5720000] (PCI Express) MAC address 00:fd:45:fc:3e:7c
[    1.465007] tg3 0000:03:00.0 eth0: attached PHY is 5720C (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[1])
[    1.465009] tg3 0000:03:00.0 eth0: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] TSOcap[1]
[    1.465011] tg3 0000:03:00.0 eth0: dma_rwctrl[00000001] dma_mask[64-bit]
[    1.483040] tg3 0000:03:00.1 eth1: Tigon3 [partno(N/A) rev 5720000] (PCI Express) MAC address 00:fd:45:fc:3e:7d
[    1.483044] tg3 0000:03:00.1 eth1: attached PHY is 5720C (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[1])
[    1.483046] tg3 0000:03:00.1 eth1: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] TSOcap[1]
[    1.483047] tg3 0000:03:00.1 eth1: dma_rwctrl[00000001] dma_mask[64-bit]
[    1.522370] tg3 0000:03:00.0 eno1: renamed from eth0
[    1.531903] tg3 0000:03:00.1 eno2: renamed from eth1
[    1.603848] e1000e 0000:09:00.0 eth0: (PCI Express:2.5GT/s:Width x4) 00:1c:c4:47:0e:b1
[    1.603851] e1000e 0000:09:00.0 eth0: Intel(R) PRO/1000 Network Connection
[    1.603931] e1000e 0000:09:00.0 eth0: MAC: 0, PHY: 4, PBA No: D90972-004
[    1.767870] e1000e 0000:09:00.1 eth1: (PCI Express:2.5GT/s:Width x4) 00:1c:c4:47:0e:b0
[    1.767873] e1000e 0000:09:00.1 eth1: Intel(R) PRO/1000 Network Connection
[    1.767966] e1000e 0000:09:00.1 eth1: MAC: 0, PHY: 4, PBA No: D90972-004
[    1.931836] e1000e 0000:0a:00.0 eth2: (PCI Express:2.5GT/s:Width x4) 00:1c:c4:47:0e:b3
[    1.931839] e1000e 0000:0a:00.0 eth2: Intel(R) PRO/1000 Network Connection
[    1.931919] e1000e 0000:0a:00.0 eth2: MAC: 0, PHY: 4, PBA No: D90972-004
[    2.091845] e1000e 0000:0a:00.1 eth3: (PCI Express:2.5GT/s:Width x4) 00:1c:c4:47:0e:b2
[    2.091847] e1000e 0000:0a:00.1 eth3: Intel(R) PRO/1000 Network Connection
[    2.091927] e1000e 0000:0a:00.1 eth3: MAC: 0, PHY: 4, PBA No: D90972-004
[    2.093805] e1000e 0000:0a:00.0 ens1f0: renamed from eth2
[    2.131583] e1000e 0000:0a:00.1 ens1f1: renamed from eth3
[    2.155876] e1000e 0000:09:00.1 rename5: renamed from eth1
[    2.175893] e1000e 0000:09:00.0 rename4: renamed from eth0
Code:
# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether 00:fd:45:fc:3e:7c brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:fd:45:fc:3e:7d brd ff:ff:ff:ff:ff:ff
4: rename4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:1c:c4:47:0e:b1 brd ff:ff:ff:ff:ff:ff
5: rename5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:1c:c4:47:0e:b0 brd ff:ff:ff:ff:ff:ff
6: ens1f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:1c:c4:47:0e:b3 brd ff:ff:ff:ff:ff:ff
7: ens1f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:1c:c4:47:0e:b2 brd ff:ff:ff:ff:ff:ff
 

Stefan_R

Proxmox Staff Member
Jun 4, 2019
Take a look in your '/etc/network/interfaces' file and change the old adapter names to the new ones (or post the file here, so I can see your configuration).
 

masgo

Take a look in your '/etc/network/interfaces' file and change the old adapter names to the new ones (or post the file here, so I can see your configuration).
The problem is/was not in the interfaces file; the interfaces there had the exact same names. This was a fresh install (I had Proxmox 5.4 on the same hardware, but removed the disk where it was installed and did a fresh install).

After a lot of googling I found the problem to be more of a Debian than a Proxmox problem (maybe still Proxmox, because it is related to the kernel).

It boils down to the predictable interface names, which for some reason fall back to names like "rename4". Finding help was a pain, since Debian and Ubuntu have different default settings and officially supported methods for handling interface names. Other distributions differ even more.

In the end I am using
Code:
net.ifnames=0
as a GRUB parameter, which leads to the interfaces being called eth0 through eth5.

Using
Code:
net.ifnames=1
, as suggested in other threads here, did not change anything.
The
Code:
/lib/udev/rules.d/75-persistent-net-generator.rules
method of renaming the interfaces is now deprecated in Debian (see
Code:
zless /usr/share/doc/udev/README.Debian.gz
)
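For reference, that kernel parameter belongs on the GRUB command line in /etc/default/grub. A minimal sketch of the edit, shown on a sample line (the `quiet` value is just an illustrative placeholder; on a real system you would apply the same sed to /etc/default/grub, then run update-grub and reboot):

```shell
# Sketch: append net.ifnames=0 inside the quoted kernel command line.
line='GRUB_CMDLINE_LINUX_DEFAULT="quiet"'
printf '%s\n' "$line" | sed 's/"$/ net.ifnames=0"/'
# prints: GRUB_CMDLINE_LINUX_DEFAULT="quiet net.ifnames=0"
```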
 
jebbam

Sep 8, 2019
It looks like there's a variety of ways to go about solving the "consistent" name issue. The proper way appears to be the one described in that Debian README, excerpted here:

Migration to the current network interface naming scheme
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Interface names must be manually migrated to the new naming scheme before
upgrading to Debian 10 / Ubuntu 18.04 LTS. If you rely on the old names in
custom ifupdown stanzas, firewall scripts, or other networking configuration,
these will eventually need to be updated to the new names.

WARNING: This process may render your machine inaccessible through ssh. Be sure
to have physical or serial console access to the machine or a way to revert to
your existing configuration.

First, determine all relevant network interface names: those in
/etc/udev/rules.d/70-persistent-net.rules, or if that does not exist (in
the case of virtual machines), in "ip link" or /sys/class/net/.

Then for every interface name use a command like

grep -r eth0 /etc

to find out where it is being used.

Then on "real hardware" machines, rename the file to
70-persistent-net.rules.old; alternately, if you have multiple interfaces,
instead of renaming you may wish to comment out specific lines to convert a
single interface at a time.

On VMs remove the files /etc/systemd/network/99-default.link and
/etc/systemd/network/50-virtio-kernel-names.link (the latter only exists on VMs
that use virtio network devices).

Rebuild the initrd with

update-initramfs -u

and reboot. Then your system should have a new network interface name (or
names). Adjust configuration files as discovered with the grep above, and test
your system.

Repeat for each network interface name, as necessary.
 
jebbam

Sep 8, 2019
I have four interfaces on my machine, all Intel 82571EB. Two of the interfaces came up as ens2f0 and ens2f1 and I was able to configure them via the Proxmox web GUI. The other two interfaces didn't appear in the GUI. They could be seen via `ls /sys/class/net/` and/or `ip link` as "rename3" and "rename4". To fix this I created this file:

/etc/systemd/network/10-eth2.link

With these contents:

[Match]
MACAddress=00:12:34:56:78:9A

[Link]
Name=eth2

and rebooted. It was then visible in the Proxmox GUI.

I should note this is a Proxmox install added on top of a Debian Buster install, so details may vary a bit.
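To fill in the MACAddress= line for each .link file, the hardware addresses can be read straight from sysfs (a generic sketch, nothing Proxmox-specific):

```shell
# List every network interface together with its MAC address,
# so you know which MACAddress= value to put in the [Match] section.
for dev in /sys/class/net/*; do
    printf '%-12s %s\n' "$(basename "$dev")" "$(cat "$dev/address")"
done
```

If the rename already happens in the initramfs, the new .link file may also need an `update-initramfs -u` before the reboot, as the Debian README quoted earlier notes.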
 

masgo

I have four interfaces on my machine, all Intel 82571EB. Two of the interfaces came up as ens2f0 and ens2f1 and I was able to configure them via the Proxmox web GUI. The other two interfaces didn't appear in the GUI. They could be seen via `ls /sys/class/net/` and/or `ip link` as "rename3" and "rename4". To fix this I created this file:
Seems like the exact same problem I had, including the same Intel chip. Since I did a clean Proxmox 6 install and you used Debian, I am guessing the error is unrelated to Proxmox. Still, the question remains why Debian does not name the interfaces properly but falls back to the "renameX" names, which makes the interfaces unusable.
 

Hartwin

New Member
May 4, 2018
Seems like the exact same problem as I had, including that you have the same Intel Chip. Since I did a clean Proxmox 6 install and you used Debian, I am guessing that the error is unrelated to proxmox. Still, the question remains why Debian does not name the interfaces properly but falls back to the "renameX" names which makes the interfaces unusable.
Debian can't imagine that there is more than one MAC per port ... but these Intel cards do exactly this. It seems Intel simply put two network controllers on one PCIe card, each with two network ports. The following is the view of such a device from iLO 4. Formerly, all the ports had a name: enp41s0f0, enp41s0f1, enp42s0f0, enp42s0f1. Now two are renamed (ens8f0, ens8f1) and two are 'rename8' and 'rename9'. It looks like formerly the path was used, not the slot.

Adapter 3 - HP NC364T PCIe Quad Port Gigabit Server Adapter
Location: Slot 8
Firmware: 5.12-2
Status: OK

Network Ports
Port  MAC Address        IPv4  IPv6                                                           Status   Team/Bridge
1     00:24:81:80:9c:2d  N/A   2001:16b8:4860:9700:224:81ff:fe80:9c2d/64 fe80::224:81ff:fe80:9  OK       N/A
1     00:24:81:80:9c:2f  N/A                                                                   Unknown  N/A
2     00:24:81:80:9c:2c  N/A   2001:16b8:4860:9700:224:81ff:fe80:9c2c/64 fe80::224:81ff:fe80:9  OK       N/A
2     00:24:81:80:9c:2e  N/A                                                                   Unknown  N/A

Since I tinkered a bit with the ifup scripts (trusting that the naming of the network ports would never, ever change, as promised), I will go with the /etc/systemd/network/....link solution using the old names. Many thanks to jebbam.

[Edit]
Because I wanted to know a bit more than just some lines of code to adopt: the default naming scheme has changed slightly.
/lib/systemd/network/99-default.link contains an ordered list in NamePolicy; the first match wins.
Before, it read: keep kernel database onboard path slot
Now it reads: keep kernel database onboard slot path
So the 4 onboard NICs keep the same names as before, but not the Intel NICs. Two controllers sit in one slot of the NC364T server adapter; both would get the same slot-based name, so one of them gets a dummy name. Refer to https://manpages.debian.org/testing/udev/systemd.link.5.en.html for an understanding of jebbam's solution.
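The first-match-wins behavior can be illustrated with a toy sketch. This is not systemd code; the variables merely mirror udev's ID_NET_NAME_* properties, with values for one port of the quad-port card:

```shell
# Toy model of NamePolicy evaluation: walk the policy list and take
# the first policy for which udev reported a candidate name.
pick_name() {
    for policy in "$@"; do
        case $policy in
            onboard) candidate=$ID_NET_NAME_ONBOARD ;;
            slot)    candidate=$ID_NET_NAME_SLOT ;;
            path)    candidate=$ID_NET_NAME_PATH ;;
            *)       candidate= ;;
        esac
        if [ -n "$candidate" ]; then echo "$candidate"; return; fi
    done
}

# One port of the quad-port card: udev reports a slot name and a path
# name, but no onboard name.
ID_NET_NAME_ONBOARD=
ID_NET_NAME_SLOT=ens8f1
ID_NET_NAME_PATH=enp42s0f1

pick_name onboard slot path   # "slot" wins: ens8f1, same for both ports -> collision
pick_name onboard path slot   # "path" wins: enp42s0f1, unique per port
```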

I'm a bit upset about such a failure. This is definitely a problem of Debian "Buster": they changed the default behavior, breaking the configuration of big deployments (these Intel-based NICs are widespread).
Really? Years of testing did not surface this problem?
 

spirit

Famous Member
Apr 2, 2010
Debian can't imagine, there are more than one MAC per Port ... these Intels do exactly this. [...]
I'm a little bit upset about such a fail. This is definitely a Problem of Debian "Buster" because they changed the default behavior [...]
Thanks for this deep analysis.

I'll try to figure out why Debian changed the order in /lib/systemd/network/99-default.link.

Funny, in Debian Stretch it was
NamePolicy=kernel database onboard slot path

On an upgraded proxmox5->6, it's still
NamePolicy=keep kernel database onboard slot path

but on a fresh proxmox6/buster:
NamePolicy=kernel database onboard path slot

I'll check with the Proxmox team whether it could be reverted by a Proxmox package.
 

Hartwin

Digging deeper, collecting knowledge, getting upset ...

It seems there is a combination of changes in Debian (namely udev) and systemd, and maybe the PVE custom kernel. All of this is only visible if network cards like these Intels are installed.

Your list of changes in 99-default.link gives a hidden hint:

funny, in debian stretch, it was
NamePolicy=kernel database onboard slot path
systemd first asks the kernel whether there are persistent names for the network interfaces. Because in late Stretch there are rules (/etc/udev/rules.d/70-persistent-net.rules, or at least /lib/udev/rules.d/75-persistent-net-generator.rules), the kernel will answer with a proper name. These udev rules seem to use ID_NET_NAME_PATH or ID_NET_NAME_ONBOARD, as shown by
udevadm test-builtin net_id /sys/class/net/somethinguseful
Because systemd got a name, it stops renaming at this point; all the following policies never apply.

I have upgraded proxmox5->6, it's still
NamePolicy=keep kernel database onboard slot path
The upgrade from 5->6 brings Buster and a newer systemd; my Proxmox 6 reports systemd --version = 241.
https://github.com/systemd/systemd/commits/master/network/99-default.link
They introduced "keep" so as not to overwrite names previously set by the kernel, udev, or something else.
But udev network naming rules have been deprecated for years. It's OK to use them, but distributions are allowed to wipe the rules files, and Proxmox did. Neither of the net rules files exists on my upgraded machine.
So from now on, systemd is the only master of NIC naming. The kernel will not answer when asked for net names. The 'database' is not used because it contains only generic info like "ID_OUI_FROM_DATABASE=Hewlett Packard". The names of onboard NICs stay the same as before because udev is asked for them.
Code:
root@pve:/sys/class/net# udevadm test-builtin net_id /sys/class/net/eno1
...
Using default interface naming scheme 'v240'.
ID_NET_NAMING_SCHEME=v240
ID_NET_NAME_MAC=enx9c8e994fac0e
ID_OUI_FROM_DATABASE=Hewlett Packard
ID_NET_NAME_ONBOARD=eno1
ID_NET_LABEL_ONBOARD=enNIC Port 1
ID_NET_NAME_PATH=enp2s0f0
But now it becomes interesting. To name NICs on PCIe cards, systemd will first use a slot-based name; it simply uses ID_NET_NAME_SLOT from the udev answer:
Code:
root@pve:/sys/class# udevadm test-builtin net_id /sys/class/net/enp42s0f1
...
Using default interface naming scheme 'v240'.
ID_NET_NAMING_SCHEME=v240
ID_NET_NAME_MAC=enx002481809c2e
ID_OUI_FROM_DATABASE=Hewlett Packard
ID_NET_NAME_PATH=enp42s0f1
ID_NET_NAME_SLOT=ens8f1
The problem here: there are two MAC addresses in the same slot. The second becomes "rename3" or the like:
Code:
root@pve:/sys/class# udevadm test-builtin net_id /sys/class/net/enp41s0f1
...
Using default interface naming scheme 'v240'.
ID_NET_NAMING_SCHEME=v240
ID_NET_NAME_MAC=enx002481809c2c
ID_OUI_FROM_DATABASE=Hewlett Packard
ID_NET_NAME_PATH=enp41s0f1
ID_NET_NAME_SLOT=ens8f1
systemd does not treat this as a failure, so it stops renaming here. Because it is fun to blame systemd, I would call it a bug: something that claims to be the real workhorse of my system should be able to detect this as a problem.

but on fresh proxmox6/buster
NamePolicy=kernel database onboard path slot

I'll look with proxmox team if it couldn't be reversed by a proxmox package.
It seems someone at Proxmox was aware of this problem: they changed the order of "slot" and "path", hiding this systemd failure. Unfortunately, they can't change it for in-place upgrades, because all the Debian stuff (including systemd) comes from repositories out of their reach. Maybe the next update of systemd will revert this workaround, reinstating the 99-default from systemd, which also has a "keep" so it does not override manual settings.
So if you ask the Proxmox team for something, please ask them to create a proper /etc/systemd/network/10-proxmox.link instead. Because this link file would simply replace the original one, they can use
Code:
[Match]
OriginalName=*
I will try this on my machine, because pinning to a specific MAC will break things after a replacement of the network adapter.
One URL in closing (it explains a lot):
https://wiki.debian.org/NetworkInterfaceNames

Thanks
Hartwin
 

spirit

Hi,
I'm not sure it's a Debian problem; I think something has changed in systemd between versions.

I have looked at the latest Stretch and Buster "udev" packages (they provide the file); both have
"NamePolicy=keep kernel database onboard slot path"
 

spirit

>>funny, in debian stretch, it was
>>NamePolicy=kernel database onboard slot path
>>
>>I have upgraded proxmox5->6, it's still
>>NamePolicy=keep kernel database onboard slot path
>>
>>but on fresh proxmox6/buster
>>NamePolicy=kernel database onboard path slot

Damn, sorry, I don't know what I said, but on

Stretch / fresh Proxmox 5 I have
NamePolicy=kernel database onboard slot path

Buster / fresh or upgraded Proxmox 6 I have
NamePolicy=keep kernel database onboard slot path

(so the file is overwritten on upgrade).
I have looked at the systemd history; it has always been "slot path".

I'll try to look at the Debian udev package history.
 

spirit

Also, I found a maybe-related systemd bug report here:

https://github.com/systemd/systemd/issues/12261


This seems really not easy; it seems kernel driver changes could also change the naming of the NICs...

Also, it seems possible to override the policy per driver, like

Code:
# /etc/systemd/network/10-e1000e-quirks.link
[Match]
Driver=e1000e

[Link]
NamePolicy=path
or by MAC address, as it worked with the good old udev persistent names:

/etc/systemd/network/10-ether0.link
Code:
[Match]
MACAddress=12:34:45:78:90:AB

[Link]
Name=ether0
I wonder whether this last one wouldn't be the easiest setup for everybody
(maybe something in the Proxmox GUI to set up this kind of persistent-naming link).
 

Hartwin

Can you please look for /etc/udev/rules.d/70-persistent-net.rules on Stretch? As far as I understand, this file does the same as the .link files of systemd, but reports the resulting names to systemd as kernel names.
Proxmox 6 (Buster) has no such file.

The rules in /lib/udev/rules.d only seem to be there so that systemd's link files get evaluated.
 

Hartwin

also, I found a maybe related systemd bug report here:

https://github.com/systemd/systemd/issues/12261


That seem really not easy, it seem that kernel driver changes could also change naming of the nic...
... and if you add another device to one of the PCIe slots, it's possible the numbering changes, breaking 'persistent' naming again.

also, it seem possible to override policy for each driver like

Code:
# /etc/systemd/network/10-e1000e-quirks.link
[Match]
Driver=e1000e

[Link]
NamePolicy=path
Replacing an Intel with something else (maybe Broadcom) would fall back to "slot path", breaking 'persistent' naming. The idea is to have names for network interfaces that do not change if the hardware has to be replaced; there are too many config settings where they are used.

or by mac address, like it's was working fine with good udev persistent name
As far as I know, it's possible to set up a Proxmox node within a virtualized environment (nesting). Maybe the MAC changes from time to time. The scenario of replacing network adapters is the same as with Driver=..., but even worse.

I wonder if this last one couldn't be the easier setup from everybody.
(maybe something in proxmox gui to setup this kind of persistent naming link)
Maybe easier, but not better: it would do a different renaming even if the adapter is replaced by an identical one. This was jebbam's solution in https://forum.proxmox.com/threads/p...ork-card-and-can-not-use-it.56690/post-266253 ... a fast hotfix to buy some time for a permanent fix.

In my opinion this would be, as mentioned:
/etc/systemd/network/10-naming-fix.link
Code:
[Match]
OriginalName=*

[Link]
NamePolicy=keep kernel database onboard path slot
MACAddressPolicy=persistent
 

Fata1W0und

New Member
Jan 21, 2020
I don't think this is limited to Intel 4-port NICs. I have a Cisco Broadcom 5709 4-port with the same issue. I have tried everything I can find to get all the ports working: drivers, firmware, etc.
 

DerGärtner

New Member
Sep 23, 2018
Hi, and thanks for this thread!
After a fresh installation of Proxmox 6.1-5 I also have trouble with my 4-port Intel network card (2 ports working fine, ens1fX, and 2 ports not working, renameX).

As jebbam mentioned, I created four .link files in /etc/systemd/network, like 10-ensXfX.link and so on.

If I type ip link in the console:
Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp11s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether 00:00:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
3: enp11s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr1 state UP mode DEFAULT group default qlen 1000
    link/ether 00:00:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
4: ens1f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr3 state UP mode DEFAULT group default qlen 1000
    link/ether 00:00:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
5: ens1f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr2 state UP mode DEFAULT group default qlen 1000
    link/ether 00:00:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
6: ens2f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr5 state UP mode DEFAULT group default qlen 1000
    link/ether 00:00:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
7: ens2f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr4 state UP mode DEFAULT group default qlen 1000
    link/ether 00:00:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
8: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq master vmbr7 state UP mode DEFAULT group default qlen 1000
    link/ether 00:00:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
9: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq master vmbr6 state UP mode DEFAULT group default qlen 1000
    link/ether 00:00:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
10: enp0s29f0u2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:00:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
11: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 00:00:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
12: vmbr7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 00:00:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
13: vmbr6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 00:00:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
14: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 00:00:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
15: vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 00:00:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
16: vmbr3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 00:00:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
17: vmbr4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 00:00:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
18: vmbr5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 00:00:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
19: tap100i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 00:00:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
??????    21: vmbr2v20: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:00:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
??????    22: rename22@ens1f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:00:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff

But now, when I start a VM, a new error appears:
Code:
Error: 8021q: VLAN device already exists.
can't add vlan tag 20 to interface ens1f0
kvm: network script /var/lib/qemu-server/pve-bridge failed with status 512
TASK ERROR: start failed: QEMU exited with code 1
another VM:
Code:
Cannot find device "ens1f1.30"
can't activate interface 'ens1f1.30'
kvm: network script /var/lib/qemu-server/pve-bridge failed with status 6400
TASK ERROR: start failed: QEMU exited with code 1

What's wrong with my setup? Can someone please help?
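One hedged diagnostic idea (not a verified fix): the leftover `rename22@ens1f0` in the ip link output above looks like a stale VLAN device, which would explain the "8021q: VLAN device already exists" error when the VM tries to add tag 20 on ens1f0. You could list the existing VLAN devices and, only after confirming it is stale, remove the leftover:

```shell
# Show all VLAN devices with their parent link and tag details.
ip -d link show type vlan
# ip link delete rename22    # only after confirming it is the stale one
```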

pveversion
Code:
proxmox-ve: 6.1-2 (running kernel: 5.3.13-2-pve)
pve-manager: 6.1-5 (running version: 6.1-5/9bf06119)
pve-kernel-5.3: 6.1-2
pve-kernel-helper: 6.1-2
pve-kernel-5.3.13-2-pve: 5.3.13-2
pve-kernel-5.3.13-1-pve: 5.3.13-1
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-5
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-10
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.1-3
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-2
pve-cluster: 6.1-3
pve-container: 3.0-18
pve-docs: 6.1-3
pve-edk2-firmware: 2.20191127-1
pve-firewall: 4.0-9
pve-firmware: 3.0-4
pve-ha-manager: 3.0-8
pve-i18n: 2.0-3
pve-qemu-kvm: 4.1.1-2
pve-xtermjs: 4.3.0-1
qemu-server: 6.1-4
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1
 

nosuch

New Member
Jul 5, 2019
Not much to add, but I have the same issue here with an Intel quad-port NIC. The same NIC in a system that was upgraded from PVE 5 to 6 still has the proper names, but on a fresh install of 6 I have two ports named rename3 and rename6. A bit of a noob, so I'm probably missing something, but did any of the steps in this thread at least mitigate the issue for now? I would like to be able to use those 2 ports, even if it requires a workaround.
 

oz1cw7yymn

Member
Feb 13, 2019
I just changed /usr/lib/systemd/network/99-default.link to NamePolicy=path; it seemed like the only way to get consistent and persistent NIC names. The currently implemented naming-policy scheme is clearly incompatible with NICs that have multiple ports in the same PCI slot.
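Rather than editing the file under /usr/lib (which a package upgrade can overwrite), a copy with the same name under /etc/systemd/network/ takes precedence. A sketch of such an override (the MACAddressPolicy line mirrors the stock file):

```ini
# /etc/systemd/network/99-default.link -- local override; a file in /etc
# with the same name shadows /usr/lib/systemd/network/99-default.link
[Link]
NamePolicy=path
MACAddressPolicy=persistent
```

As noted earlier in the thread, if the rename happens in the initramfs you may also need `update-initramfs -u` before rebooting.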
 
