VM interface names - numbering changed with 7th vNIC?

czechsys

PVE 7.4

Code:
/var/log# ip l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 0a:44:61:8c:42:c7 brd ff:ff:ff:ff:ff:ff
    altname enp0s18
3: ens19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 0a:12:66:62:02:25 brd ff:ff:ff:ff:ff:ff
    altname enp0s19
4: ens20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 0a:e7:33:a1:dc:82 brd ff:ff:ff:ff:ff:ff
    altname enp0s20
5: ens21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 0a:fe:c6:8b:c6:85 brd ff:ff:ff:ff:ff:ff
    altname enp0s21
6: ens22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 0a:ff:ca:eb:33:70 brd ff:ff:ff:ff:ff:ff
    altname enp0s22
7: ens23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 0a:99:74:84:8a:26 brd ff:ff:ff:ff:ff:ff
    altname enp0s23
8: ens1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 0a:d0:e4:1b:6f:2f brd ff:ff:ff:ff:ff:ff
    altname enp1s1

The 7th vNIC in the VM configuration was added as ens1 and not ens24. Why? Is it the same in 8.x?
 
Hi,
please share the output of qm config <ID>, replacing <ID> with the actual ID of your VM. Did you upgrade the qemu-server or pve-qemu-kvm package recently (see /var/log/apt/history.log)? If so, from which version to which? Did you change the VM's hardware configuration, e.g. the machine type?
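For example, something like this (just a sketch, adjust as needed; rotated logs would need zgrep on /var/log/apt/history.log.*.gz as well) should show the relevant upgrade entries, if any:
Code:
# list recent install/upgrade entries for the two packages
grep -B2 -A1 -E 'qemu-server|pve-qemu-kvm' /var/log/apt/history.log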

P.S. Note that Proxmox VE 7 has been end-of-life for nearly a year now:
https://pve.proxmox.com/wiki/Upgrade_from_7_to_8
 
Code:
agent: 1
boot: cdn
bootdisk: scsi0
cores: 2
cpu: host
ide2: none,media=cdrom
ipconfig1: ip=SOMEIPV4/24,gw=SOMEIPV4
memory: 4096
name: SOMEFQDN
net0: virtio=0A:44:61:8C:42:C7,bridge=vmbr0,tag=VLANID
net1: virtio=0A:12:66:62:02:25,bridge=vmbr0,tag=VLANID
net2: virtio=0A:E7:33:A1:DC:82,bridge=vmbr0,tag=VLANID
net3: virtio=0A:FE:C6:8B:C6:85,bridge=vmbr0,tag=VLANID
net4: virtio=0A:FF:CA:EB:33:70,bridge=vmbr0,tag=VLANID
net5: virtio=0A:99:74:84:8A:26,bridge=vmbr0,tag=VLANID
net6: virtio=0A:D0:E4:1B:6F:2F,bridge=vmbr0,tag=VLANID
numa: 0
onboot: 1
ostype: l26
protection: 1
scsi0: vg-dorado-dev:vm-233-disk-0,discard=on,size=10G
scsihw: virtio-scsi-pci
smbios1: uuid=e594697d-16f0-4e87-ae7f-68503e4a6c02
sockets: 1
tags: SOMETAG

We didn't change the machine type etc.; the only changes in the VM config were CPU/RAM/NICs.

The last PVE update was on 4.10.2024, with a reboot of all cluster hosts. We know it's currently EOL, but our update cycles were postponed. Migration to 8.x is planned.
Code:
pve-manager: 7.4-19 (running version: 7.4-19/f98bf8d4)
pve-kernel-5.15: 7.4-15
pve-kernel-5.15.158-2-pve: 5.15.158-2
pve-kernel-5.15.143-1-pve: 5.15.143-1
pve-kernel-5.15.116-1-pve: 5.15.116-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
ceph-fuse: 15.2.17-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx4
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4.3
libpve-apiclient-perl: 3.2-2
libpve-common-perl: 7.4-2
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.3.0
libpve-rs-perl: 0.7.7
libpve-storage-perl: 7.4-4
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
openvswitch-switch: 2.15.0+ds1-2+deb11u5
proxmox-backup-client: 2.4.7-1
proxmox-backup-file-restore: 2.4.7-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.7.4
pve-cluster: 7.3-3
pve-container: 4.4-7
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-4~bpo11+3
pve-firewall: 4.3-5
pve-firmware: 3.6-6
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.10-1
pve-xtermjs: 4.16.0-2
qemu-server: 7.4-6
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.15-pve1
 
Oh, so there was no change regarding the software at all. Do you mean you're just surprised that the 7th NIC got a different name inside the VM, not that an existing name changed to something else? The reason for that is simply the different locations on the PCI buses:
https://git.proxmox.com/?p=qemu-ser...09493db5db6184e172079e3332e087af;hb=HEAD#l170
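You can see this from inside the guest as well; one way (a sketch using sysfs, which any recent Linux kernel exposes) is to resolve each interface's device symlink, since the PCI address shows up in the resolved path. In the output above, net0-net5 sit in slots 18-23 (0x12-0x17) on PCI bus 0, hence ens18-ens23, while the 7th NIC lands on bus 1, slot 1, hence ens1/enp1s1:
Code:
# map each predictable interface name to its PCI address via sysfs
for i in /sys/class/net/en*; do
    printf '%s -> %s\n' "$(basename "$i")" "$(readlink -f "$i/device")"
done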

While the following is written for the host, you could also use it inside the VM if you have systemd: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#network_override_device_names
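As a minimal sketch of such a .link file inside the VM (the file name and the interface name nic6 are just examples; the MAC is the one of net6 from the config above):
Code:
# /etc/systemd/network/10-nic6.link -- example only
[Match]
MACAddress=0a:d0:e4:1b:6f:2f

[Link]
Name=nic6
The rename typically takes effect on the next boot; depending on the distribution you may also need to refresh the initramfs, as described in the linked guide.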
 
Yes, it's a newly added NIC, and I was surprised by the different name. Looking into that git source - well, that's not good news, especially with IaC: which human will remember/count interfaces? Before the change it's a fixed number, so it's not possible to iterate over them in a loop, etc. Yes, we are using systemd, but using .link files would require hard-coded MACs, for example (plus more complicated scripts), and I avoid such hard-coding where I can.
Such (undocumented) unexpected minor complications.
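For reference, systemd .link files can also match on the persistent PCI path instead of the MAC (see the Path= option in systemd.link(5)), which avoids hard-coding MACs. A sketch; the path below is inferred from the enp1s1 altname above and would need to be verified in the guest, e.g. with udevadm info /sys/class/net/ens1:
Code:
# /etc/systemd/network/10-nic6.link -- sketch matching by PCI path
[Match]
Path=pci-0000:01:01.0

[Link]
Name=nic6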