NIC suddenly changed name

kenneth_vkd

Well-Known Member
Sep 13, 2017
Hi,
Today we experienced something strange. During the night, three of our four nodes produced this output from the daily cron jobs:
Code:
/etc/cron.daily/logrotate:
Job for pveproxy.service failed.
See "systemctl status pveproxy.service" and "journalctl -xe" for details.

When we started working in the morning, we discovered that we could not access the web interface of two of our nodes. We tried restarting pve-cluster, pvedaemon and pveproxy, but nothing helped, so we rebooted the hosts. After the reboot, the NIC attached to vmbr1 (the bridge carrying all virtual machines and the cluster traffic) had changed name from eth1 to eno2.

How can this happen when no hardware changes have been made and no firmware updates have been applied?

Output of pveversion -V:
Code:
proxmox-ve: 5.2-2 (running kernel: 4.15.18-4-pve)
pve-manager: 5.2-8 (running version: 5.2-8/fdf39912)
pve-kernel-4.15: 5.2-7
pve-kernel-4.15.18-4-pve: 4.15.18-23
pve-kernel-4.15.18-3-pve: 4.15.18-22
pve-kernel-4.15.18-2-pve: 4.15.18-21
pve-kernel-4.15.18-1-pve: 4.15.18-19
pve-kernel-4.15.17-3-pve: 4.15.17-14
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-38
libpve-guest-common-perl: 2.0-17
libpve-http-server-perl: 2.0-10
libpve-storage-perl: 5.0-25
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.2+pve1-1
lxcfs: 3.0.0-1
novnc-pve: 1.0.0-2
proxmox-widget-toolkit: 1.0-19
pve-cluster: 5.0-30
pve-container: 2.0-26
pve-docs: 5.2-8
pve-firewall: 3.0-14
pve-firmware: 2.0-5
pve-ha-manager: 2.0-5
pve-i18n: 1.0-6
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.2-1
pve-xtermjs: 1.0-5
qemu-server: 5.0-33
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.9-pve1~bpo9
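For anyone else hitting the same rename: the bridge definition in /etc/network/interfaces has to reference the new interface name before the bridge will come up again. Roughly like this (bridge name is from our setup, the address is just an example):
Code:
# /etc/network/interfaces (excerpt) - after the rename, the
# bridge-ports line must match the new interface name
auto vmbr1
iface vmbr1 inet static
        address 10.0.0.11/24       # example address, adjust to your network
        bridge-ports eno2          # was: eth1
        bridge-stp off
        bridge-fd 0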
 
The name eno2 comes from systemd's predictable network interface naming - this has been the default for a long time now. No idea why you still had eth1 ...
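If you want the name to stay stable regardless of what the kernel or udev decides, you can pin it yourself with a systemd .link file. A minimal sketch (file name and MAC address are placeholders, use the NIC's real MAC):
Code:
# /etc/systemd/network/10-lan.link  (file name is just an example)
[Match]
MACAddress=aa:bb:cc:dd:ee:ff       # placeholder - the NIC's real MAC

[Link]
Name=eth1
After creating the file, rebuilding the initramfs (update-initramfs -u) and rebooting should make udev apply it.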
 
Strange.
The three nodes are actually quite recent installs and were deployed with PVE 5.2.
Maybe it is something with the network driver, or perhaps with the installation image provided by OVH.
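One thing worth checking (this is a guess about the OVH image, not something confirmed here): some provider templates keep the old ethX names by passing net.ifnames=0 on the kernel command line. If a GRUB or kernel update dropped that flag, the interfaces would flip from ethX to enoX on the next reboot. You can see the current state with cat /proc/cmdline, and restore the old behaviour via /etc/default/grub:
Code:
# /etc/default/grub (excerpt) - re-adding the flag the image may have
# set originally; run update-grub and reboot afterwards
GRUB_CMDLINE_LINUX="net.ifnames=0"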
 
