Can those with network issues after the upgrade (e.g., through network device renames) please post some more details about their setup? At least:
- NIC models, drivers, and PCI addresses (e.g., lspci -k lists all PCI(e) devices and the kernel driver in use; something like lspci -k | grep -A3 -i ethernet might spare you from searching manually)
- motherboard or server vendor + model and CPU
- the output of ip link
- the output of pveversion -v
That would be great!
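All of that can be gathered in one go with something like the following (just a sketch; it assumes dmidecode and lscpu are available, as they usually are on a PVE host):
Code:
# run as root (dmidecode needs it); collected output lands in setup-details.txt
{
  lspci -k | grep -A3 -i ethernet
  dmidecode -s system-manufacturer
  dmidecode -s system-product-name
  lscpu | grep 'Model name'
  ip link
  pveversion -v
} > setup-details.txt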
Hi,
thanks for your feedback!
Code:
lspci -k | grep -A3 -i ethernet
02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 0c)
DeviceName: Onboard - RTK Ethernet
Subsystem: Fujitsu Technology Solutions RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller
Kernel driver in use: r8169
Kernel modules: r8169
The server vendor / model is a Fujitsu Futro S740.
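In case it helps with debugging the rename, the names udev derives for this NIC can be inspected like this (eno1 being the current name of the onboard port):
Code:
# prints the candidate names (ID_NET_NAME_ONBOARD, _PATH, _MAC, ...) udev computes
udevadm test-builtin net_id /sys/class/net/eno1 2>/dev/null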
Output of ip link (after applying the workaround I mentioned above):
Code:
ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP mode DEFAULT group default qlen 1000
link/ether 4c:52:62:ba:ed:9c brd ff:ff:ff:ff:ff:ff
altname enp2s0
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 52:0a:68:4d:d6:ad brd ff:ff:ff:ff:ff:ff
4: veth104i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000
link/ether fe:18:b9:30:3a:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
5: veth103i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000
link/ether 9a:a6:bc:69:d0:89 brd ff:ff:ff:ff:ff:ff link-netnsid 1
6: veth102i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000
link/ether fe:86:a2:58:f4:d0 brd ff:ff:ff:ff:ff:ff link-netnsid 2
7: veth101i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000
link/ether fe:55:73:e7:d3:a2 brd ff:ff:ff:ff:ff:ff link-netnsid 3
8: veth100i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000
link/ether 16:2b:ac:40:ef:f0 brd ff:ff:ff:ff:ff:ff link-netnsid 4
9: veth106i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000
link/ether fe:a2:40:ce:94:2e brd ff:ff:ff:ff:ff:ff link-netnsid 5
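For reference, one common way to pin an interface name so it survives kernel/udev upgrades (not necessarily the exact workaround I referenced above, just a sketch using the MAC address that ip link reports for eno1) is a systemd .link file:
Code:
# /etc/systemd/network/10-eno1.link  (hypothetical file name)
[Match]
MACAddress=4c:52:62:ba:ed:9c

[Link]
Name=eno1
After creating it, running update-initramfs -u -k all and rebooting makes sure the naming is also applied early in boot.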
Code:
pveversion -v
proxmox-ve: 8.0.1 (running kernel: 6.2.16-1-pve)
pve-manager: 8.0.0~8 (running version: 8.0.0~8/61cf3e3d9af500c2)
pve-kernel-6.2: 8.0.0
pve-kernel-6.2.16-1-pve: 6.2.16-1
pve-kernel-6.2.11-2-pve: 6.2.11-2
pve-kernel-6.2.6-1-pve: 6.2.6-1
ceph-fuse: 17.2.6-pve1+3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx1
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-3
libknet1: 1.25-pve1
libproxmox-acme-perl: 1.4.5
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.0
libpve-access-control: 8.0.1
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.3
libpve-guest-common-perl: 5.0.3
libpve-http-server-perl: 5.0.3
libpve-rs-perl: 0.8.3
libpve-storage-perl: 8.0.0
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
proxmox-backup-client: 2.99.0-1
proxmox-backup-file-restore: 2.99.0-1
proxmox-kernel-helper: 8.0.0
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.0.4
pve-cluster: 8.0.1
pve-container: 5.0.1
pve-docs: 8.0.1
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.1
pve-firmware: 3.7-1
pve-ha-manager: 4.0.1
pve-i18n: 3.0.2
pve-qemu-kvm: 8.0.2-2
pve-xtermjs: 4.16.0-3
qemu-server: 8.0.3
smartmontools: 7.3-1+b1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.1.11-pve2
Thanks!!