Hi Proxmox people!
We have been happily using Proxmox VE for almost two years now. Our setup consists of two clustered 128-thread machines that together serve seven VMs (3 JupyterHubs, 3 RStudio Servers, 1 LDAP server) to our users. The cluster is connected to a bare-metal Debian 11 NFS server, which holds the user homes as well as the VM backups. The VMs themselves and their snapshots live on a RAID1 volume, Proxmox itself on another, separate RAID1. All machines are monitored by yet another dedicated Debian 11 host running Icinga2 and Munin.
Each Proxmox host has 10 NICs (6x 1G for redundant Corosync, web access and separate uplinks for the VMs; 4x 10G for the connection to the NFS server, which itself listens on a 4x10G LACP bond on the switch), which allows for quite responsive behaviour of the machines. IPs are configured statically; we do not rely on DHCP with MAC-address recognition.
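For context, the bridges on the Proxmox hosts reference the physical NIC names directly in /etc/network/interfaces, along these lines (interface names and addresses below are placeholders, not our actual values):

# /etc/network/interfaces (illustrative excerpt)
auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

So if eno1 were renamed by a new kernel/udev, vmbr0 would come up without its uplink until the file is adjusted.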
We recently upgraded the NFS and the monitoring server from Debian 10 to Debian 11, which brings up my question. Debian 10's original 4.19.x kernel initially did not recognize the 10G NICs, so we had to build the Intel i40e kernel module by hand (the same was true for the initial Proxmox 6 installation). When we upgraded the two servers mentioned above, pretty much all of the NICs were renamed (at least the new kernel recognizes the 10G NICs with its in-tree module, so no out-of-tree build is needed anymore), which led to some rework of the systemd-networkd files we use for bonding there (no bonds are set up on the Proxmox hosts). See the sketch below for how we might pin the names on those servers going forward.
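As a workaround on those two servers we are considering pinning the names with systemd .link files, which udev applies regardless of whether systemd-networkd or ifupdown manages the interface; a minimal sketch per NIC, with MAC address and name as placeholders:

# /etc/systemd/network/10-lan0.link
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=lan0

(udev applies these at boot; rebuilding the initramfs with 'update-initramfs -u' may also be needed so the rename already happens in early boot, before networking comes up.)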
So: how is Proxmox expected to handle NIC naming and NIC-to-bridge assignments during/after the upgrade? Can we assume they'll stay the same?
So far we've run the 'pve6to7 --full' script with all tests passing and three skips for features we do not use, but it gave no hints regarding networking.
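Just to be safe, before the upgrade we'll probably snapshot the current name/MAC/driver mapping so we can fix up /etc/network/interfaces afterwards if anything does get renamed; a quick sketch (plain shell, standard sysfs paths, output file name is arbitrary):

# record physical NIC name <-> MAC <-> driver before the upgrade
for n in /sys/class/net/*; do
    [ -e "$n/device" ] || continue   # skip bridges, bonds, loopback
    printf '%s %s %s\n' "$(basename "$n")" \
        "$(cat "$n/address")" \
        "$(basename "$(readlink "$n/device/driver")")"
done > /root/nic-map-before-upgrade.txt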
Looking forward to your input,
Best,
Daniel