Hi,
I'm not the first here to start this kind of thread about ifupdown2 issues after installing the package, but I've read the other threads and could not find a solution for my problem.
I was creating a new VLAN-aware Linux bridge and stumbled over the ifupdown2 requirement, which promises to get rid of reboots for network changes. That sounded like a handy thing to have.
As it is recommended and ships by default with PVE 7, I installed it.
Bad mistake...
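For reference, the kind of stanza I was aiming for looks roughly like this (bridge name and port are just placeholders, this is not in my config yet, only taken from the docs):

auto vmbr2
iface vmbr2 inet manual
        bridge-ports eth0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094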
cave@cave:~$ ssh root@proxmox
Linux proxmox 5.15.83-1-pve #1 SMP PVE 5.15.83-1 (2022-12-15T00:00Z) x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Sat Jan 21 09:44:16 2023 from 192.168.100.45
root@proxmox:~# apt install ifupdown2
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
[apt autoremove suggestions removed]
Suggested packages:
ethtool python3-gvgen python3-mako
The following packages will be REMOVED:
ifupdown
The following NEW packages will be installed:
ifupdown2
0 upgraded, 1 newly installed, 1 to remove and 0 not upgraded.
Need to get 237 kB of archives.
After this operation, 1,464 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 ifupdown2 all 3.1.0-1+pmx3 [237 kB]
Fetched 237 kB in 0s (522 kB/s)
dpkg: ifupdown: dependency problems, but removing anyway as you requested:
ifenslave depends on ifupdown.
(Reading database ... 106875 files and directories currently installed.)
Removing ifupdown (0.8.36+pve2) ...
Selecting previously unselected package ifupdown2.
(Reading database ... 106839 files and directories currently installed.)
Preparing to unpack .../ifupdown2_3.1.0-1+pmx3_all.deb ...
Unpacking ifupdown2 (3.1.0-1+pmx3) ...
Setting up ifupdown2 (3.1.0-1+pmx3) ...
Installing new version of config file /etc/default/networking ...
network config changes have been detected for ifupdown2 compatibility.
Saved in /etc/network/interfaces.new for hot-apply or next reboot.
Reloading network config on first install
Progress: [ 71%] [#############################################################################################......................................]
And there it got stuck. I can no longer ping the PVE host, SSH into it, or reach the web interface. The guests with their veth interfaces on the vmbr0 bridge are still reachable, but the IP address of the vmbr0 bridge itself seems to be down. The NIC assigned to vmbr0 (bridge_ports eth1) appears to be working.
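From the shell session that is still open I assume I could double-check that state with something like this (just my guess at the relevant iproute2 commands, I have not changed anything yet):

ip -br link show eth1    # is the NIC up and does it have carrier?
ip -br addr show vmbr0   # which addresses are actually assigned to the bridge?
bridge link show         # is eth1 still enslaved to vmbr0?
ip route show            # is the default route via 192.168.100.1 still there?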
root@proxmox:~# pveversion -v
proxmox-ve: 7.3-1 (running kernel: 5.15.83-1-pve)
pve-manager: 7.3-4 (running version: 7.3-4/d69b70d4)
pve-kernel-helper: 7.3-2
pve-kernel-5.15: 7.3-1
pve-kernel-5.4: 6.4-20
pve-kernel-5.0: 6.0-11
pve-kernel-5.15.83-1-pve: 5.15.83-1
pve-kernel-5.4.203-1-pve: 5.4.203-1
pve-kernel-5.4.60-1-pve: 5.4.60-2
pve-kernel-5.0.21-5-pve: 5.0.21-10
ceph-fuse: 14.2.21-1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: not correctly installed
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.3
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.3-1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-1
libpve-guest-common-perl: 4.2-3
libpve-http-server-perl: 4.1-5
libpve-storage-perl: 7.3-1
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.3.2-1
proxmox-backup-file-restore: 2.3.2-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.0-1
proxmox-widget-toolkit: 3.5.3
pve-cluster: 7.3-2
pve-container: 4.4-2
pve-docs: 7.3-1
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-7
pve-firmware: 3.6-2
pve-ha-manager: 3.5.1
pve-i18n: 2.8-1
pve-qemu-kvm: 7.1.0-4
pve-xtermjs: 4.16.0-1
qemu-server: 7.3-2
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+2
vncterm: 1.7-1
zfsutils-linux: 2.1.7-pve3
ifupdown: residual config
ifupdown2: not correctly installed
What do these two lines mean?
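My own guess (please correct me if this is wrong) is that the interrupted reload is why dpkg considers ifupdown2 not correctly installed, and that from a local console something like this could finish the package setup, but I have not dared to run it yet:

dpkg --configure -a       # re-run any interrupted package configuration
dpkg -l | grep ifupdown   # check whether ifupdown2 now shows state 'ii'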
root@proxmox:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage part of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!
source /etc/network/interfaces.d/*
auto lo
iface lo inet loopback
iface eth1 inet manual
iface eth0 inet manual
allow-hotplug eth1
auto vmbr0
iface vmbr0 inet static
address 192.168.100.199
netmask 255.255.255.0
gateway 192.168.100.1
bridge_ports eth1
bridge_stp off
bridge_fd 0
iface vmbr0 inet6 static
address xxxx:xxxx:xxxx:xxxx::1
netmask 64
gateway xxxx:xxxx:xxxx:xxxx::1
auto vmbr1
iface vmbr1 inet static
address 192.168.100.198
netmask 255.255.255.0
bridge_ports eth0
bridge_stp off
bridge_fd 0
eth0 (a PCIe card) and vmbr1 are currently not in use because of driver issues with the r8168 since the Debian Bullseye upgrade. That card will be replaced soon anyway and is not the issue here.
eth1 is the onboard NIC and is connected to vmbr0.
root@proxmox:~# apt list --installed | grep ifup
ifupdown2/stable,now 3.1.0-1+pmx3 all [installed]
So ifupdown2 is now fully installed, or at least it seems to be.
As connectivity to my gateway is broken, I cannot easily step back to ifupdown.
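The only way back I can think of (untested, and assuming the old .deb is still in the apt cache) would be to reinstall it offline from a local console:

ls /var/cache/apt/archives/ifupdown_*.deb    # is the removed package (0.8.36+pve2) still cached?
dpkg -r ifupdown2                            # would have to go first, since the two packages conflict
dpkg -i /var/cache/apt/archives/ifupdown_*.deb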
I've removed the various veth, fwpr and fwln entries from the ifconfig output below to focus on the vmbr0 settings.
root@proxmox:~# ifconfig
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether e8:39:35:ea:43:ba txqueuelen 1000 (Ethernet)
RX packets 1199734 bytes 477009442 (454.9 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2824510 bytes 585841125 (558.7 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device interrupt 18
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 180532 bytes 58090462 (55.3 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 180532 bytes 58090462 (55.3 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vmbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.100.199 netmask 255.255.255.0 broadcast 192.168.100.255
inet6 xxxx:xxxx:xxxx:xxxx::1 prefixlen 64 scopeid 0x0<global>
inet6 xxxx::xxxx:xxxx:xxxx:xxxx prefixlen 64 scopeid 0x20<link>
ether e8:39:35:ea:43:ba txqueuelen 1000 (Ethernet)
RX packets 2878940 bytes 261862386 (249.7 MiB)
RX errors 0 dropped 173331 overruns 0 frame 0
TX packets 471180 bytes 336694224 (321.0 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vmbr1: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 192.168.100.198 netmask 255.255.255.0 broadcast 192.168.100.255
inet6 xxxx::xxxx:xxxx:xxxx:xxxx prefixlen 64 scopeid 0x20<link>
ether 22:22:f8:03:4b:bd txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 7 bytes 826 (826.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
My LXC containers connected via vmbr0 are still up and running; I can SSH into and ping them. That is why I hesitate to reboot, I don't want to make things worse.
I hope I've provided all the relevant settings and configs. It's a pretty basic setup, just eth1 connected to vmbr0, yet ifupdown2 broke things.
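One thing I noticed in the install output above: a converted config was saved to /etc/network/interfaces.new for hot-apply. If that is the intended path, I guess the commands would be something like this (again only my assumption, I have not run anything yet):

diff -u /etc/network/interfaces /etc/network/interfaces.new   # see what the converted config would change
ifreload -a                                                   # ifupdown2's way of applying network changes without a reboot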
Any idea how to fix the issue?