[SOLVED] Network stops responding when last VM or container shuts down

bigun89

Member
Jan 28, 2022
The hypervisor is running the latest version of 7, and I was working on upgrading it to 8. I shut down all of the VMs to prepare for the upgrade. In doing so, the Proxmox host itself stopped responding through the web interface, stopped responding via SSH, didn't reply to pings, and the console itself stopped responding to keystrokes - it locked up completely.

This host only runs one container, one Windows VM, and an occasional GNS3 simulation VM that stays powered off most of the time. As soon as I shut down the Windows VM, the entire host locks up - and then I can't proceed with the upgrade.

Any help is appreciated.


*edit*

Just discovered this: it doesn't seem to matter that it's the Windows VM. I shut down the Windows VM first - no problems. I then shut down the container, and the host locked up. It's when the last VM or container is shut down.

*edit*

So, disregard the terminal not responding. The repeatable issue is that without a VM running, the hypervisor has no network connectivity. I even went so far as to disable all VMs and containers from starting at boot, and the network still doesn't function. As soon as I start the container from the CLI, the host starts responding again.
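
For anyone following along, the CLI workaround looks roughly like this (the container ID here is a placeholder; check yours with "pct list"):

Code:
# list containers and their IDs
pct list
# start the container; host network connectivity returns once it's up
pct start 100
# confirm the gateway answers again
ping -c 3 172.22.118.1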

With that workaround, the upgrade is finished. So now it's running version 8.2.4 with the same issue.
 
When all VMs/containers are shut down, what's the output of "ip a"? Have you tested whether you can still ping/traceroute outbound (to rule out only incoming traffic being blocked)? And if so, while that ping is running, can you then ping the host from outside as well? (Maybe something like a sleep mode kicks in when there's no traffic.)

Also, in general, what's the output of "cat /etc/network/interfaces"?
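
Something along these lines (1.1.1.1 is just an example external host; your gateway or anything reachable works too):

Code:
# on the host, with all guests stopped:
ip a
cat /etc/network/interfaces
ping -c 10 1.1.1.1
traceroute 1.1.1.1
# and while that ping is still running, from another machine:
ping <host-ip>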
 
So, while I was remote, I tried pinging the server as I shut down the VMs and containers, and it stopped responding. Also, from the host, I could not ping our firewall's gateway IP while they were shut down, and it certainly couldn't get to the internet. So no pings out or in, either way.

I will have to run "ip a" and get you the contents of /etc/network/interfaces when I get back to the office tomorrow.
 
Got it. The 172.22.118.88 address is the one that goes down:

Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether b8:2a:72:e0:35:56 brd ff:ff:ff:ff:ff:ff
    altname enp1s0f0
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether b8:2a:72:e0:35:57 brd ff:ff:ff:ff:ff:ff
    altname enp1s0f1
4: eno3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether b8:2a:72:e0:35:58 brd ff:ff:ff:ff:ff:ff
    altname enp2s0f0
5: enp5s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether 00:0e:1e:51:0a:40 brd ff:ff:ff:ff:ff:ff
6: eno4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether b8:2a:72:e0:35:59 brd ff:ff:ff:ff:ff:ff
    altname enp2s0f1
7: enp5s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr1 state UP group default qlen 1000
    link/ether 00:0e:1e:51:0a:42 brd ff:ff:ff:ff:ff:ff
8: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:0e:1e:51:0a:40 brd ff:ff:ff:ff:ff:ff
    inet 172.22.118.88/23 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::20e:1eff:fe51:a40/64 scope link
       valid_lft forever preferred_lft forever
9: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:0e:1e:51:0a:42 brd ff:ff:ff:ff:ff:ff
    inet 10.69.118.2/24 scope global vmbr1
       valid_lft forever preferred_lft forever
    inet6 fe80::20e:1eff:fe51:a42/64 scope link
       valid_lft forever preferred_lft forever

Code:
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

iface eno3 inet manual

iface eno4 inet manual

iface enp4s0 inet manual

iface enp5s0 inet manual

iface enp5s0f0 inet manual

iface enp5s0f1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 172.22.118.88/23
        gateway 172.22.118.1
        bridge-ports enp5s0f0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 118

auto vmbr1
iface vmbr1 inet static
        address 10.69.118.2/24
        bridge-ports enp5s0f1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 469
 
Also found this in the logs:


Code:
Jul 18 14:44:16 proxmox1 kernel: vmbr0: port 2(fwpr501p0) entered blocking state
Jul 18 14:44:16 proxmox1 kernel: vmbr0: port 2(fwpr501p0) entered disabled state
Jul 18 14:44:16 proxmox1 kernel: fwpr501p0: entered allmulticast mode
Jul 18 14:44:16 proxmox1 kernel: fwpr501p0: entered promiscuous mode
Jul 18 14:44:16 proxmox1 kernel: bnx2x 0000:05:00.0 enp5s0f0: entered promiscuous mode
Jul 18 14:44:16 proxmox1 kernel: vmbr0: port 2(fwpr501p0) entered blocking state
Jul 18 14:44:16 proxmox1 kernel: vmbr0: port 2(fwpr501p0) entered forwarding state
Jul 18 14:44:17 proxmox1 kernel: fwbr501i0: port 1(fwln501i0) entered blocking state
Jul 18 14:44:17 proxmox1 kernel: fwbr501i0: port 1(fwln501i0) entered disabled state
Jul 18 14:44:17 proxmox1 kernel: fwln501i0: entered allmulticast mode
Jul 18 14:44:17 proxmox1 kernel: fwln501i0: entered promiscuous mode
Jul 18 14:44:17 proxmox1 kernel: fwbr501i0: port 1(fwln501i0) entered blocking state
Jul 18 14:44:17 proxmox1 kernel: fwbr501i0: port 1(fwln501i0) entered forwarding state
Jul 18 14:44:17 proxmox1 kernel: fwbr501i0: port 2(veth501i0) entered blocking state
Jul 18 14:44:17 proxmox1 kernel: fwbr501i0: port 2(veth501i0) entered disabled state
Jul 18 14:44:17 proxmox1 kernel: veth501i0: entered allmulticast mode
Jul 18 14:44:17 proxmox1 kernel: veth501i0: entered promiscuous mode
Jul 18 14:44:17 proxmox1 kernel: eth0: renamed from vethqykmZH
 
So with all VMs off, a ping from Proxmox to 172.22.118.1 stops working, while a ping to 10.69.118.1 (or whatever device is on that network) keeps working?
That I'm not sure of; it's a network we use to clone machines from. But the 172.22.118.0/23 network stops working for sure.
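
Next time everything is off I can verify both sides with something like this (whether 10.69.118.1 answers at all depends on what's actually on that network):

Code:
ping -c 3 172.22.118.1   # vmbr0 gateway - this one definitely dies
ping -c 3 10.69.118.1    # device on the vmbr1 network - untested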
 
So with all VMs off, a ping from Proxmox to 172.22.118.1 stops working, while a ping to 10.69.118.1 (or whatever device is on that network) keeps working?
So, I poked around a little, and it seems that when an interface no longer has any VMs using it, Proxmox shuts that particular interface down.

For instance: the 10.69.118.2 interface on Proxmox is used by our Windows VM exclusively. When that Windows VM is shut down, that interface stops working. Keep in mind the Windows VM has an entirely different IP, 10.69.118.1.

I would expect 10.69.118.1 to stop working when the VM is powered off, but not 10.69.118.2.
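
A rough way to watch it happen from the local console (a sketch; <vmid> stands for the Windows VM's actual ID, and this has to run locally since SSH drops along with the network):

Code:
ip monitor link &                      # print link state changes as they happen
qm shutdown <vmid>                     # shut down the Windows VM
ip -br link show vmbr1                 # bridge state afterwards
ip -br link show enp5s0f1              # physical port behind vmbr1
cat /sys/class/net/enp5s0f1/carrier    # 1 = carrier up, 0 = lost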
 
So, I poked around a little, and it seems that when an interface no longer has any VMs using it, Proxmox shuts that particular interface down.

This should not happen. Is the ip address output above from when the network was working? If not, can you post the output of ip address after turning off all VMs, as well as the full journalctl output from before you turn off the VMs until after the network stops working? Can you also post the output of ip route?
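
I.e., something like:

Code:
ip address
ip route
journalctl --since "10 minutes ago"   # pick a window that spans the VM shutdown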
 
