No reboot with 4.4 pve kernel

yakakliker

Hello,

I've got two Dell R730s with two 10Gb network cards. I just noticed that after the 4.4.x kernel upgrade, the servers can't reboot.
I get a blocking "A start job is running for LSB: Raise network interfaces. (15min 10s / no limit)" message during server shutdown.
Has anyone already experienced this problem?
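One generic way to see what is actually blocking (not from this thread, just standard systemd tooling): enable the debug shell before rebooting, then inspect the stuck job from tty9 while the shutdown hangs.

Code:
# Standard systemd debug shell (a root shell on tty9), enabled before rebooting:
systemctl enable debug-shell.service
reboot
# While the shutdown hangs, switch to tty9 (Ctrl+Alt+F9) and check:
systemctl list-jobs          # shows which stop/start jobs are still running
ps -Af | grep -i network     # shows which networking process is stuck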
 
What kind of network cards and configuration are you using?
 
The network cards:

Code:
Ethernet controller: Broadcom Corporation NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10)

Interfaces file:

Code:
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

iface eth2 inet manual

iface eth3 inet manual

iface eth4 inet manual

iface eth5 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.XXX.191
    netmask 255.255.255.0
    gateway 192.168.XXX.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

auto vmbr1
iface vmbr1 inet manual
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0

auto vmbr2
iface vmbr2 inet manual
    bridge_ports eth2
    bridge_stp off
    bridge_fd 0

auto vmbr3
iface vmbr3 inet static
    address 192.168.252.191
    netmask 255.255.255.0
    bridge_ports eth3
    bridge_stp off
    bridge_fd 0

auto vmbr4
iface vmbr4 inet static
    address 192.168.254.191
    netmask 255.255.255.0
    bridge_ports eth4
    bridge_stp off
    bridge_fd 0

auto vmbr5
iface vmbr5 inet static
    address 192.168.253.191
    netmask 255.255.255.0
    bridge_ports eth5
    bridge_stp off
    bridge_fd 0
 
Re,

After more tests:
Install from Proxmox 4.2 ISO + dist-upgrade >> Shutdown OK with 4.4 kernel

Install from Proxmox 4.1 ISO + dist-upgrade >> Shutdown NOK with 4.4 kernel
 
Re, re,

Shutdown NOK after applying the network configuration on the Proxmox 4.2 ISO install.

The network config:

Code:
iface eth0 inet manual

iface eth1 inet manual

iface eth2 inet manual

iface eth3 inet manual

iface eth4 inet manual

iface eth5 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.150.191
    netmask 255.255.255.0
    gateway 192.168.150.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

auto vmbr1
iface vmbr1 inet manual
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0

auto vmbr2
iface vmbr2 inet manual
    bridge_ports eth2
    bridge_stp off
    bridge_fd 0

auto vmbr3
iface vmbr3 inet static
    address 192.168.252.191
    netmask 255.255.255.0
    bridge_ports eth3
    bridge_stp off
    bridge_fd 0

auto vmbr4
iface vmbr4 inet static
    address 192.168.254.191
    netmask 255.255.255.0
    bridge_ports eth4
    bridge_stp off
    bridge_fd 0

auto vmbr5
iface vmbr5 inet static
    address 192.168.153.191
    netmask 255.255.255.0
    bridge_ports eth5
    bridge_stp off
    bridge_fd 0
 
ps -Af during shutdown:

Code:
root 3285 1 0 11:27 ? 00:00:00 /bin/sh -e /etc/init.d/networking stop
root 3288 3285 0 11:27 ? 00:00:00 ifdown -a --exclude=lo
root 3318 3288 0 11:27 ? 00:00:00 /bin/sh -c run-parts /etc/network/if-post-down.d
root 3319 3318 0 11:27 ? 00:00:00 run-parts /etc/network/if-post-down.d
root 3320 3319 0 11:27 ? 00:00:00 /bin/sh /etc/network/if-post-down.d/bridge
root 3323 3320 99 11:27 ? 00:00:25 /bin/sh /etc/network/if-post-down.d/bridge
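Since the stop job runs with "no limit", a possible stopgap (untested, and it assumes the hanging unit is the LSB-generated networking service) is a systemd drop-in that caps the stop timeout so shutdown cannot block forever:

Code:
# Untested stopgap: cap the stop timeout of the networking service so a
# hung ifdown cannot block shutdown indefinitely (unit name is an assumption).
mkdir -p /etc/systemd/system/networking.service.d
cat > /etc/systemd/system/networking.service.d/timeout.conf <<'EOF'
[Service]
TimeoutStopSec=30
EOF
systemctl daemon-reload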
 
After more tests, the hang comes from the 10Gb Ethernet card.

With the Broadcom 10Gb card: no shutdown
Without it: shutdown OK
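To compare a failing host with a working one, the driver and firmware versions of the suspect card could be checked, e.g.:

Code:
# Driver/firmware checks for the suspect port (eth0 here is an assumption):
ethtool -i eth0       # reports driver (bnx2x), driver version and firmware-version
modinfo bnx2x | head  # module version and the firmware blobs it expects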
 
We have exactly the same problem, however with an HP ProLiant DL360 Gen9.

The 10Gb network card is an:

HP FlexFabric 10Gb 2-port 533FLR-T Adapter
 

Attachments

  • report-prx2.txt (32.9 KB)
  • report-prx2-modinfo.txt (1.3 KB)
  • report-prx2-modinfo2.txt (2.2 KB)
I have a similar problem: when I try to reboot the server, it says "[ *** ] A stop job is running for LSB: raise network interfa...52s / no limit).".
Still looking for the cause. I do have a 10Gbit card in the server and it is a Broadcom, but I don't think it is the same chipset as the ones you have.

Server is an HP DL380 G8.
Network card is an HP:

Code:
root@rdc-prox10:~# lspci -vs 07:00.0
07:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10)
Subsystem: Hewlett-Packard Company Ethernet 10Gb 2-port 530T Adapter
Physical Slot: 2
Flags: bus master, fast devsel, latency 0, IRQ 51
Memory at f6000000 (64-bit, prefetchable) [size=8M]
Memory at f5800000 (64-bit, prefetchable) [size=8M]
Memory at f57f0000 (64-bit, prefetchable) [size=64K]
[virtual] Expansion ROM at f0000000 [disabled] [size=512K]
Capabilities: [48] Power Management version 3
Capabilities: [50] Vital Product Data
Capabilities: [a0] MSI-X: Enable+ Count=32 Masked-
Capabilities: [ac] Express Endpoint, MSI 00
Capabilities: [100] Advanced Error Reporting
Capabilities: [13c] Device Serial Number 5c-b9-01-ff-fe-de-1d-f0
Capabilities: [150] Power Budgeting <?>
Capabilities: [160] Virtual Channel
Capabilities: [1b8] Alternative Routing-ID Interpretation (ARI)
Capabilities: [1c0] Single Root I/O Virtualization (SR-IOV)
Capabilities: [220] #15
Capabilities: [300] #19
Kernel driver in use: bnx2x

Running versions:

Code:
proxmox-ve: 4.2-56 (running kernel: 4.4.13-1-pve)
pve-manager: 4.2-15 (running version: 4.2-15/6669ad2c)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.4.13-1-pve: 4.4.13-56
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-42
qemu-server: 4.0-83
pve-firmware: 1.1-8
libpve-common-perl: 4.0-70
libpve-access-control: 4.0-16
libpve-storage-perl: 4.0-55
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-19
pve-container: 1.0-70
pve-firewall: 2.0-29
pve-ha-manager: 1.0-32
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5.7-pve10~bpo80
 
I also upgraded a host from Proxmox 3 to 4 and encountered the same problem with the Broadcom/QLogic BCM57840 NetXtreme II 10 Gigabit Ethernet (rev 11) using the bnx2x driver.

Code:
proxmox-ve: 4.2-58 (running kernel: 4.4.13-2-pve)
pve-manager: 4.2-17 (running version: 4.2-17/e1400248)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.4.13-2-pve: 4.4.13-58
lvm2: 2.02.116-pve2
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-43
qemu-server: 4.0-85
pve-firmware: 1.1-8
libpve-common-perl: 4.0-71
libpve-access-control: 4.0-18
libpve-storage-perl: 4.0-56
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-qemu-kvm: 2.6-1
pve-container: 1.0-71
pve-firewall: 2.0-29
pve-ha-manager: 1.0-33
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5.7-pve10~bpo80
 
Small update: I don't have the problem when bonding two interfaces together through LACP. A reboot then works fine.
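For reference, a minimal sketch of what such an LACP bond plus bridge might look like in /etc/network/interfaces (interface names, addresses, and bond options are assumptions for illustration, not taken from the poster's setup):

Code:
# Hypothetical LACP (802.3ad) bond of eth0+eth1, bridged as vmbr0.
auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100

auto vmbr0
iface vmbr0 inet static
    address 192.168.150.191
    netmask 255.255.255.0
    gateway 192.168.150.1
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0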
 
Hi all,

I have the same problem with a new server, a Dell PE R730, where I replaced the default network card with a Broadcom 57800 (two 10Gb Base-T ports + two 1Gb Base-T ports):

Code:
# lspci | grep Broadcom
01:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM57800 1/10 Gigabit Ethernet (rev 10)
01:00.1 Ethernet controller: Broadcom Corporation NetXtreme II BCM57800 1/10 Gigabit Ethernet (rev 10)
01:00.2 Ethernet controller: Broadcom Corporation NetXtreme II BCM57800 1/10 Gigabit Ethernet (rev 10)
01:00.3 Ethernet controller: Broadcom Corporation NetXtreme II BCM57800 1/10 Gigabit Ethernet (rev 10)
03:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10)
03:00.1 Ethernet controller: Broadcom Corporation NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10)

It hangs at halt/reboot with the same message:
"A stop job is running for LSB: Raise network interfaces (time displayed / no limit)"

Code:
# pveversion
pve-manager/4.2-18/158720b9 (running kernel: 4.4.16-1-pve)
 
I'm in to report the same issue with Proxmox 4+. Every single HP DL380 Gen9 with 10G Broadcom cards has this issue, and I've yet to resolve it.
 
We had the same problem. This seems like a kernel bug, but we could solve it by moving from "legacy bridges" to Open vSwitch. Actually we had wanted to switch to Open vSwitch for some time; now we gave it a try, and reboots work again without a problem. Maybe you might give this a try?
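For anyone wanting to try this, a minimal sketch of an equivalent Open vSwitch setup in /etc/network/interfaces, assuming the openvswitch-switch package is installed (addresses reused from the configs above):

Code:
# Minimal Open vSwitch sketch replacing the Linux bridge vmbr0.
# Assumes openvswitch-switch is installed; addresses reused from above.
allow-vmbr0 eth0
iface eth0 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr0

auto vmbr0
iface vmbr0 inet static
    address 192.168.150.191
    netmask 255.255.255.0
    gateway 192.168.150.1
    ovs_type OVSBridge
    ovs_ports eth0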
 
