[SOLVED] new kernel and network problem

Jacky Li

Member
Jan 15, 2019

Hi,

I updated my three-node PVE cluster. Two of the nodes rebooted fine with the new 5.3 kernel; those two are connected to Cisco switches. The last node is on an HP ProCurve switch, and booting it with the 5.3 kernel caused everything on that switch to drop out. I have no problem with the 5.0 kernel. I wonder what changed between the two kernels, or if something is wrong with my OVS configuration.
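
For reference, this is roughly how the NIC state can be compared between the two kernels (a minimal sketch; only eno1 comes from my config, the grep patterns are just examples):

uname -r                                   # confirm which kernel is booted
ethtool -i eno1                            # driver and firmware version in use
ethtool -k eno1 | grep -E 'vlan|gro|tso'   # offload settings whose defaults can change between kernels
dmesg | grep -i eno1                       # link flaps or driver errors after boot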

I have openvswitch-switch 2.10.0+2018.08.28+git.8ca7c82b7d+ds1-12+deb10u1 installed and here is my network configuration:

auto lo
iface lo inet loopback

auto vmbr0
allow-ovs vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports eno1 vlan1 vlan20 vlan21 vlan30 vlan40 vlan50 vlan1030 vlan2152 vlan2153 vlan2154 vlan2155 vlan2156 vlan2157 vlan2158
    mtu 9000

auto eno1
allow-vmbr0 eno1
iface eno1 inet manual
    ovs_bridge vmbr0
    ovs_type OVSPort
    ovs_options tag=1 vlan_mode=native-untagged
    mtu 9000

allow-vmbr0 vlan2156
iface vlan2156 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=2156
    ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
    address 172.16.210.34
    netmask 255.255.255.0
    gateway 172.16.210.1
    mtu 1500

# NFS communication vlan
allow-vmbr0 vlan50
iface vlan50 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=50
    ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
    address 10.10.50.17
    netmask 255.255.255.0
    mtu 1500
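
While the node is on the 5.3 kernel, the bridge state can also be checked directly from Open vSwitch to see whether the ports and tags still match the config above (a minimal sketch; only the bridge and port names come from my config):

ovs-vsctl show                          # bridges, ports and VLAN tags as OVS sees them
ovs-vsctl get Port eno1 tag vlan_mode   # confirm the native-untagged uplink settings
ovs-vsctl get Open_vSwitch . ovs_version

The pveversion -v output from that node: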


proxmox-ve: 6.1-2 (running kernel: 5.0.21-5-pve)
pve-manager: 6.1-3 (running version: 6.1-3/37248ce6)
pve-kernel-5.3: 6.0-12
pve-kernel-helper: 6.0-12
pve-kernel-5.0: 6.0-11
pve-kernel-5.3.10-1-pve: 5.3.10-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.21-3-pve: 5.0.21-7
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-5
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-9
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.1-2
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
openvswitch-switch: 2.10.0+2018.08.28+git.8ca7c82b7d+ds1-12+deb10u1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-1
pve-cluster: 6.1-2
pve-container: 3.0-14
pve-docs: 6.1-3
pve-edk2-firmware: 2.20191002-1
pve-firewall: 4.0-9
pve-firmware: 3.0-4
pve-ha-manager: 3.0-8
pve-i18n: 2.0-3
pve-qemu-kvm: 4.1.1-2
pve-xtermjs: 3.13.2-1
qemu-server: 6.1-2
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2

Any ideas? Thank you.

Jacky
 
Hi,

I changed my network configuration to drop Open vSwitch and installed ifupdown2 instead.

auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto vmbr0.2156
iface vmbr0.2156 inet static
    address 172.16.210.34
    netmask 24
    gateway 172.16.210.1

auto vmbr0.50
iface vmbr0.50 inet static
    address 10.10.50.17
    netmask 24

auto vmbr0
iface vmbr0 inet static
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 20,21,30,40,50,1030,2152-2158
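
With ifupdown2 the new configuration can be applied and checked without a reboot (a minimal sketch; only the interface names come from the config above):

ifreload -a                 # reapply /etc/network/interfaces
ifquery -a -c               # compare the running state against the config
bridge vlan show dev eno1   # confirm the VLANs allowed on the bridge port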

It is a lot cleaner and easier to read, but I am still seeing one problem: when running the latest 5.3 kernel the node shows up greyed out even though everything keeps working, while on the 5.0 kernel it is fine. This node is connected via the HP ProCurve blade switch; I have no problems with the other nodes connected via the Cisco switches.
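
Since the guests keep working, a greyed-out node usually points at pvestatd or the cluster stack rather than the data path, so these are the checks to start with (a minimal sketch, nothing here is specific to my setup):

systemctl status pvestatd pve-cluster corosync
pvecm status
journalctl -b -u corosync -u pve-cluster | tail -n 50

omping between the three nodes shows no loss on either unicast or multicast: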

root@heppvee:~# omping -c 10000 -i 0.001 -F -q heppvea heppvee heppvef
heppvea : waiting for response msg
heppvef : waiting for response msg
heppvea : joined (S,G) = (*, 232.43.211.234), pinging
heppvef : joined (S,G) = (*, 232.43.211.234), pinging
heppvef : waiting for response msg
heppvef : server told us to stop
heppvea : waiting for response msg
heppvea : server told us to stop

heppvea : unicast, xmt/rcv/%loss = 8913/8913/0%, min/avg/max/std-dev = 0.094/0.182/0.629/0.043
heppvea : multicast, xmt/rcv/%loss = 8913/8913/0%, min/avg/max/std-dev = 0.106/0.206/0.651/0.042
heppvef : unicast, xmt/rcv/%loss = 8493/8493/0%, min/avg/max/std-dev = 0.064/0.188/0.616/0.045
heppvef : multicast, xmt/rcv/%loss = 8493/8493/0%, min/avg/max/std-dev = 0.080/0.203/0.615/0.044
 
