Hi,
I updated my PVE cluster of three nodes. Two of the nodes rebooted fine with the new 5.3 kernel; those two are connected to Cisco switches. The third node is on an HP ProCurve switch, and booting the new 5.3 kernel on that node caused everything on the switch to drop out. I have no problem with the 5.0 kernel. I wonder what changed between the two kernels, or whether something is wrong with my OVS configuration.
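In case it helps, these are the checks I plan to run on that node under each kernel to compare the OVS state (just standard ovs-vsctl / ip / journalctl commands; vmbr0 and eno1 are from my config below):

uname -r                                  # confirm which kernel is booted
systemctl status openvswitch-switch      # make sure ovsdb-server / ovs-vswitchd came up
ovs-vsctl show                            # bridge and port layout as OVS sees it
ip -d link show eno1                      # link state, MTU and master of the uplink
journalctl -b -k | grep -i -e eno1 -e openvswitch    # kernel messages for the NIC and the OVS datapath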
I have openvswitch-switch 2.10.0+2018.08.28+git.8ca7c82b7d+ds1-12+deb10u1 installed, and here is my network configuration:
auto lo
iface lo inet loopback

auto vmbr0
allow-ovs vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports eno1 vlan1 vlan20 vlan21 vlan30 vlan40 vlan50 vlan1030 vlan2152 vlan2153 vlan2154 vlan2155 vlan2156 vlan2157 vlan2158
    mtu 9000

auto eno1
allow-vmbr0 eno1
iface eno1 inet manual
    ovs_bridge vmbr0
    ovs_type OVSPort
    ovs_options tag=1 vlan_mode=native-untagged
    mtu 9000

allow-vmbr0 vlan2156
iface vlan2156 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=2156
    ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
    address 172.16.210.34
    netmask 255.255.255.0
    gateway 172.16.210.1
    mtu 1500

# NFS communication vlan
allow-vmbr0 vlan50
iface vlan50 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=50
    ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
    address 10.10.50.17
    netmask 255.255.255.0
    mtu 1500
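For what it's worth, this is how I verify that the tags, VLAN mode and MTUs actually end up in OVS the way the file says (again just ovs-vsctl / ip queries, nothing exotic):

ovs-vsctl get port eno1 tag vlan_mode     # should report 1 and native-untagged
ovs-vsctl get port vlan50 tag             # should report 50
ovs-vsctl get port vlan2156 tag           # should report 2156
ip link show vmbr0 | head -1              # MTU on the bridge (expect 9000)
ip link show eno1 | head -1               # MTU on the uplink (expect 9000)
ip link show vlan2156 | head -1           # MTU on the internal port (expect 1500)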
Output of pveversion -v:
proxmox-ve: 6.1-2 (running kernel: 5.0.21-5-pve)
pve-manager: 6.1-3 (running version: 6.1-3/37248ce6)
pve-kernel-5.3: 6.0-12
pve-kernel-helper: 6.0-12
pve-kernel-5.0: 6.0-11
pve-kernel-5.3.10-1-pve: 5.3.10-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.21-3-pve: 5.0.21-7
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-5
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-9
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.1-2
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
openvswitch-switch: 2.10.0+2018.08.28+git.8ca7c82b7d+ds1-12+deb10u1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-1
pve-cluster: 6.1-2
pve-container: 3.0-14
pve-docs: 6.1-3
pve-edk2-firmware: 2.20191002-1
pve-firewall: 4.0-9
pve-firmware: 3.0-4
pve-ha-manager: 3.0-8
pve-i18n: 2.0-3
pve-qemu-kvm: 4.1.1-2
pve-xtermjs: 3.13.2-1
qemu-server: 6.1-2
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2
Any ideas? Thank you.
Jacky