lost network after 'kubeadm init'

huky

Active Member
Jul 1, 2016
48
1
28
39
Chongqing, China
I wanted to try Kubernetes on a cluster of three PVE 6.2 nodes.
After running 'kubeadm init', the network was lost.
I rebooted and purged all kube* packages, but the network still does not come up.
I use Open vSwitch; after reconfiguring it from the command line the network seems to be up, but I cannot ping any other IP.
How can I recover my network?
 

wolfgang

Proxmox Staff Member
Staff member
Oct 1, 2014
5,966
398
103
Hi,

it makes no sense to install Kubernetes directly on a Proxmox VE host.
Both configure the host network, and they interfere with each other.
If you want to test Kubernetes, use a VM or CT.

To solve your problem, purge Kubernetes from the Proxmox VE host.
Then check the network config, which is located at /etc/network/interfaces.
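As a rough sketch, the cleanup could look like this (package names assume kubeadm was installed from the upstream Kubernetes apt repository; adjust to your setup):

```shell
# Tear down cluster state, if kubeadm is still installed
kubeadm reset -f || true
# Remove the Kubernetes packages
apt-get purge -y kubeadm kubelet kubectl kubernetes-cni
apt-get autoremove -y
# kubeadm/kube-proxy leave iptables rules behind; 'kubeadm reset' itself
# warns that these have to be flushed manually
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# Then re-apply /etc/network/interfaces (ifupdown2 is installed on this host)
ifreload -a
```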
 

huky

I had already run Docker on PVE, so I won't try k8s on the host again.
I have purged Kubernetes from PVE, but the network is still down.
 

wolfgang

Can you send the output of
Code:
pveversion -v
ip a
cat /etc/network/interfaces
If there are any public IPs in the output, you should mask them.
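For example, one simple way to mask every IPv4 address before posting (this masks private addresses too; trim the result by hand if needed):

```shell
# Replace every dotted-quad IPv4 address in the output with x.x.x.x
ip a | sed -E 's/([0-9]{1,3}\.){3}[0-9]{1,3}/x.x.x.x/g'
```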
 

huky

There are three nodes (node11, node12, node13) with the same config.
The cluster has been running for about a year.
After 'kubeadm init' on node11, the whole OVS network was lost.

root@node011:~# pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.41-1-pve)
pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
pve-kernel-5.4: 6.2-2
pve-kernel-helper: 6.2-2
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.41-1-pve: 5.4.41-1
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph: 14.2.9-pve1
ceph-fuse: 14.2.9-pve1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 2.0.1-1+pve8
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-2
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-1
pve-cluster: 6.1-8
pve-container: 3.1-6
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-2
pve-qemu-kvm: 5.0.0-2
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1
This is my OVS config:
# cat interfaces
auto lo
iface lo inet loopback

allow-vmbr1 bond0
iface bond0 inet manual
ovs_bonds enp9s0 enp10s0
ovs_type OVSBond
ovs_bridge vmbr1
ovs_options bond_mode=balance-slb lacp=active
pre-up ( ip link set mtu 9000 dev enp9s0 && ip link set mtu 9000 dev enp10s0 )
mtu 9000

auto vmbr1
iface vmbr1 inet manual
ovs_type OVSBridge
ovs_ports bond0 vlan254 vlan50

allow-vmbr1 vlan50
iface vlan50 inet static
address 192.168.10.11
netmask 255.255.255.0
gateway 192.168.10.254
ovs_type OVSIntPort
ovs_bridge vmbr1
ovs_options tag=50
ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
mtu 1500

allow-vmbr1 vlan254
iface vlan254 inet static
address 172.31.254.11
netmask 255.255.255.0
gateway 172.31.254.254
ovs_type OVSIntPort
ovs_bridge vmbr1
ovs_options tag=254
ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
mtu 1500
For now I have switched to a Linux bridge/bond/VLAN config:
auto lo
iface lo inet loopback

iface enp9s0 inet manual

iface enp5s0f0 inet manual

iface enp10s0 inet manual

iface enp5s0f1 inet manual

auto bond0
iface bond0 inet manual
bond-slaves enp10s0 enp9s0
bond-miimon 100
bond-mode 802.3ad
mtu 9000
pre-up ( ip link set mtu 9000 dev enp10s0 && ip link set mtu 9000 dev enp9s0 )

auto vmbr1
iface vmbr1 inet manual
bridge-ports bond0
bridge-stp off
bridge-fd 0
mtu 9000

auto vmbr1.50
iface vmbr1.50 inet static
address 192.168.10.11/24
mtu 1500

auto vmbr1.254
iface vmbr1.254 inet static
address 172.31.254.11/24
gateway 172.31.254.254
mtu 1500
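To check whether this bond and the VLAN interfaces actually came up as intended, something like the following can help (interface names as in the config above):

```shell
# Did the 802.3ad bond negotiate LACP with the switch?
grep -E 'Bonding Mode|MII Status|Aggregator ID' /proc/net/bonding/bond0
# Are the VLAN sub-interfaces up with the expected addresses?
ip -br addr show vmbr1.50 vmbr1.254
```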
There is still a confusing problem:
node12 and node13 work normally, but node11 can ONLY ping its own IP addresses (192.168.10.11, 172.31.254.11), while the networks of the VMs and CTs on node11 work normally.

root@node012:~# ping 172.31.254.254
PING 172.31.254.254 (172.31.254.254) 56(84) bytes of data.
64 bytes from 172.31.254.254: icmp_seq=1 ttl=64 time=0.516 ms
64 bytes from 172.31.254.254: icmp_seq=2 ttl=64 time=0.514 ms
64 bytes from 172.31.254.254: icmp_seq=3 ttl=64 time=0.513 ms
64 bytes from 172.31.254.254: icmp_seq=4 ttl=64 time=0.517 ms
^C
--- 172.31.254.254 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 75ms
rtt min/avg/max/mdev = 0.513/0.515/0.517/0.001 ms

root@node011:~# ping 172.31.254.254
PING 172.31.254.254 (172.31.254.254) 56(84) bytes of data.
From 172.31.254.11 icmp_seq=1 Destination Host Unreachable
From 172.31.254.11 icmp_seq=2 Destination Host Unreachable
From 172.31.254.11 icmp_seq=3 Destination Host Unreachable
From 172.31.254.11 icmp_seq=4 Destination Host Unreachable
^C
--- 172.31.254.254 ping statistics ---
5 packets transmitted, 0 received, +4 errors, 100% packet loss, time 25ms
pipe 4

ovpnf20 is a vm on node11
[root@ovpnf20 ~]# ping 172.31.254.254
PING 172.31.254.254 (172.31.254.254) 56(84) bytes of data.
64 bytes from 172.31.254.254: icmp_seq=1 ttl=64 time=1.77 ms
64 bytes from 172.31.254.254: icmp_seq=2 ttl=64 time=0.795 ms
64 bytes from 172.31.254.254: icmp_seq=3 ttl=64 time=0.769 ms
64 bytes from 172.31.254.254: icmp_seq=4 ttl=64 time=0.822 ms
64 bytes from 172.31.254.254: icmp_seq=5 ttl=64 time=0.774 ms
^C
--- 172.31.254.254 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4004ms
rtt min/avg/max/mdev = 0.769/0.987/1.777/0.396 ms
 

wolfgang

This sounds like a routing problem.
The VMs/CTs do not care about this setting as long as L2 works.
What does this command tell you?

Code:
ip r
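Also, "Destination Host Unreachable" from the host itself usually means ARP resolution fails on that interface. A few things worth checking (a sketch using the interface names from your config):

```shell
# Entries stuck in INCOMPLETE/FAILED confirm that ARP is not being answered
ip neigh show dev vmbr1.254
# Do ARP requests leave the bond, and do replies come back?
tcpdump -eni bond0 arp and host 172.31.254.254
# Leftover kube-proxy/firewall rules can also drop traffic on the host
iptables -L -v -n | head -n 40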
 

huky

node12 went down today, just like node11. I had added the Ceph network (10.10.11.11-13/24) to corosync.conf.
root@node011:~# arp
Address HWtype HWaddress Flags Mask Iface
172.31.254.13 (incomplete) vmbr1.254
10.10.11.12 ether a0:36:9f:61:5e:ce C vmbr9
10.10.11.13 ether a0:36:9f:5a:bb:4e C vmbr9
172.31.254.252 (incomplete) vmbr1.254
192.168.10.62 (incomplete) vmbr1.50
10.205.1.20 ether 00:11:25:bf:fb:5b C vmbr1.100
10.205.1.2 ether 00:1c:0f:5d:6b:80 C vmbr1.100
172.31.254.254 (incomplete) vmbr1.254
10.205.1.21 ether 00:11:25:bf:fb:02 C vmbr1.100
172.31.254.12 (incomplete) vmbr1.254
root@node011:~# ip ro
default via 172.31.254.254 dev vmbr1.254 proto kernel onlink
10.10.11.0/24 dev vmbr9 proto kernel scope link src 10.10.11.11
10.205.1.0/24 dev vmbr1.100 proto kernel scope link src 10.205.1.201
10.205.63.0/24 dev vmbr1.63 proto kernel scope link src 10.205.63.201
172.31.254.0/24 dev vmbr1.254 proto kernel scope link src 172.31.254.11
192.168.10.0/24 dev vmbr1.50 proto kernel scope link src 192.168.10.11
root@node012:~# arp
Address HWtype HWaddress Flags Mask Iface
192.168.10.62 (incomplete) vmbr1.50
172.31.254.99 ether a6:eb:f8:91:d8:e4 C vmbr1.254
10.10.11.13 ether a0:36:9f:5a:bb:4e C vmbr9
172.31.254.13 (incomplete) vmbr1.254
192.168.10.252 (incomplete) vmbr1.50
192.168.10.1 (incomplete) vmbr1.50
172.31.254.252 ether 58:69:6c:33:cb:5f C vmbr1.254
192.168.10.254 (incomplete) vmbr1.50
172.31.254.254 (incomplete) vmbr1.254
10.10.11.11 ether a0:36:9f:5a:ba:ee C vmbr9
172.31.254.11 (incomplete) vmbr1.254
172.31.254.100 (incomplete) vmbr1.254
192.168.10.120 (incomplete) vmbr1.50
192.168.10.253 (incomplete) vmbr1.50
172.31.254.101 (incomplete) vmbr1.254
root@node012:~# ip ro
default via 172.31.254.254 dev vmbr1.254 proto kernel onlink
10.10.11.0/24 dev vmbr9 proto kernel scope link src 10.10.11.12
10.205.1.0/24 dev vmbr1.100 proto kernel scope link src 10.205.1.202
10.205.63.0/24 dev vmbr1.63 proto kernel scope link src 10.205.63.202
172.31.254.0/24 dev vmbr1.254 proto kernel scope link src 172.31.254.12
192.168.10.0/24 dev vmbr1.50 proto kernel scope link src 192.168.10.12
root@node013:~# arp
Address HWtype HWaddress Flags Mask Iface
172.31.254.11 ether 30:5a:3a:75:d2:d8 C vmbr1.254
172.31.254.100 ether 56:34:3f:6f:bf:57 C vmbr1.254
10.10.11.11 ether a0:36:9f:5a:ba:ee C vmbr9
172.31.254.12 ether 30:5a:3a:74:ed:f3 C vmbr1.254
10.10.11.12 ether a0:36:9f:61:5e:ce C vmbr9
172.31.254.254 ether 00:00:5e:00:01:fe C vmbr1.254
192.168.10.62 (incomplete) vmbr1.50
root@node013:~# ip ro
default via 172.31.254.254 dev vmbr1.254 proto kernel onlink
10.10.11.0/24 dev vmbr9 proto kernel scope link src 10.10.11.13
10.205.1.0/24 dev vmbr1.100 proto kernel scope link src 10.205.1.203
10.205.63.0/24 dev vmbr1.63 proto kernel scope link src 10.205.63.203
172.31.254.0/24 dev vmbr1.254 proto kernel scope link src 172.31.254.13
192.168.10.0/24 dev vmbr1.50 proto kernel scope link src 192.168.10.13
 
