lost network after 'kubeadm init'

huky

Active Member
Jul 1, 2016
47
1
28
39
Chongqing, China
I want to try Kubernetes on a cluster of three PVE 6.2 nodes.
After running 'kubeadm init', the network was lost.
I rebooted and purged all kube* packages, but the network still does not come up.
I use Open vSwitch; when I configure it manually on the command line, the network seems to start, but I cannot ping any other IP.
How can I recover my network?
 

wolfgang

Proxmox Staff Member
Staff member
Oct 1, 2014
5,963
397
103
Hi,

it makes no sense to install Kubernetes directly on Proxmox VE.
Both configure the host, and they interfere with each other.
If you want to test it, use a VM or CT.

To solve your problem, purge Kubernetes from the Proxmox VE host.
Then check the network config, which is located at /etc/network/interfaces.
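For example, something along these lines (only a sketch; the exact package names depend on how Kubernetes was installed, and ifreload -a assumes ifupdown2):
Code:
# remove the Kubernetes packages and their configuration
apt purge kubeadm kubelet kubectl kubernetes-cni
apt autoremove
# flush iptables rules left behind by kube-proxy and the CNI plugin
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# re-apply /etc/network/interfaces
ifreload -a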
 

huky

Active Member
Jul 1, 2016
47
1
28
39
Chongqing, China
I had already been running Docker on PVE.
So I won't try k8s on the host again.
I have purged Kubernetes from the PVE host,
but the network is still broken.
 

wolfgang

Proxmox Staff Member
Staff member
Oct 1, 2014
5,963
397
103
Can you send the output of the following commands?
Code:
pveversion -v
ip a
cat /etc/network/interfaces
If there are any public IPs in the output, you should mask them.
 

huky

Active Member
Jul 1, 2016
47
1
28
39
Chongqing, China
There are three nodes (node11, node12, node13) with the same config.
The cluster has been running for about one year.
After 'kubeadm init' on node11, the whole OVS network was lost.

root@node011:~# pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.41-1-pve)
pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
pve-kernel-5.4: 6.2-2
pve-kernel-helper: 6.2-2
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.41-1-pve: 5.4.41-1
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph: 14.2.9-pve1
ceph-fuse: 14.2.9-pve1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 2.0.1-1+pve8
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-2
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-1
pve-cluster: 6.1-8
pve-container: 3.1-6
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-2
pve-qemu-kvm: 5.0.0-2
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1
This is my OVS config:
# cat interfaces
auto lo
iface lo inet loopback

allow-vmbr1 bond0
iface bond0 inet manual
ovs_bonds enp9s0 enp10s0
ovs_type OVSBond
ovs_bridge vmbr1
ovs_options bond_mode=balance-slb lacp=active
pre-up ( ip link set mtu 9000 dev enp9s0 && ip link set mtu 9000 dev enp10s0 )
mtu 9000

auto vmbr1
iface vmbr1 inet manual
ovs_type OVSBridge
ovs_ports bond0 vlan254 vlan50

allow-vmbr1 vlan50
iface vlan50 inet static
address 192.168.10.11
netmask 255.255.255.0
gateway 192.168.10.254
ovs_type OVSIntPort
ovs_bridge vmbr1
ovs_options tag=50
ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
mtu 1500

allow-vmbr1 vlan254
iface vlan254 inet static
address 172.31.254.11
netmask 255.255.255.0
gateway 172.31.254.254
ovs_type OVSIntPort
ovs_bridge vmbr1
ovs_options tag=254
ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
mtu 1500
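(As a sketch, once this file is back in place it can be re-applied and the OVS state inspected roughly like this; ifreload -a assumes ifupdown2:)
Code:
# re-apply /etc/network/interfaces
ifreload -a
# verify that the bridge, bond and internal ports exist in OVS
ovs-vsctl show
# check the OVS bond status
ovs-appctl bond/show bond0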
Now I have to use a Linux bridge/bond/VLAN setup instead:
auto lo
iface lo inet loopback

iface enp9s0 inet manual

iface enp5s0f0 inet manual

iface enp10s0 inet manual

iface enp5s0f1 inet manual

auto bond0
iface bond0 inet manual
bond-slaves enp10s0 enp9s0
bond-miimon 100
bond-mode 802.3ad
mtu 9000
pre-up ( ip link set mtu 9000 dev enp10s0 && ip link set mtu 9000 dev enp9s0 )

auto vmbr1
iface vmbr1 inet manual
bridge-ports bond0
bridge-stp off
bridge-fd 0
mtu 9000

auto vmbr1.50
iface vmbr1.50 inet static
address 192.168.10.11/24
mtu 1500

auto vmbr1.254
iface vmbr1.254 inet static
address 172.31.254.11/24
gateway 172.31.254.254
mtu 1500
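The bond and bridge state on node11 can be compared with the working nodes using something like this (only a sketch; mode 802.3ad needs a matching LACP configuration on the switch ports):
Code:
# LACP negotiation status of the bond (aggregator, partner MAC, per-slave state)
cat /proc/net/bonding/bond0
# details of the bridge and its VLAN sub-interfaces
ip -d link show vmbr1
ip -d link show vmbr1.254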
There is still a confusing problem:
node12 and node13 are working normally; node11 can ONLY ping its own IP addresses (192.168.10.11, 172.31.254.11), yet the networks of the VMs and CTs on node11 are working normally o_O

root@node012:~# ping 172.31.254.254
PING 172.31.254.254 (172.31.254.254) 56(84) bytes of data.
64 bytes from 172.31.254.254: icmp_seq=1 ttl=64 time=0.516 ms
64 bytes from 172.31.254.254: icmp_seq=2 ttl=64 time=0.514 ms
64 bytes from 172.31.254.254: icmp_seq=3 ttl=64 time=0.513 ms
64 bytes from 172.31.254.254: icmp_seq=4 ttl=64 time=0.517 ms
^C
--- 172.31.254.254 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 75ms
rtt min/avg/max/mdev = 0.513/0.515/0.517/0.001 ms

root@node011:~# ping 172.31.254.254
PING 172.31.254.254 (172.31.254.254) 56(84) bytes of data.
From 172.31.254.11 icmp_seq=1 Destination Host Unreachable
From 172.31.254.11 icmp_seq=2 Destination Host Unreachable
From 172.31.254.11 icmp_seq=3 Destination Host Unreachable
From 172.31.254.11 icmp_seq=4 Destination Host Unreachable
^C
--- 172.31.254.254 ping statistics ---
5 packets transmitted, 0 received, +4 errors, 100% packet loss, time 25ms
pipe 4

ovpnf20 is a VM on node11:
[root@ovpnf20 ~]# ping 172.31.254.254
PING 172.31.254.254 (172.31.254.254) 56(84) bytes of data.
64 bytes from 172.31.254.254: icmp_seq=1 ttl=64 time=1.77 ms
64 bytes from 172.31.254.254: icmp_seq=2 ttl=64 time=0.795 ms
64 bytes from 172.31.254.254: icmp_seq=3 ttl=64 time=0.769 ms
64 bytes from 172.31.254.254: icmp_seq=4 ttl=64 time=0.822 ms
64 bytes from 172.31.254.254: icmp_seq=5 ttl=64 time=0.774 ms
^C
--- 172.31.254.254 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4004ms
rtt min/avg/max/mdev = 0.769/0.987/1.777/0.396 ms
 

wolfgang

Proxmox Staff Member
Staff member
Oct 1, 2014
5,963
397
103
This sounds like a routing problem.
The VMs/CTs do not care about this setting as long as L2 works.
What does this command tell you?

Code:
ip r
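It is also worth checking whether ARP for the gateway resolves on node11, since 'Destination Host Unreachable' reported from the host's own address usually means a failed ARP lookup on the directly connected network rather than a missing route. A sketch of such checks:
Code:
# routing table and the route chosen for the gateway
ip r
ip r get 172.31.254.254
# ARP entry for the gateway (FAILED or INCOMPLETE means no ARP reply came back)
ip neigh show 172.31.254.254
# state of the bond carrying the VLANs
ip -d link show bond0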
 
