[SOLVED] Kubernetes : sharing of /dev/kmsg with the container

Blais

Member
Mar 28, 2017
27
1
8
Good morning, everyone.

It's all in the title: I would like the LXC container to have access to /dev/kmsg, a prerequisite for kubelet to start. Any ideas would be welcome, please.

Below is the log that leads me to believe that mounting kmsg in the container will make it work:

Dec 12 20:48:09 master kubelet[69]: I1212 20:48:09.561091 69 server.go:1113] Started kubelet
Dec 12 20:48:09 master kubelet[69]: E1212 20:48:09.561429 69 kubelet.go:1302] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
Dec 12 20:48:09 master kubelet[69]: I1212 20:48:09.562173 69 server.go:143] Starting to listen on 0.0.0.0:10250
Dec 12 20:48:09 master kubelet[69]: F1212 20:48:09.562349 69 kubelet.go:1413] failed to start OOM watcher open /dev/kmsg: no such file or directory
Dec 12 20:48:09 master systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
Dec 12 20:48:09 master systemd[1]: kubelet.service: Failed with result 'exit-code'.



Unless I'm on the wrong track; if a Kubernetes pro wants to help me get up to speed, :D

Am I on the right track?



Thank you in advance.

Sincerely.

Julien B.
 

Blais
All right,

I searched and I found. This is a subject that has been bothering me for at least a month, hence this message in a bottle on the forum.

I'm on the right track.

In fact, there is an option not present in Proxmox but present in LXC: lxc.kmsg = 1. However, we'll have to do without it.

If you look carefully, there is a workaround: make a link from /dev/console to /dev/kmsg.
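For reference, the workaround is just this one line inside the privileged container (a minimal sketch; note that it does not survive a reboot):

```shell
# Inside the container: give kubelet's OOM watcher something to open
# at /dev/kmsg by pointing it at the container's console device.
ln -s /dev/console /dev/kmsg
```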


I had to switch the container to privileged mode.


Yep, Kubernetes is running in an LXC container.

Sincerely.

Julien B.
 

ChrisWorks

New Member
Jul 25, 2019
4
0
1
Hi Julien,
I am still struggling to get Kubernetes working inside LXC containers. Would you mind sharing the complete details of how you achieved a working Kubernetes installation inside LXC?
Sincerely
Chris
 

Blais
Hi Chris,

To say that I got Kubernetes running in an LXC container would be pretentious!

The subject of this discussion is "sharing of /dev/kmsg with the container".

For this problem, I found a solution. Unfortunately, I did not succeed in running Kubernetes in an LXC container with the current Proxmox kernel.

Julien B.
 

Blais
Hi Chris,

I have good news: I was able to get a master node running in Debian.

I was able to find good information on the following link:

https://medium.com/@kvaps/run-kubernetes-in-lxc-container-e04aa94b6c9c

There are a lot of little things to do, and some of this configuration is probably unnecessary.
The privileged container must have the following in its conf file:

lxc.apparmor.profile: unconfined
lxc.cap.drop:
lxc.cgroup.devices.allow: a
lxc.mount.auto: proc:rw sys:rw
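On the Proxmox host, these lines go into the container's config file under /etc/pve/lxc/ (a sketch; `101` is a hypothetical container ID, substitute your own):

```shell
# On the PVE host: append the required options to the container's config.
cat >> /etc/pve/lxc/101.conf <<'EOF'
lxc.apparmor.profile: unconfined
lxc.cap.drop:
lxc.cgroup.devices.allow: a
lxc.mount.auto: proc:rw sys:rw
EOF
```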


Do not forget to load the following modules into PVE :

overlay
br_netfilter
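To load them on the host right away and keep them across reboots (a sketch, run as root on the PVE host):

```shell
# Load the modules immediately
modprobe overlay
modprobe br_netfilter

# Make them load at every boot
printf 'overlay\nbr_netfilter\n' >> /etc/modules
```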


Into the container, copy the following file, yes, the one from your favorite hypervisor:

/boot/config-5.3.13-2-pve
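One way to do that from the host is `pct push` (a sketch; `101` is a hypothetical container ID, and the kernel version is the one from my host):

```shell
# On the PVE host: copy the host kernel's config into container 101,
# where tools such as kubeadm's preflight checks look for it.
pct push 101 /boot/config-5.3.13-2-pve /boot/config-5.3.13-2-pve
```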

Then run the attached script. The prerequisite for this script: you need a non-root user! So please fill that user in in the script, and make sure they have sudo rights.

Finally, a little reboot.

Let's say you want Flannel as the CNI; as your user, run the following commands:

export POD_CIDR="10.244.0.0/16"
sudo kubeadm init --pod-network-cidr=$POD_CIDR

sudo chown $(id -u):$(id -g) $HOME/
sudo rm $HOME/.kube -Rf
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

To clean up an old Kubernetes installation, use the following commands:

kubeadm reset -f
systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -t nat --flush

rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
rm -rf $HOME/.kube
systemctl start kubelet
systemctl start docker
reboot


Good config!

Yours sincerely,

Julien B.
 

Attachments

  • k8s-lxc-install.txt
    2.3 KB · Views: 48

ChrisWorks
Hi Julien,
thanks for your input. I will try to get it working for myself within the next few days. I've stumbled across that article on Medium before, but maybe the combination with your proposed changes leads to a working solution. If so, I will likely build an Ansible script for deployment. (Just much easier when you have to reinstall the whole container from scratch on another host.)
Sincerely
Chris
 
Hi,

I'm having issues trying to run Kubernetes in an LXC container, since it doesn't load the br_netfilter module. Can you tell me how you got the module to load? I can't manage it.
I'm using kernel 5.3.18-3-pve with the latest stable Proxmox install.
 

francoisd

Member
Sep 10, 2009
19
0
21
Hi,

You must load all required modules on the host (the Proxmox server); all the loaded modules will then be available in the containers.

my /etc/modules :
Code:
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
overlay
aufs
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
br_netfilter
rbd
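Once the host has loaded them, you can check from inside the container that the module is visible (a sketch):

```shell
# Inside the container: modules loaded on the host show up here too
lsmod | grep br_netfilter
```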
 
Hi,
I could install Kubernetes using the kubeadm tool, but I have a little problem: since we can't access /dev/kmsg, I made a symlink to /dev/console as you suggested. But as soon as I reboot the machine, the cluster doesn't start, because the link is removed and I have to create the symlink again.
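One thing worth trying to make the link survive a reboot is a systemd-tmpfiles entry inside the container (a sketch, untested; assumes the container runs systemd):

```shell
# Inside the container: recreate the /dev/kmsg symlink at every boot
echo 'L /dev/kmsg - - - - /dev/console' > /etc/tmpfiles.d/kmsg.conf
```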

Another thing: I created the cluster with a single master, and when I list the nodes they are all NotReady, and the kube-proxy pods are still stuck in ContainerCreating.

I remember that when I built the cluster with VMs it became ready in about 2 minutes, but now I've been waiting almost 60 minutes and the nodes still aren't ready.
 
