[SOLVED] VMs and LXC containers not connecting to network after initializing docker swarm

mtdev

New Member
Nov 11, 2020
Hi there,

I have been running docker on my proxmox server for about a month now without any issues at all. I wanted to bundle my servers together and, without much experience with docker swarm, I stupidly tried initializing a docker swarm manager host on my proxmox server. Shortly after, I discovered that my VMs and LXC containers were unreachable. This does not apply to the docker containers; they work perfectly fine.

I have already deleted the docker swarm, deleted all the related docker containers, and checked the basic network configuration, although I don't know much about network configuration on Linux servers; I can only manage it at a basic level.

I'm not sure which logs or files I can post here to provide you with the required information, so I will leave that to the contributors of this thread. One thing that might interest you is the ifconfig output of an Ubuntu Server 19.04 LXC container, so here it is. The IP address should be 192.168.1.21, obtained from DHCP. Configuring a static IP for the container doesn't seem to change the situation: an IP address shows up in the ifconfig output, but the container is still unreachable.

[screenshot: ifconfig output of the LXC container]

Kind regards,
Micha de Vries.
 
hi,

can you post the outputs of the following commands:
Code:
cat /etc/network/interfaces
cat /etc/hosts
ip a
ip r
(anonymize where necessary)
 
Thanks for the reply! Here are the outputs:

Code:
root@pve1:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0

root@pve1:~# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.1.10 pve1.domain.com pve1

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

root@pve1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether 00:23:24:67:a1:17 brd ff:ff:ff:ff:ff:ff
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:d2:fe:f1:44 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:d2ff:fefe:f144/64 scope link
       valid_lft forever preferred_lft forever
5: docker_gwbridge: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:31:b3:ba:5c brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global docker_gwbridge
       valid_lft forever preferred_lft forever
7: veth14169c9@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 6e:4f:1f:86:ed:12 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::6c4f:1fff:fe86:ed12/64 scope link
       valid_lft forever preferred_lft forever
9: veth13d810b@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 1e:e5:e6:63:48:6b brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::1ce5:e6ff:fe63:486b/64 scope link
       valid_lft forever preferred_lft forever
11: veth213d7e8@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 16:eb:7d:46:da:2a brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::14eb:7dff:fe46:da2a/64 scope link
       valid_lft forever preferred_lft forever
13: vethb2dc706@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether b2:51:b9:c4:06:c3 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::b051:b9ff:fec4:6c3/64 scope link
       valid_lft forever preferred_lft forever
15: veth27a0f4b@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 1a:81:36:60:cf:58 brd ff:ff:ff:ff:ff:ff link-netnsid 4
    inet6 fe80::1881:36ff:fe60:cf58/64 scope link
       valid_lft forever preferred_lft forever
21: tap102i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr102i0 state UNKNOWN group default qlen 1000
    link/ether 3e:22:a4:ea:fa:ca brd ff:ff:ff:ff:ff:ff
22: fwbr102i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 82:2c:b9:c4:3a:7d brd ff:ff:ff:ff:ff:ff
23: fwpr102p0@fwln102i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ca:a3:74:ed:5d:63 brd ff:ff:ff:ff:ff:ff
24: fwln102i0@fwpr102p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr102i0 state UP group default qlen 1000
    link/ether 82:2c:b9:c4:3a:7d brd ff:ff:ff:ff:ff:ff
26: veth2f1246b@if25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 62:af:f9:44:39:7b brd ff:ff:ff:ff:ff:ff link-netnsid 6
    inet6 fe80::60af:f9ff:fe44:397b/64 scope link
       valid_lft forever preferred_lft forever
28: veth104i0@if27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr104i0 state UP group default qlen 1000
    link/ether fe:be:6e:92:2b:c3 brd ff:ff:ff:ff:ff:ff link-netnsid 7
29: fwbr104i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d6:f9:fb:c7:9a:94 brd ff:ff:ff:ff:ff:ff
30: fwpr104p0@fwln104i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 7a:79:1a:16:1c:ae brd ff:ff:ff:ff:ff:ff
31: fwln104i0@fwpr104p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr104i0 state UP group default qlen 1000
    link/ether d6:f9:fb:c7:9a:94 brd ff:ff:ff:ff:ff:ff
32: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:23:24:67:a1:17 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.10/24 brd 192.168.1.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::223:24ff:fe67:a117/64 scope link
       valid_lft forever preferred_lft forever
34: veth100i0@if33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr100i0 state UP group default qlen 1000
    link/ether fe:db:c1:d2:0e:03 brd ff:ff:ff:ff:ff:ff link-netnsid 5
35: fwbr100i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether e2:25:4c:9d:98:2a brd ff:ff:ff:ff:ff:ff
36: fwpr100p0@fwln100i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether aa:eb:fd:b4:17:b3 brd ff:ff:ff:ff:ff:ff
37: fwln100i0@fwpr100p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr100i0 state UP group default qlen 1000
    link/ether e2:25:4c:9d:98:2a brd ff:ff:ff:ff:ff:ff

root@pve1:~# ip r
default via 192.168.1.1 dev vmbr0 onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
172.18.0.0/16 dev docker_gwbridge proto kernel scope link src 172.18.0.1 linkdown
192.168.1.0/24 dev vmbr0 proto kernel scope link src 192.168.1.10

Let me know what you can find! Remember, docker by itself worked fine; problems only started to occur after initializing the docker swarm, or at least that's the most notable event I can think of...

Kind regards,
Micha de Vries
 
is docker still installed?
Code:
dpkg -l | grep docker

the simplest way to fix things is probably to remove the bridge interfaces created by docker, and then remove the docker installation.

make sure no docker containers are running while you do this:

Code:
docker network ls # this will show you the docker-configured networks
docker network rm docker_gwbridge # if any other custom networks show up, delete them as well
# note: docker0 belongs to docker's pre-defined "bridge" network, which docker network rm refuses to delete; see below for removing the interface directly

afterwards check ip a && ip r output again to see if any docker-related network interfaces still show up. if yes:
Code:
ip route del 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
ip route del 172.18.0.0/16 dev docker_gwbridge proto kernel scope link src 172.18.0.1
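
if the docker0 and docker_gwbridge bridge interfaces themselves still show up in ip a, they can also be removed directly (a sketch using the interface names from your output above; deleting an interface drops its kernel routes as well):
Code:
ip link set docker0 down
ip link delete docker0
ip link set docker_gwbridge down
ip link delete docker_gwbridge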

after removing all the network interfaces check if your problem is solved (VM/LXC reachable).
if they're not reachable, i'd suggest removing the docker packages that dpkg -l | grep docker listed above (typically docker-ce or docker.io) with apt remove
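
a sketch of a fuller cleanup, assuming the usual docker-ce packaging (substitute whatever package names dpkg -l | grep docker actually showed):
Code:
systemctl disable --now docker   # stop the daemon so it doesn't recreate its bridges on boot
apt purge docker-ce docker-ce-cli containerd.io   # package names are an assumption; adjust to your install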

hope this helps
 
Thanks! I'll try these solutions once I'm home. This won't remove my docker containers, right?
 
Thanks! I'll try these solutions once I'm home. This won't remove my docker containers, right?
/var/lib/docker would still contain your images.

however it's generally not recommended to run docker on the PVE host directly, it's better to use a VM for this purpose.
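
if you want an explicit backup of specific images before any cleanup, docker save can export them to a tarball (the image name below is just a placeholder):
Code:
docker save -o myimage.tar myimage:latest   # hypothetical image name; restore later with docker load -i myimage.tar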
 
Thank you so much! I think the main issue was with the docker_gwbridge network, but to be sure I deleted all of them and everything works as expected again! I will mark the thread as solved.

I know very well that it is not recommended to run docker on the PVE host machine, but I've got my reasons for it. I'm just 16 years old at this point and I run all the servers at home, which means they cannot consume too much power, and besides that I don't have a big budget for my servers. My PVE host runs on a 4th-gen, power-efficient (T) i3 CPU. I have tried running the docker containers inside a VM before, but the speeds were simply unbearable, which is why I now run docker on my main host. TBH docker by itself apparently isn't a problem; just don't start services like docker swarm etc.
 
TBH docker by itself apparently isn't a problem; just don't start services like docker swarm etc.

in essence it can be used, but it's not recommended because of conflicts such as this one :)
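
for future readers: a likely mechanism behind this kind of conflict (assuming default docker behaviour, not something confirmed in this thread) is that the docker daemon switches the iptables FORWARD chain policy to DROP, which also drops bridged VM/CT traffic passing through vmbr0. a quick check:
Code:
iptables -S FORWARD | head -n 1   # "-P FORWARD DROP" means bridged guest traffic is being filtered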

i'm glad your problem is solved!
 
