[SOLVED] Can't connect to duplicated LXC container

Mario Staykov

New Member
Oct 9, 2017
Hi. I hope someone here can spot something that is probably obvious to them but that I've been missing for days; it's keeping me from even getting to the actual task at hand.

We use LXC containers for our Jira system. In order to test updates to it, I set out to duplicate the container and create a testing environment. I used the Backup + Restore features of Proxmox to restore a copy of the original Jira container into a new one (with an ID one higher than the current highest). Before booting the container, I changed only two things:
  1. Modified the MAC address to end in different (arbitrarily chosen) characters, such as xx:xx:xx:b0:f3:49
  2. Incremented the IP address by one, resulting in an IP of 5.196.200.97/32
I kept the gateway and the bridge unchanged, reasoning that all other containers use the same setup successfully. The container firewall is disabled to begin with.
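For reference, the duplication itself was done roughly like this (the container IDs, dump directory, and archive name here are illustrative rather than exact):
Code:
# back up the original Jira container without stopping it
vzdump 558 --mode snapshot --dumpdir /var/lib/vz/dump
# restore the archive into a new container ID
pct restore 559 /var/lib/vz/dump/vzdump-lxc-558-<timestamp>.tar.gz
# give the new container its own MAC and IP, keeping bridge and gateway
pct set 559 -net0 name=eth0,bridge=vmbr0,hwaddr=02:00:00:b0:f3:49,ip=5.196.200.97/32,gw=37.187.173.254,type=veth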

The original Jira instance responds to SSH and HTTP connections, but I can't seem to reach this one with anything. Eventually I figured out that I can log in to it with "/usr/sbin/pct enter <containerID>", and everything there looks in order. Here's the output of a few commands run on the newly spun-up container:

ifconfig
Code:
eth0      Link encap:Ethernet  HWaddr 02:00:00:b0:f3:49
          inet addr:5.196.200.97  Bcast:5.196.200.97  Mask:255.255.255.255
          inet6 addr: fe80::ff:feb0:f349/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5445792 errors:0 dropped:0 overruns:0 frame:0
          TX packets:158765 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:344426991 (344.4 MB)  TX bytes:6668442 (6.6 MB)

netstat -atpn
Code:
tcp        0      0 127.0.0.1:17123         0.0.0.0:*               LISTEN      528/python
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      89/rpcbind
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      263/nginx -g daemon
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      11071/sshd
tcp        0      0 0.0.0.0:25              0.0.0.0:*               LISTEN      507/master
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      263/nginx -g daemon
tcp        0      0 127.0.0.1:8126          0.0.0.0:*               LISTEN      527/trace-agent
tcp        0      0 127.0.0.1:55942         127.0.0.1:17123         TIME_WAIT   -
tcp        0      0 127.0.0.1:17123         127.0.0.1:56044         ESTABLISHED 528/python
tcp        0      0 127.0.0.1:54992         127.0.0.1:17123         TIME_WAIT   -
tcp        0      0 127.0.0.1:55558         127.0.0.1:17123         TIME_WAIT   -
tcp        0      0 127.0.0.1:55600         127.0.0.1:17123         TIME_WAIT   -
tcp        0      0 127.0.0.1:55602         127.0.0.1:17123         TIME_WAIT   -
tcp        0      0 127.0.0.1:17123         127.0.0.1:56078         ESTABLISHED 528/python
tcp        0      0 127.0.0.1:55382         127.0.0.1:17123         TIME_WAIT   -
tcp        0      0 127.0.0.1:55842         127.0.0.1:17123         TIME_WAIT   -
tcp        0      0 127.0.0.1:54792         127.0.0.1:17123         TIME_WAIT   -
tcp        0      0 127.0.0.1:56078         127.0.0.1:17123         ESTABLISHED 529/python
tcp        0      0 127.0.0.1:56044         127.0.0.1:17123         ESTABLISHED 532/python
tcp        0      0 127.0.0.1:54970         127.0.0.1:17123         TIME_WAIT   -
tcp        0      0 127.0.0.1:54910         127.0.0.1:17123         TIME_WAIT   -
tcp6       0      0 ::1:17123               :::*                    LISTEN      528/python
tcp6       0      0 127.0.0.1:8005          :::*                    LISTEN      380/java
tcp6       0      0 :::111                  :::*                    LISTEN      89/rpcbind
tcp6       0      0 :::8080                 :::*                    LISTEN      380/java
tcp6       0      0 :::22                   :::*                    LISTEN      11071/sshd
tcp6       0      0 :::25                   :::*                    LISTEN      507/master

ip route
Code:
default via 37.187.173.254 dev eth0
37.187.173.254 dev eth0  scope link

SSH works just fine locally, but the container can't be reached externally, even from the server hosting Proxmox itself:
nc -nv 5.196.200.97 22
Code:
nc: timeout while connecting to 5.196.200.97 22
nc: unable to connect to address 5.196.200.97, service 22

ping 5.196.200.97 -c 5 -w 30
Code:
PING 5.196.200.97 (5.196.200.97) 56(84) bytes of data.

--- 5.196.200.97 ping statistics ---
30 packets transmitted, 0 received, 100% packet loss, time 29231ms
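For what it's worth, ARP resolution can also be probed directly from the Proxmox host (vmbr0 is the bridge the container hangs off; arping comes from the iputils package):
Code:
# does the host have a MAC entry for the container's IP?
ip neigh show 5.196.200.97
# actively ask for the container's MAC on the bridge
arping -I vmbr0 -c 3 5.196.200.97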

So with everything seemingly in order, why can I not connect to the container at all? It behaves as if it isn't even there, or is fully filtered.
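Regarding the "fully filtered" impression: the firewall side can be double-checked on the host like this (paths assume a default Proxmox layout; the .fw file may simply not exist if no rules were ever defined):
Code:
# overall firewall status on the host
pve-firewall status
# per-container firewall rules for the new container
cat /etc/pve/firewall/559.fw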

Hope the information I provided was enough and thanks in advance :)
 
Could it be that the container's virtual NIC is attached to the wrong bridge?
Please check with

pct config CTID | grep net0

that the virtual NIC is connected to a bridge that reaches the outside world.

Example from my system:

Container config:
pct config 103 | grep net0
net0: name=eth0,bridge=vmbr0,hwaddr=2A:2B:6D:19:D6:2D,ip=dhcp,type=veth

List of devices connected to my bridge (note veth103i0 matching the config entry above):
brctl show vmbr0
bridge name     bridge id           STP enabled     interfaces
vmbr0           8000.5a99b245825a   no              ens18
                                                    tap403i0
                                                    tap700i0
                                                    veth100i0
                                                    veth101i0
                                                    veth103i0

If this is all fine, then you should check whether ARP requests coming from the outside are reaching the veth device:

tcpdump -i veth103i0 arp

This should show you all the ARP requests and replies from the devices connected to this bridge, like:
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on veth103i0, link-type EN10MB (Ethernet), capture size 262144 bytes
10:18:04.338756 ARP, Request who-has 192.168.18.50 tell 192.168.16.1, length 46
10:20:07.199664 ARP, Request who-has 192.168.30.57 tell 192.168.16.1, length 46
10:21:24.372303 ARP, Request who-has 192.168.16.43 tell 192.168.16.1, length 46
10:22:25.971742 ARP, Request who-has 192.168.16.1 tell 192.168.31.178, length 2
 
Hi Manu,

Thanks for the assistance. Here's the output; I can't see anything wrong in it:
Code:
deploy@katrine ~ % sudo pct config 559 | grep net0
net0: name=eth0,bridge=vmbr0,gw=37.187.173.254,hwaddr=02:00:00:b0:f3:49,ip=5.196.200.97/32,type=veth
deploy@katrine ~ % brctl show vmbr0
bridge name     bridge id           STP enabled     interfaces
vmbr0           8000.0cc47a454f9a   no              eth0
                                                    veth501i0
                                                    veth502i0
                                                    veth550i0
                                                    veth551i0
                                                    veth552i0
                                                    veth553i0
                                                    veth554i0
                                                    veth555i0
                                                    veth556i0
                                                    veth558i0
                                                    veth559i0
deploy@katrine ~ % sudo tcpdump -i veth559i0 arp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on veth559i0, link-type EN10MB (Ethernet), capture size 262144 bytes
11:36:58.911488 ARP, Request who-has 37.187.173.254 tell 5.196.200.97, length 28
11:37:00.020381 ARP, Request who-has 10.253.67.70 tell 10.253.67.252, length 46
11:37:00.037921 ARP, Request who-has 10.253.66.169 tell 10.253.67.252, length 46
11:37:00.249063 ARP, Request who-has 10.253.66.55 tell 10.253.67.252, length 46
11:37:00.258801 ARP, Request who-has 10.253.66.169 tell 10.253.67.252, length 46
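Notably, the container's ARP request for the gateway (first line above) never seems to get a reply. A capture on the host's physical uplink should show whether it even leaves the machine (eth0 here is the host NIC attached to vmbr0, per the brctl output above):
Code:
# run on the Proxmox host: watch the uplink for ARP traffic involving the container's IP
tcpdump -ni eth0 arp and host 5.196.200.97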

Best regards,
Mario
 
By the way, I see that your default gateway is outside your subnet.
PVE is able to auto-adjust for this by adding a route to the gateway via the configured device.

What does the routing table look like on the container?
You should have something like:

default via 37.187.173.254 dev eth0 onlink
37.187.173.254 dev eth0 scope link
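For reference, that route pair is essentially what PVE configures for you; created by hand with iproute2 it would look like this:

# host route to the gateway itself
ip route add 37.187.173.254 dev eth0 scope link
# default route via a gateway that is outside the local subnet
ip route add default via 37.187.173.254 dev eth0 onlink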

Is this server hosted? Remember, with some providers (e.g. OVH) you need to register your new MAC address in their admin interface.
 
Hi Manu,

The routing table is listed as part of my initial post and looks like what you described.
But your other question was the golden key! You guessed right: we are using OVH, and I hadn't realized we need to purchase the IPs and register them with a MAC address. Now it makes sense why traffic doesn't get routed. I will now look into either purchasing the extra IP or migrating the container to a different server, where we have a similar-purpose container I can overwrite.
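For anyone else hitting the same wall: the OVH flow, as I understand it, is to order the failover IP, generate a virtual MAC for it in the OVH control panel, and then give the container exactly that MAC (the hwaddr below is a placeholder for the OVH-generated value):
Code:
pct set 559 -net0 name=eth0,bridge=vmbr0,hwaddr=<virtual-MAC-from-OVH>,ip=5.196.200.97/32,gw=37.187.173.254,type=veth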

I appreciate your assistance, thank you and well spotted :)

Best regards,
Mario
 
