OVH Failover: Can't Access Containers

lps90
May 21, 2020
Hi guys,

I'm new to Proxmox and I need some information.
I installed two Debian 10 x64 LXC containers and managed to configure part of the network.

Some info:
Proxmox 6.2-4
2 IPv4 addresses
1 IPv6 address

Host IPv4 ping: working
Host IPv6 ping: working
LXC IPv4 ping: working
LXC IPv6 ping: not working
Connection to containers (SSH): not working

So, I have 2 problems:
- I cannot access my LXC containers externally using SSH (SSH itself is properly configured...)
- The LXC containers' IPv6 is not pinging

I'll give more details...

HOST ( /etc/network/interfaces )
Code:
# loopback
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

# Interface 1 (Main IPv4)
auto eno1
iface eno1 inet static
    address 37.xxx.90.84/24
    gateway 37.xxx.90.254

# Interface 1 (IPv6)
iface eno1 inet6 static
    address 2001:xxxx:a:3d54::/64
    gateway 2001:xxxx:a:3dff:ff:ff:ff:ff
    post-up /sbin/ip -f inet6 route add 2001:xxxx:a:3dff:ff:ff:ff:ff dev eno1
    post-up /sbin/ip -f inet6 route add default via 2001:xxxx:a:3dff:ff:ff:ff:ff
    pre-down /sbin/ip -f inet6 route del 2001:xxxx:a:3dff:ff:ff:ff:ff dev eno1
    pre-down /sbin/ip -f inet6 route del default via 2001:xxxx:a:3dff:ff:ff:ff:ff

# IPv4 Failover
auto eno1:0
iface eno1:0 inet static
    address 87.xxx.82.123/24

# IPv4 Bridge 1
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0

# IPv6 Bridge 1
iface vmbr0 inet6 static
    address 2001:xxxx:a:3d54::2
    netmask 64
    post-up /sbin/ip -f inet6 route add 2001:xxxx:a:3d54::/64 dev vmbr0
    pre-down /sbin/ip -f inet6 route del 2001:xxxx:a:3d54::/64 dev vmbr0

# IPv4 Bridge 2
auto vmbr1
iface vmbr1 inet static
    address 192.168.2.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0

# IPv6 Bridge 2
iface vmbr1 inet6 static
    address 2001:xxxx:a:3d54::3
    netmask 64
    post-up /sbin/ip -f inet6 route add 2001:xxxx:a:3d54::/64 dev vmbr1
    pre-down /sbin/ip -f inet6 route del 2001:xxxx:a:3d54::/64 dev vmbr1

    post-up sysctl -p
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eno1 -j SNAT --to-source 37.xxx.90.84
    post-up iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o eno1 -j SNAT --to-source 87.xxx.82.123
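
Side note: instead of the post-up echo above, IPv4 forwarding can also be enabled persistently. A minimal sketch, assuming the usual sysctl file locations:
Code:
# /etc/sysctl.conf (or a drop-in under /etc/sysctl.d/), then run `sysctl -p` once
net.ipv4.ip_forward = 1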


LXC CONTAINER 1 ( /etc/network/interfaces )
Code:
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet static
        address 192.168.1.2
        netmask 255.255.255.0
        gateway 192.168.1.1


LXC CONTAINER 2 ( /etc/network/interfaces )
Code:
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet static
        address 192.168.2.2
        netmask 255.255.255.0
        gateway 192.168.2.1


Can anybody explain to me where I am not configuring things correctly? :rolleyes:
IPv6 is not important; the most important thing is WHY I CAN'T ACCESS my containers from outside.
I can't find a solution...

Thanks
 
Yes.
I have added the NAT SSH rule to the host firewall.

Where can I find the "pct config"?
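(For reference: it can be printed on the host with pct config <vmid>, or read straight from /etc/pve/lxc/<vmid>.conf. A quick sketch, assuming container ID 101:)
Code:
# show the configuration of CT 101
pct config 101
# the same content lives in the cluster filesystem
cat /etc/pve/lxc/101.conf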
 
HOST IPTABLES RULES:
Code:
-A PREROUTING -i vmbr0 -p tcp -m tcp --dport 69 -j DNAT --to-destination 192.168.1.2:69     (SSH)
-A PREROUTING -i vmbr1 -p tcp -m tcp --dport 70 -j DNAT --to-destination 192.168.2.2:70     (SSH)
-A POSTROUTING -s 192.168.1.0/24 -o eno1 -j SNAT --to-source 37.xxx.90.84
-A POSTROUTING -s 192.168.2.0/24 -o eno1 -j SNAT --to-source 87.xxx.82.123
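
One thing worth noting about these rules: the PREROUTING entries match on -i vmbr0 / -i vmbr1, but traffic from the internet arrives on eno1, so inbound SSH may never hit them. A sketch of the same forwards matching on the uplink instead (assuming the containers' sshd really listens on ports 69/70 as implied above):
Code:
# match inbound traffic on the public interface / destination IPs instead of the internal bridges
iptables -t nat -A PREROUTING -i eno1 -d 37.xxx.90.84  -p tcp --dport 69 -j DNAT --to-destination 192.168.1.2:69
iptables -t nat -A PREROUTING -i eno1 -d 87.xxx.82.123 -p tcp --dport 70 -j DNAT --to-destination 192.168.2.2:70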


LXC CONTAINER 1:
Code:
arch: amd64
cores: 4
hostname: lxccontainer1
memory: 8192
net0: name=eno1,bridge=vmbr0,firewall=1,gw=192.168.1.1,hwaddr=1A:53:35:49:9A:EE,ip=192.168.1.2/24,type=veth
onboot: 1
ostype: debian
rootfs: local:101/vm-101-disk-0.raw,size=100G
swap: 1024
unprivileged: 1


LXC CONTAINER 2:
Code:
arch: amd64
cores: 4
hostname: lxccontainer2
memory: 8192
net0: name=eno1,bridge=vmbr1,firewall=1,gw=192.168.2.1,hwaddr=3A:FF:C6:1A:B2:6D,ip=192.168.2.2/24,type=veth
onboot: 1
ostype: debian
rootfs: local:110/vm-110-disk-0.raw,size=100G
swap: 1024
unprivileged: 1
 
Try to edit IPv6 in the GUI: for the LXC, go to Node -> LXC -> Network -> Edit.
The real problem is: I cannot access my containers from outside.
I cannot connect using SSH.
(Forget IPv6 for now, that's really not the main problem.)
 
UPDATE:
I solved 50% of the problem by adding "bridge-ports eno1" to vmbr0.
Now the container on the main IP can be reached via SSH.

How can I do the same for vmbr1?
Because if I add "bridge-ports eno1" to vmbr1 as well, the dedicated server's network will not come up on boot xD

Any way to do it?
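
(eno1 can only be enslaved to one bridge, so a second "bridge-ports eno1" won't work. One common alternative is OVH's bridged failover-IP pattern; a sketch, assuming a virtual MAC has been generated for 87.xxx.82.123 in the OVH manager and set as the container's hwaddr, with container 2 attached to vmbr0 as well:)
Code:
# inside container 2, /etc/network/interfaces (sketch; the vMAC must come from the OVH panel)
auto eno1
iface eno1 inet static
    address 87.xxx.82.123
    netmask 255.255.255.255
    # the host's gateway is outside the /32, so add it on-link first
    post-up ip route add 37.xxx.90.254 dev eno1
    post-up ip route add default via 37.xxx.90.254 dev eno1
    pre-down ip route del default via 37.xxx.90.254 dev eno1
    pre-down ip route del 37.xxx.90.254 dev eno1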
 
Guys?
It is the only thing I need to know to have the failover IP configured in LXC container 2 with outside access...
 
Which IP address will you use to access your LXC container?
You're asking which external IP I will use to access the container?
Because I already said what the IP is in the information I posted earlier in this topic.

HOST IPTABLES RULES:
Code:
-A PREROUTING -i vmbr0 -p tcp -m tcp --dport 69 -j DNAT --to-destination 192.168.1.2:69     (SSH)
-A PREROUTING -i vmbr1 -p tcp -m tcp --dport 70 -j DNAT --to-destination 192.168.2.2:70     (SSH)
-A POSTROUTING -s 192.168.1.0/24 -o eno1 -j SNAT --to-source 37.xxx.90.84
-A POSTROUTING -s 192.168.2.0/24 -o eno1 -j SNAT --to-source 87.xxx.82.123

I'll use the 87.xxx.82.123 IP to access the container.
 
You have two different networks (IP ranges).
You need a router to manage the IP ranges!
If your PC and the container are in the same IP range, do you get access?
Is it possible to ping the destination IP address, or the destination IP gateway?
 
You have two different networks (IP ranges).
You need a router to manage the IP ranges!
If your PC and the container are in the same IP range, do you get access?
Is it possible to ping the destination IP address, or the destination IP gateway?
Did you read the whole topic with attention? :rolleyes:
All the info you're asking about is already described.

I can ping everything; I just cannot access LXC container 2 using SSH...
 
I could be wrong, but the default SSH port is 22; does your SSH server actually know to listen on port 69 or 70?
The SSH port can be edited; I edited all my SSH ports (host and containers).
The SSH configuration is not the problem.

The problem could be the iptables rules or the network configuration...
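
One way to narrow it down (a sketch, assuming port 70 is the forwarded SSH port as above): watch on the host whether the packets arrive on eno1 at all, and whether they actually leave towards the container after DNAT:
Code:
# does the SSH attempt reach the public interface?
tcpdump -ni eno1 'tcp port 70'
# and is it forwarded onto the internal bridge after DNAT?
tcpdump -ni vmbr1 'tcp and host 192.168.2.2'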
 
I am having a similar problem.

I have narrowed it down to being an issue with the subnet mask. It seems to be forced to a /32 regardless of what IP I put in the LXC container config.

Example:

Container config is:
IP 172.16.1.16/24
GW 172.16.1.1

Container ifconfig shows a subnet mask of 255.255.255.255.

The network config is in /etc/systemd/network/eth0.network and specifies that the configuration is managed by PVE. It shows the proper address of 172.16.1.16/24. If you create a proper eth0.network, it is overwritten by PVE on reboot.

The only workaround at the moment is to create a separate Netplan config and run netplan apply after the container boots.
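
(Roughly what that workaround looks like; a sketch, with the file name and values assumed:)
Code:
# /etc/netplan/50-eth0.yaml -- applied manually with `netplan apply` after boot
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: no
      addresses: [172.16.1.16/24]
      gateway4: 172.16.1.1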

Does anyone know of a better way to do this?

In my case, I'm already using a vmbr0 bridged interface. It's also been a while since I built the container. It's entirely possible that I originally deployed it as 172.16.1.16/32. Is there somewhere I can change the default subnet mask for the network range?
 
root@pve:/etc/pve/nodes/pve/lxc# cat 101.conf
arch: amd64
cores: 1
hostname: lxc2
memory: 2048
nameserver: 1.1.1.1 1.0.0.1
net0: name=eth0,bridge=vmbr0,firewall=1,gw=172.16.1.1,hwaddr=72:60:86:C9:1F:EB,ip=172.16.2.16/24,type=veth
onboot: 1
ostype: ubuntu
rootfs: local-zfs1:subvol-101-disk-0,size=8G
searchdomain: domain.local
swap: 2048
unprivileged: 1


root@lxc2:/etc/systemd/network# cat eth0.network
[Match]
Name = eth0

[Network]
Description = Interface eth0 autoconfigured by PVE
Address = 172.16.1.16/24
Gateway = 172.16.1.1
DHCP = no
IPv6AcceptRA = false


root@lxc2:/etc/systemd/network# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.16.1.16 netmask 255.255.255.255 broadcast 172.16.1.16
inet6 fe80::7060:76ef:fdc8:eca prefixlen 64 scopeid 0x20<link>
ether 72:60:86:c9:1f:eb txqueuelen 1000 (Ethernet)
RX packets 65144 bytes 4890625 (4.8 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 5176 bytes 391263 (391.2 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0


It looks like it thinks the address is 172.16.1.16/32, not /24.
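
(The prefix length can also be read directly with iproute2 instead of ifconfig; a quick check:)
Code:
# brief output shows the CIDR prefix, e.g. "eth0 UP 172.16.1.16/32 fe80::.../64"
ip -br addr show dev eth0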
 
It seems the answer is: if you created the LXC NIC with a /32, it will always fall back to that, no matter what you do.

You need to remove the NIC, create a new one, and set the proper subnet mask at creation time.

In this case, the original NIC was created with a /32, which was wrong.
Delete and re-create with /24 and all is good.
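
(From the host CLI that would look roughly like this; a sketch assuming CT 101 and the values from the config above:)
Code:
# drop the old NIC and re-create it with the correct prefix length
pct set 101 -delete net0
pct set 101 -net0 name=eth0,bridge=vmbr0,firewall=1,gw=172.16.1.1,ip=172.16.1.16/24,type=veth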
 
