TUN TAP limitation, same port, different container, same host?

iMx

Member
Feb 11, 2019
Hi there,

I have a 2-node cluster with 2 LXC containers, each set up and running an OpenVPN server on TCP 443.

Each container works fine individually (stop 1, start the other, etc.) and when the containers are on separate nodes of the cluster. Each container has its own unique IP address. However, if both containers are running on the same node, OpenVPN will not connect on either container.

If I change the OpenVPN server port on 1 container to (for example) 993, both containers can then co-exist on the same node (even when using the same specified tunXXX interface).
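To illustrate what I'd expect: two TCP listeners on the same port but different local IPs normally coexist on one host. A rough Python sketch (the loopback addresses and ephemeral port are placeholders, not my actual setup; assumes Linux, where the whole 127.0.0.0/8 range is bindable on lo):

```python
import socket

# First listener: let the kernel pick a free port on 127.0.0.1.
a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.bind(("127.0.0.1", 0))
port = a.getsockname()[1]
a.listen(1)

# Second listener: SAME port, DIFFERENT local IP -- no conflict,
# which mirrors two containers each binding their own address.
b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
b.bind(("127.0.0.2", port))
b.listen(1)

print("both listening on port", port)
a.close()
b.close()
```

If the two OpenVPN instances really were colliding on the socket, you'd expect an "Address already in use" error at startup rather than a silent failure to connect.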

Firewall rules have been removed/stripped back. I've even tried specifying different tunXXX interfaces per container; this made no difference. Each OpenVPN server has a 'local' line set, to bind only to that container's own IP address.
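For reference, the relevant part of each container's server config looks roughly like this (a sketch; the 10.0.3.x address is a placeholder for the container's own IP, not my real addressing):

```
# server.conf (sketch; address is a placeholder)
local 10.0.3.11      # bind only to this container's IP
port 443
proto tcp
dev tun0
```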

Am I missing something? Or is this a known limitation of containers with tun/tap sharing ports? Stumped.

Cheers,
 
Or is this a known limitation of containers with tun/tap sharing ports?

In general, I'd say no. I'm running multiple OpenVPN endpoints on a single-node PVE host without any problem, but I can't guarantee that they're running on the same port; I'll have to check when I have access to them at home.

However, if both containers are running on the same node OpenVPN will not connect on either container.

Are there problems logged? Have you analysed with tcpdump/wireshark?
 
Are there problems logged? Have you analysed with tcpdump/wireshark?

Other than the generic OpenVPN AUTH_FAILED rejection, no. Yes, packets end up at the correct container.

I suspect this is going to be something to do with the RADIUS plugin and/or the FreeRADIUS server. I'll need to run some tests with regular certs etc. next, to remove that from the equation. To get things up and running for now, though, I think I'm going to move to a VM; I'm running behind on a project.

No rejection is seen on the RADIUS server itself, running in the same container as well, so I'm inclined to believe it's a problem with the OpenVPN RADIUS plugin.
 
Code:
16:51:37.811153 IP radius.35619 > radius.radius: RADIUS, Access-Request (1), id: 0x07 length: 134
16:51:37.816497 IP radius.radius > radius.35619: RADIUS, Access-Accept (2), id: 0x07 length: 20
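(That capture came from something like the command below, run inside the container; 1812 is the standard RADIUS authentication port, so adjust the filter if your setup differs.)

```
# watch for RADIUS auth traffic on the loopback inside the container
tcpdump -ni lo udp port 1812
```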

When running just 1 container on the host, the RADIUS auth requests are seen on the loopback. When running 2 at the same time... nada (on either container).

If I then shut down both containers and start just 1, I see the RADIUS auth requests again. Running the containers on separate nodes, all is fine.

Looks to be an OpenVPN RADIUS plugin issue to me.
 
As a last-ditch attempt I converted to an unprivileged container... and it works as expected. I'd figured privileged would be easier to start with... seems not!
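For anyone landing here later: to get /dev/net/tun available inside an LXC container at all, the container config typically needs something like the lines below (a sketch; exact keys vary by PVE/LXC version, and older cgroup v1 setups use lxc.cgroup.devices.allow instead):

```
# /etc/pve/lxc/<CTID>.conf (sketch; 10:200 is the tun char device)
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file
```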
 
