First, I want to state up front that my setup is very much unsupported, I know that. But I have been playing with getting Proxmox running on the Oracle Cloud free-tier Ampere server, and that works just fine. I want to see if I can get it running reliably on OCI, with an IPsec tunnel to my on-prem cluster.
Now the problem comes with adding containers. The setup does not support running VMs (since it is itself running on a cloud VPS), but LXC containers work just fine. Networking is a challenge, though. I can use MASQUERADE to get internet access, but I want to expose some of my LXCs on the public IP.
OCI blocks traffic from unknown MAC addresses on my VNIC (the primary network interface), so I cannot just create a Linux bridge and attach it to the main network connection: any LXC on such a bridge gets its traffic dropped and never establishes a network connection. The best setup so far is a NAT configuration like this, which works:
Code:
auto enp0s6
iface enp0s6 inet static
address 10.99.0.230/24
gateway 10.99.0.1
auto vmbr0
iface vmbr0 inet static
address 10.91.0.1/24
bridge-ports none
bridge-stp off
bridge-fd 0
post-up echo 1 > /proc/sys/net/ipv4/ip_forward
post-up iptables -t nat -A POSTROUTING -s 10.91.0.0/24 -o enp0s6 -j MASQUERADE
post-down iptables -t nat -D POSTROUTING -s 10.91.0.0/24 -o enp0s6 -j MASQUERADE
post-up iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
Not sure I need the last two lines, but at least it works. The problem is that to expose the public ports of an LXC container I would need to port-forward from 10.99.0.230 to the container, and that gets messy quickly. I guess this has to be plan B, since it does at least work.
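For the record, the kind of port-forwarding I mean would be extra post-up rules in the vmbr0 stanza, something like the below (the container address 10.91.0.100 and port 443 are just example values, not from my actual setup):
Code:
post-up iptables -t nat -A PREROUTING -i enp0s6 -p tcp --dport 443 -j DNAT --to-destination 10.91.0.100:443
post-down iptables -t nat -D PREROUTING -i enp0s6 -p tcp --dport 443 -j DNAT --to-destination 10.91.0.100:443
One pair of rules per exposed port per container, which is why this gets unwieldy fast.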
OCI allows me to add additional private IPs to the VNIC, so I set up a couple of new ones (.231 and .232). When I configure one of them like this, it works:
Code:
auto enp0s6
iface enp0s6 inet static
address 10.99.0.230/24
gateway 10.99.0.1
iface enp0s6:0 inet static
address 10.99.0.231/24
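For anyone trying to reproduce this: the secondary private IPs have to be assigned on the OCI side first (Compute instance -> attached VNIC -> IPv4 addresses), which if I remember right can also be done with the OCI CLI, roughly like this (the VNIC OCID is a placeholder):
Code:
oci network vnic assign-private-ip --vnic-id <vnic-ocid> --ip-address 10.99.0.231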
This 231 IP can be reached from outside, but I am not able to add it to the LXC container. I have tried the same with a more "normal" config like the one below, and that also works:
Code:
auto enp0s6
iface enp0s6 inet manual
auto vmbr0
iface vmbr0 inet static
address 10.99.0.230/24
gateway 10.99.0.1
bridge-ports enp0s6
bridge-stp off
bridge-fd 0
iface vmbr0:0 inet static
address 10.99.0.231/24
So I can get IPs attached to the main VNIC as long as I add them as a subinterface, but how can I get one of them assigned to an LXC? Is that possible in some simple way? If I could get that working, I could expose services much more easily. Any tips would be invaluable!