Hi,
I'm currently setting up Proxmox on a VPS, with the idea to just use it to run LXCs. I only have one public IPv4 address.
The basic install went fine: I installed WireGuard, firewalled off the hypervisor, the works. I can connect to it over my VPN tunnel, download the templates, etc.
I wanted to start using some of the SDN functionality, so I followed the tutorial on the wiki: https://pve.proxmox.com/wiki/Setup_Simple_Zone_With_SNAT_and_DHCP
I created a zone, a vnet and a subnet, and allowed DNS/DHCP through on the vnet. The LXC starts, gets an IP from the pve IPAM, and works.
Where I keep having issues is outgoing connectivity from the container to the internet. Earlier I got it to work, but after a reboot of the host it was gone again, and I can't see any pattern in why it worked before and doesn't now.
So I'm wondering - do I need to manually enable ip_forward and proxy_arp on vmbr0? Am I missing something somewhere?

ICMP seems to always work:
code_language.shell:
# ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=58 time=4.51 ms
^C
--- 1.1.1.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 4.511/4.511/4.511/0.000 ms
root@proxy:~# ping cloudflare.com
PING cloudflare.com (104.16.132.229) 56(84) bytes of data.
64 bytes from 104.16.132.229 (104.16.132.229): icmp_seq=1 ttl=58 time=4.37 ms
64 bytes from 104.16.132.229 (104.16.132.229): icmp_seq=2 ttl=58 time=4.43 ms
^C
--- cloudflare.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 4.373/4.403/4.434/0.030 ms
root@proxy:~# curl -vv cloudflare.com
* Trying 104.16.132.229:80...
* Trying [2606:4700::6810:84e5]:80...
* Immediate connect fail for 2606:4700::6810:84e5: Network is unreachable
* Trying [2606:4700::6810:85e5]:80...
* Immediate connect fail for 2606:4700::6810:85e5: Network is unreachable
^C
root@proxy:~# curl -vv4 cloudflare.com
* Trying 104.16.132.229:80...
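For what it's worth, this is how I've been checking whether forwarding, proxy_arp and the SNAT rule are actually in place after a reboot (assuming the interface and subnet names from my config below):
code_language.shell:
# Runtime sysctl values - both should be 1 for traffic from the vnet to leave via vmbr0
sysctl net.ipv4.ip_forward net.ipv4.conf.vmbr0.proxy_arp
# NAT table - there should be an SNAT/MASQUERADE rule for 10.0.0.0/24 somewhere in here
iptables -t nat -S POSTROUTING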
I am not using nftables.
This is my /etc/network/interfaces file (which doesn't seem to work). The DNAT rules are there to forward some traffic straight into an LXC - that part works.
code_language.shell:
$ cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface ens3 inet manual

auto vmbr0
iface vmbr0 inet static
    address <my-ip>/22
    gateway <my-gw>
    bridge-ports ens3
    bridge-stp off
    bridge-fd 0
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up echo 1 > /proc/sys/net/ipv4/conf/vmbr0/proxy_arp
    post-up iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 10.0.0.100
    post-up iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 10.0.0.100
    post-down iptables -t nat -D PREROUTING -p tcp --dport 80 -j DNAT --to-destination 10.0.0.100
    post-down iptables -t nat -D PREROUTING -p tcp --dport 443 -j DNAT --to-destination 10.0.0.100

iface vmbr0 inet6 static
    address <my-ipv6>/128
    gateway fe80::1

source /etc/network/interfaces.d/*
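One thing I've been considering is taking the two sysctls out of the post-up lines and putting them in a sysctl.d drop-in, so they get applied at boot independently of ifupdown ordering. A minimal sketch of what I mean (the file name is just something I picked):
code_language.shell:
# Hypothetical drop-in; systemd-sysctl reads /etc/sysctl.d/*.conf at boot
cat <<'EOF' > /etc/sysctl.d/99-forwarding.conf
net.ipv4.ip_forward = 1
net.ipv4.conf.vmbr0.proxy_arp = 1
EOF
# Apply now without a reboot
sysctl --system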
My SDN config (zone, vnet, subnet):
code_language.shell:
$ cat /etc/pve/sdn/zones.cfg
simple: local
    dhcp dnsmasq
    ipam pve

$ cat /etc/pve/sdn/vnets.cfg
vnet: vnet0
    zone local

$ cat /etc/pve/sdn/subnets.cfg
subnet: local-10.0.0.0-24
    vnet vnet0
    dhcp-range start-address=10.0.0.100,end-address=10.0.0.200
    gateway 10.0.0.1
    snat 1
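From inside the container, this is roughly how I've been trying to narrow it down to DNS vs. plain TCP/SNAT (10.0.0.1 being the subnet gateway above; dig and nc aren't in every template, so that's an assumption):
code_language.shell:
# Default route should point at the SDN gateway
ip route show default
# DNS via the dnsmasq instance on the gateway
dig +short cloudflare.com @10.0.0.1
# Plain TCP to a public IP, taking DNS out of the picture
nc -vz -w 3 1.1.1.1 443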
My firewall config:
code_language.shell:
$ cat /etc/pve/firewall/cluster.fw
[OPTIONS]
enable: 1
[RULES]
IN DNS(ACCEPT) -i vnet0 -dest +sdn/vnet0-gateway -log nolog
IN DHCPfwd(ACCEPT) -i vnet0 -log nolog
IN ACCEPT -i wg0 -p icmp -log nolog -icmp-type any
IN ACCEPT -i wg0 -p tcp -dport 8006 -log nolog
IN SSH(ACCEPT) -i wg0 -log nolog
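And since the PVE firewall is enabled, I've also been watching the FORWARD chain and the firewall log while curling from the container, to rule out forwarded traffic being dropped there (just my own diagnostic, not from the wiki):
code_language.shell:
# Per-rule packet counters for forwarded traffic; rising drops while curl hangs would point at the firewall
iptables -L FORWARD -v -n --line-numbers
# pve-firewall's log, in case dropped packets are being logged
tail -f /var/log/pve-firewall.log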