[SOLVED] using wifi instead of ethernet

Hello!

So you just run the command:
Bash:
ip a

It will show an output like this:
Bash:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:28:fd:4c brd ff:ff:ff:ff:ff:ff
    inet 192.168.50.2/24 brd 192.168.50.255 scope global eth0
    inet6 fe80::20c:29ff:fe28:fd4c/64 scope link
       valid_lft forever preferred_lft forever

The string right after the colon (you may have multiple interfaces, listed as 1:, 2:, etc.) is your identifier. Wireless interfaces are usually named wlpXXX or wlanXXX, but the numbers at the end may vary based on your config.
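If the list is long, a quick way to pick out just the wireless interfaces is to check sysfs, since every wireless device exposes a `wireless` directory there (a small sketch; the names it prints will differ per machine):

```shell
# Print the name of every interface the kernel considers wireless
for dev in /sys/class/net/*/wireless; do
    [ -e "$dev" ] || continue          # the glob may match nothing
    basename "$(dirname "$dev")"       # e.g. wlp1s0
done
```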

Note: I just want to let everybody know that the config I posted before does work and makes Proxmox accessible over WiFi, but it breaks the VMs' connection to the bridge / internet. I still have to find some time to look into fixing it.

Thanks for your prompt reply.

The problem is that my NUC has a WiFi network card, but it is not seen by Proxmox.

From the lspci command I get: 01:00.0 Network controller: Intel Corporation Wireless 7265 (rev 59)

But no active WiFi network card is listed when I run the command ip a:
3: wlp1s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 74:d8:3e:74:69:04 brd ff:ff:ff:ff:ff:ff


The wlp1s0 interface is listed as DOWN.

I cannot tell whether the built-in WiFi card is the one labelled wlp1s0 but inactive, or whether there is another card that is not listed.

I understand you have not yet reached the end of the road on accessing WiFi from the VMs, but I'll gladly piggyback on your good results.
 
I had a few NUCs and that's definitely it. You just have to configure it! Keeping in mind that your id is wlp1s0, you can follow this Debian Wiki: https://wiki.debian.org/WiFi/HowToUse#iwd and use IWD to configure it (scanning for networks, etc.). It will then come UP, and from there you can do all the fiddling I did to get to where I am still stuck. I think with some clever iptables the bridging will work, but again, no time to try for now. :)
 
Hi, after a few attempts I managed to get Proxmox working on an old laptop for testing, which only has a WiFi network device, so I thought I'd post the details of my setup.

Here is the /etc/network/interfaces configured on my Proxmox server. Check that you have internet connectivity on the Proxmox server first, as you'll need to install some packages like wpasupplicant.

Code:
auto lo
iface lo inet loopback

# Wireless interface
allow-hotplug wlp1s0
iface wlp1s0 inet static
    address 192.168.3.101/24
    gateway 192.168.3.1
    wpa-ssid ***
    wpa-psk ***

# Virtual Bridge interface
auto vmbr0
iface vmbr0 inet static
    address 192.168.200.1/24 # IP assigned to virtual bridge on Proxmox server, gateway for VMs / Containers
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    # Enable ip forwarding
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    # Route all traffic from VMs / Container through the wireless interface hiding its internal IP
    post-up iptables -t nat -A POSTROUTING -s '192.168.200.0/24' -o wlp1s0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '192.168.200.0/24' -o wlp1s0 -j MASQUERADE
    # I needed this otherwise my packets wouldn't reach other computers/router on my network
    post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
    post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
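A side note on the `post-up echo 1 > /proc/sys/net/ipv4/ip_forward` line (this is my own addition, not part of the config above): if you want forwarding enabled regardless of whether the bridge's post-up hook runs, you can also persist it as a sysctl setting; the file name below is an arbitrary choice:

```
# /etc/sysctl.d/99-ip-forward.conf (hypothetical file name)
net.ipv4.ip_forward = 1
```

Running `sysctl --system` (or rebooting) applies it.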

The VMs or containers hosted by Proxmox use the 192.168.200.0/24 subnet with a default gateway of 192.168.200.1 (the IP assigned to Proxmox on the vmbr0 interface). Here is how the routing table looks in a VM (ip route):

Code:
default via 192.168.200.1 dev eth0 proto static metric 100  #Virtual interface eth0 on VM uses vmbr0 as gateway
192.168.200.0/24 dev eth0 proto kernel scope link src 192.168.200.11 metric 100
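For completeness, the matching guest-side setup is just an ordinary static address pointing at the bridge IP. A minimal sketch for a Debian-style guest using ifupdown, assuming the guest NIC appears as eth0 and reusing the upstream router 192.168.3.1 as DNS (both assumptions, not from the post):

```
auto eth0
iface eth0 inet static
    address 192.168.200.11/24
    # Gateway is the Proxmox host's vmbr0 address
    gateway 192.168.200.1
    # DNS must point at something reachable through the NAT,
    # e.g. the upstream router or a public resolver
    dns-nameservers 192.168.3.1
```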

The VMs can be exposed to the network either by creating a static route on your router or, if you don't have access to modify routes for the network, by using NAT and redirecting traffic from a port on the host to an internal VM / container.

Creating a static route to expose your VMs/Containers:

Code:
sudo ip route add 192.168.200.0/24 via 192.168.3.101 dev wlo1 # Route added on another computer or router, so it knows how to reach the 192.168.200.0/24 subnet

The other option, NAT, can be set up with iptables on the Proxmox server's interfaces; you could add these lines to the bridge config after the others:

Code:
post-up iptables -t nat -A PREROUTING -p tcp --dport 1234 -j DNAT --to-destination 192.168.200.11:1234 # Redirect port 1234 on the Proxmox server to guest VM 192.168.200.11 on the same port

Hope you find it useful, as searching for this issue always redirected me here, and going by the thread history there was some confusion in the docs about getting this working.
 
Thank you for this in-depth explanation. I believe someone already mentioned something similar in passing earlier, in this thread or another.
Wasn't the issue with this approach that it would not work when the laptop is connected to another network, like in a hotel?
 
I think it's unlikely you'd carry the Proxmox laptop around in addition to your everyday laptop; if you install Proxmox, I'd imagine it's to build a homelab or similar. In that case, if I needed a VM/container on the go, I would just use virt-manager.

But even if you want to do so, you can just change the wireless interface config to DHCP and the rest should stay the same, since the iptables rules use MASQUERADE: you only have to specify the interface, with no need for static IPs as with SNAT.

Code:
# Wireless interface
allow-hotplug wlp1s0
iface wlp1s0 inet dhcp
    wpa-ssid ***
    wpa-psk ***
 
For anyone coming here wondering, as I was, why the guest is not getting a DHCP address...

When you do not bridge your WiFi device (due to the aforementioned limitations and lack of general support for WDS/4addr), your internal VMs cannot reach your router's DHCP server, and Proxmox does not run DHCP on `vmbr0`. You can either set up DHCP on `vmbr0` or just assign static IPs in each VM's config.
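If you prefer the DHCP route, a small dnsmasq instance bound to the bridge is one way to do it. This is a sketch under my own assumptions, not something configured in the posts above; the range and file name are arbitrary:

```
# /etc/dnsmasq.d/vmbr0.conf (hypothetical)
interface=vmbr0
bind-interfaces
# Hand out addresses next to the statically assigned ones;
# the gateway option points clients at the bridge IP
dhcp-range=10.0.0.100,10.0.0.200,12h
dhcp-option=option:router,10.0.0.1
```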

For the record, I chose static IPs. My first VM gets `10.0.0.2` and so on. I then decided, for consistency's sake, to set iptables rules forwarding ports in the `2000` range to my first VM:

Code:
# Docker 10.0.0.2 gets ports 2000-2999
post-up iptables -t nat -A PREROUTING -p tcp --dport 2022 -j DNAT --to-destination 10.0.0.2:22
# ...and add more as needed...
# Clearly I don't need 1000 ports per host but it is a good consistent convention for me who has <10 VMs
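Since the convention is just host port = 2000 + guest port, the stanzas can be generated rather than typed by hand. A throwaway sketch of mine (the VM IP and port list are only examples); it prints the post-up/post-down pairs so they can be pasted into the interfaces config:

```shell
# Print a post-up/post-down DNAT pair for each forwarded guest port
vm_ip=10.0.0.2
for port in 22 80 443; do
    host_port=$((2000 + port))
    for hook in post-up post-down; do
        # -A adds the rule on the way up, -D removes it on the way down
        [ "$hook" = post-up ] && op=-A || op=-D
        echo "$hook iptables -t nat $op PREROUTING -p tcp --dport $host_port -j DNAT --to-destination ${vm_ip}:${port}"
    done
done
```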

Thank you @mingue for a working config!

Personally, I configured `iwd`:


Code:
systemctl enable iwd
iwctl # connect to your SSID


`/etc/iwd/main.conf` only contains:


Code:
[General]
EnableNetworkConfiguration=true

For the WiFi I use DHCP and assign a static IP at the DHCP server.


For paranoia, I `systemctl edit networking.service` to add:

Code:
[Unit]
After=iwd.service


It is probably unnecessary.

Lastly I did as Mingue suggested in `/etc/network/interfaces`:


Code:
auto lo
iface lo inet loopback

auto wlan0
iface wlan0 inet manual

# https://pve.proxmox.com/wiki/Network_Configuration#sysadmin_network_masquerading
# https://forum.proxmox.com/threads/using-wifi-instead-of-ethernet.56691/
auto vmbr0
iface vmbr0 inet static
    address 10.0.0.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    # Enable ip forwarding
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    # Route all traffic from VMs / Container through the wireless interface hiding its internal IP
    post-up iptables -t nat -A POSTROUTING -s '10.0.0.0/24' -o wlan0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.0.0.0/24' -o wlan0 -j MASQUERADE
    # I needed this otherwise my packets wouldn't reach other computers/router on my network
    post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
    post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
 
May we know what your routing tables look like in Proxmox and in the VMs?
 
What specifics would you like me to share? Any commands I can run?

Other than the above config, I've done nothing special to the system. VMs are configured as normal. Sharing ports is done by adding pre-up and post-down lines in the network interfaces config.

I only have 2 issues:
1. After a long time (a week?) I got a deauth from the router and it didn't auto-reconnect. I don't know why it got deauthed.
2. When I restart networking services (systemctl restart networking), the VMs become unreachable until I reboot them.
 
Can you please share your ip route output on both Proxmox and the VM(s)? I just want to have a reference and a better understanding of the expected routes.

I had success connecting my PC over WiFi before, but only once, and I messed up the routing tables on both my Proxmox host and my VMs when I tried to clean up my IP routes.
 
Last edited:
Here’s what I can think of to show. I didn’t do anything special at all in the VM:

Host:

Code:
 ⚡  ~  route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         pfSense.home    0.0.0.0         UG    304    0        0 wlan0
default         pfSense.home    0.0.0.0         UG    3004   0        0 wlan0
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 vmbr0
link-local      0.0.0.0         255.255.0.0     U     1008   0        0 fwpr100p0
link-local      0.0.0.0         255.255.0.0     U     1009   0        0 fwln100i0
link-local      0.0.0.0         255.255.0.0     U     1013   0        0 veth43f5655
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.22.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-33fee48aa92c
192.168.1.0     0.0.0.0         255.255.255.0   U     304    0        0 wlan0
192.168.1.0     0.0.0.0         255.255.255.0   U     3004   0        0 wlan0
 ⚡  ~  ifconfig
br-33fee48aa92c: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.22.0.1  netmask 255.255.0.0  broadcast 172.22.255.255
        inet6 fe80::42:1bff:fe30:4023  prefixlen 64  scopeid 0x20<link>
        ether 02:42:1b:30:40:23  txqueuelen 0  (Ethernet)
        RX packets 608  bytes 49237 (48.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 630  bytes 78886 (77.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:bb:b2:23:a0  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp2s0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether 9c:6b:00:34:c4:e7  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

fwbr100i0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether be:0a:a4:22:0c:ce  txqueuelen 1000  (Ethernet)
        RX packets 6858  bytes 2210488 (2.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

fwln100i0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 169.254.120.24  netmask 255.255.0.0  broadcast 169.254.255.255
        ether be:0a:a4:22:0c:ce  txqueuelen 1000  (Ethernet)
        RX packets 51616  bytes 408853190 (389.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 48147  bytes 11685784 (11.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

fwpr100p0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 169.254.44.149  netmask 255.255.0.0  broadcast 169.254.255.255
        ether ee:a0:53:bb:7e:d2  txqueuelen 1000  (Ethernet)
        RX packets 48147  bytes 11685784 (11.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 51616  bytes 408853190 (389.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 5156  bytes 291305 (284.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5156  bytes 291305 (284.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tap100i0: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500
        ether d2:ca:d2:de:80:56  txqueuelen 1000  (Ethernet)
        RX packets 41280  bytes 9382870 (8.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 51616  bytes 408853190 (389.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth43f5655: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 169.254.206.166  netmask 255.255.0.0  broadcast 169.254.255.255
        inet6 fe80::cca1:e0ff:fe84:353  prefixlen 64  scopeid 0x20<link>
        ether ce:a1:e0:84:03:53  txqueuelen 0  (Ethernet)
        RX packets 608  bytes 57749 (56.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7363  bytes 2376808 (2.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vmbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::747c:13ff:fe7a:1eef  prefixlen 64  scopeid 0x20<link>
        ether ee:a0:53:bb:7e:d2  txqueuelen 1000  (Ethernet)
        RX packets 48147  bytes 11011726 (10.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 44897  bytes 406557104 (387.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.42  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fdf8:9fff:3b82:ec45:e2c2:64ff:feb2:b2f4  prefixlen 128  scopeid 0x0<global>
        inet6 fdf8:9fff:3b82:ec45:82fa:def7:49c7:3f96  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::e2c2:64ff:feb2:b2f4  prefixlen 64  scopeid 0x20<link>
        ether e0:c2:64:b2:b2:f4  txqueuelen 1000  (Ethernet)
        RX packets 51959973  bytes 76727893192 (71.4 GiB)
        RX errors 0  dropped 4  overruns 0  frame 0
        TX packets 3471777  bytes 4730125991 (4.4 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

 ⚡  ~  brctl show
bridge name    bridge id        STP enabled    interfaces
br-33fee48aa92c        8000.02421b304023    no        veth43f5655
docker0        8000.0242bbb223a0    no
fwbr100i0        8000.be0aa4220cce    no        fwln100i0
                            tap100i0
vmbr0        8000.eea053bb7ed2    no        fwpr100p0
 ⚡  ~  iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
DNAT       tcp  --  anywhere             anywhere             tcp dpt:2022 to:10.0.0.2:22
DOCKER     all  --  anywhere             anywhere             ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  anywhere            !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  all  --  172.17.0.0/16        anywhere
MASQUERADE  all  --  172.22.0.0/16        anywhere
MASQUERADE  all  --  10.0.0.0/24          anywhere
MASQUERADE  tcp  --  172.22.0.2           172.22.0.2           tcp dpt:51821
MASQUERADE  udp  --  172.22.0.2           172.22.0.2           udp dpt:51820

Chain DOCKER (2 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere
DNAT       tcp  --  anywhere             anywhere             tcp dpt:51821 to:172.22.0.2:51821
DNAT       udp  --  anywhere             anywhere             udp dpt:51820 to:172.22.0.2:51820
 ⚡  ~  iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
DOCKER-USER  all  --  anywhere             anywhere
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain DOCKER (2 references)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             172.22.0.2           tcp dpt:51821
ACCEPT     udp  --  anywhere             172.22.0.2           udp dpt:51820

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere

Chain DOCKER-ISOLATION-STAGE-2 (2 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere
DROP       all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere

Chain DOCKER-USER (1 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere

In a VM (that runs Docker):

Code:
 docker  ~  route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         _gateway        0.0.0.0         UG    0      0        0 ens18
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 ens18
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
 docker  ~  ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:fa:f8:82:c1  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens18: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.2  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::be24:11ff:fe80:54ac  prefixlen 64  scopeid 0x20<link>
        ether bc:24:11:80:54:ac  txqueuelen 1000  (Ethernet)
        RX packets 51710  bytes 408865009 (389.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 41362  bytes 9395701 (8.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 233908  bytes 14008204 (13.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 233908  bytes 14008204 (13.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
 ⚡  ~  iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
nixos-fw   all  --  anywhere             anywhere

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
DOCKER-USER  all  --  anywhere             anywhere
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain DOCKER (1 references)
target     prot opt source               destination

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere

Chain DOCKER-USER (1 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere

Chain nixos-fw (1 references)
target     prot opt source               destination
nixos-fw-accept  all  --  anywhere             anywhere
nixos-fw-accept  all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
nixos-fw-accept  tcp  --  anywhere             anywhere             tcp dpt:ssh
nixos-fw-accept  udp  --  anywhere             anywhere             udp dpt:snmp
nixos-fw-accept  icmp --  anywhere             anywhere             icmp echo-request
nixos-fw-log-refuse  all  --  anywhere             anywhere

Chain nixos-fw-accept (5 references)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere

Chain nixos-fw-log-refuse (1 references)
target     prot opt source               destination
LOG        tcp  --  anywhere             anywhere             tcp flags:FIN,SYN,RST,ACK/SYN LOG level info prefix "refused connection: "
nixos-fw-refuse  all  --  anywhere             anywhere             PKTTYPE != unicast
nixos-fw-refuse  all  --  anywhere             anywhere

Chain nixos-fw-refuse (2 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere
 ⚡  ~  iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  anywhere             anywhere             ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  anywhere            !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  all  --  172.17.0.0/16        anywhere

Chain DOCKER (2 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere
 
@Clete2 my setup started working after a couple of reboots, before I even read this. Still, highly appreciated, man. It's a great reference.

Thanks to you all, especially @mingue.
It worked!

I'm running TrueNAS SCALE in Proxmox; it gave me a challenge when I first set up this WiFi.
 