Issue with outbound traffic on VM

MarkKnaap
Jan 2, 2019
Hi,

Being a Proxmox newbie, I realize this may be a no-brainer to the forum members, but anyhow.

I have activated the datacenter FW with its default settings: in DROP, out ACCEPT.

Next I activated the FW on one of the VMs, with its default settings: in DROP, out ACCEPT. I set up inbound rules, e.g. 3000 ACCEPT and 2055 ACCEPT (yes, this is ntopng). This works fine for the local network and the other home network (I had set aliases with CIDR notation at the datacenter level and used those to define the inbound rules).

However, all outbound traffic seems to be blocked: I cannot ping another machine on the subnet, apt-get upgrade gets nowhere, and DNS gets stuck, as ntop is not resolving IPs on the web console.

Switching off the VM FW does not help; only switching off the FW at the datacenter level does, but that is to be expected, as then there is no FW functionality at all.

Being naive, I thought out ACCEPT would allow any outbound connection from the VM, especially with the datacenter also having an out ACCEPT setting.

Obviously I am doing something wrong here. To be honest, I have not touched the cluster FW, but that does not seem to have a default in DROP / out ACCEPT rule, or should it?

Thx. Regs, Mark
 

An analysis of the firewall configuration files was requested; they live under:
/etc/pve/firewall/*
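
For anyone following along, these files can be listed and dumped from the host shell (a quick sketch):

Code:
# List the PVE firewall configs: cluster.fw plus one <vmid>.fw per guest
ls -l /etc/pve/firewall/

# Dump them all at once
for f in /etc/pve/firewall/*; do echo "== $f =="; cat "$f"; done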
 
There are two files in the /etc/pve/firewall directory: cluster.fw and 300.fw

This is the content of cluster.fw

[OPTIONS]

enable: 1

[ALIASES]

home_network 192.168.x.x/24 # Everything connected to the home 'core' router subnet
local_network 192.168.x.x/24 # Local network the PVE host is joined to

[RULES]

IN ACCEPT -source home_network -p tcp -dport 8006

And this is the content of 300.fw, which is the VM running Debian and ntopng:

[OPTIONS]

log_level_in: warning
log_level_out: warning
ipfilter: 1
policy_in: DROP
enable: 1

[ALIASES]

WRT54GL 192.168.x.x # DDWRT router sending rflow

[RULES]

IN ACCEPT -source wrt54gl -p udp -dport 2055 # Allow incoming netflow (udp) from local router to collection port 2055
IN ACCEPT -source local_network -p tcp -dport 3000 # Allow access local network to ntop web console at port 3000
IN ACCEPT -source home_network -p tcp -dport 3000 # Allow access home network to ntop web console at port 3000
|OUT Ping(ACCEPT)

As you can see, I did not specify any OUT rules, assuming this would be taken care of by the default setting, which says OUT ACCEPT on both the DC FW and the 300 FW.


Incoming traffic on the specified ports works. No outbound traffic works at all, e.g. DNS queries, ping, or apt-get update.

I did try adding a ping rule, but that failed, so it is disabled (the leading '|') here.
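
For reference, the enabled form of that rule is just the same line without the leading '|', which is what marks a rule as disabled in the .fw files:

Code:
[RULES]

# Enabled outbound ping rule using the built-in Ping macro
OUT Ping(ACCEPT)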

Mark

 
I seem to be running into a similar issue. I'm no expert at this, so it's entirely possible there are some misconfigurations in the rules.
I have a single host, with VMs and containers inside, NATed by following the corresponding section on the Network Configuration page of the wiki.
The firewall is enabled on the Data Center, VM, and Network Device.
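For context, the masquerading setup from that wiki section looks roughly like this in /etc/network/interfaces (a sketch; the bridge names and the 172.16.1.0/24 subnet are assumptions based on my IPSETs below):

Code:
auto vmbr1
iface vmbr1 inet static
    address 172.16.1.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    # NAT guest traffic out through the uplink bridge
    post-up   iptables -t nat -A POSTROUTING -s '172.16.1.0/24' -o vmbr0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '172.16.1.0/24' -o vmbr0 -j MASQUERADE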
My cluster.fw reads as follows:
Code:
[OPTIONS]

enable: 1
policy_in: DROP

[IPSET extern]

172.16.2.0/24

[IPSET intern]

172.16.1.0/24

[IPSET testing]

172.16.3.0/24

[group databases] # allows connections to known database ports

IN ACCEPT -p tcp -dport 27017 # MongoDB
IN MySQL(ACCEPT)
IN PostgreSQL(ACCEPT)

[group external] # forbids connections from internal/testing ipsets

IN DROP -source +testing -p icmp
IN DROP -source +intern -p icmp
IN DROP -source +testing -p udp
IN DROP -source +intern -p udp
IN DROP -source +testing -p tcp
IN DROP -source +intern -p tcp

[group internal] # forbids connections from external/testing ipset

IN DROP -source +testing -p icmp
IN DROP -source +extern -p icmp
IN DROP -source +testing -p udp
IN DROP -source +extern -p udp
IN DROP -source +testing -p tcp
IN DROP -source +extern -p tcp

[group management] # allows connections to management ports

IN ACCEPT -p icmp
IN SSH(ACCEPT)

My 152.fw (one of the VMs) looks like this:
Code:
[OPTIONS]

enable: 1
log_level_in: debug
log_level_out: debug

[RULES]

GROUP internal
GROUP management

With this setup, VMs can ping each other but have no access to the internet. Even modifying the 152.fw like this does not help:
Code:
[OPTIONS]

policy_in: ACCEPT
enable: 1
log_level_in: warning
log_level_out: warning

[RULES]

OUT ACCEPT
IN ACCEPT

The only way to allow the VMs to access the internet is to disable the firewall on the network interface; disabling it in Firewall -> Options has no effect.

In addition to this, I get no firewall logging at all on the VM page.

The same holds true for all my LXC containers.
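
In case it helps with debugging, the host-side firewall state can be inspected like this (a sketch; the interface name is just an example for VM 152):

Code:
# Is the firewall daemon running with rules loaded?
pve-firewall status

# Show the ruleset pve-firewall generates from the .fw files
pve-firewall compile

# Inspect the live chains for the guest's interface
iptables-save | grep tap152i0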

Edit:
I also don't think it behaved like this before my most recent upgrade. Here is the apt log of that upgrade:
Code:
Start-Date: 2019-01-25  14:15:35
Commandline: apt upgrade
Install: pve-kernel-4.15.18-10-pve:amd64 (4.15.18-32, automatic)
Upgrade: libapt-inst2.0:amd64 (1.4.8, 1.4.9), libpve-storage-perl:amd64 (5.0-33, 5.0-36), libsystemd0:amd64 (232-25+deb9u6, 232-25+deb9u8), apt:amd64 (1.4.8, 1.4.9), libarchive13:amd64 (3.2.2-2, 3.2.2-2+deb9u1), pve-ha-manager:amd64 (2.0-5, 2.0-6), pve-firewall:amd64 (3.0-16, 3.0-17), udev:amd64 (232-25+deb9u6, 232-25+deb9u8), pve-container:amd64 (2.0-31, 2.0-33), libapt-pkg5.0:amd64 (1.4.8, 1.4.9), pve-cluster:amd64 (5.0-31, 5.0-33), libudev1:amd64 (232-25+deb9u6, 232-25+deb9u8), librados2-perl:amd64 (1.0-5, 1.0-6), pve-xtermjs:amd64 (1.0-5, 3.10.1-1), pve-manager:amd64 (5.3-5, 5.3-8), libpve-guest-common-perl:amd64 (2.0-18, 2.0-19), systemd-sysv:amd64 (232-25+deb9u6, 232-25+deb9u8), libpam-systemd:amd64 (232-25+deb9u6, 232-25+deb9u8), lxc-pve:amd64 (3.0.2+pve1-5, 3.1.0-2), systemd:amd64 (232-25+deb9u6, 232-25+deb9u8), qemu-server:amd64 (5.0-43, 5.0-45), apt-utils:amd64 (1.4.8, 1.4.9), pve-kernel-4.15:amd64 (5.2-12, 5.3-1), apt-transport-https:amd64 (1.4.8, 1.4.9), libssl1.0.2:amd64 (1.0.2l-2+deb9u3, 1.0.2q-1~deb9u1), base-files:amd64 (9.9+deb9u6, 9.9+deb9u7), tzdata:amd64 (2018g-0+deb9u1, 2018i-0+deb9u1)
End-Date: 2019-01-25  14:17:56
 
I had a problem specifically with outbound traffic from a VM as well. It turned out that at the VM level, under firewall settings, I had "IP Filtering" enabled. I disabled this and my egress traffic started flowing again.

From the docs:

Standard IP set ipfilter-net*
These filters belong to a VM’s network interface and are mainly used to prevent IP spoofing. If such a set exists for an interface then any outgoing traffic with a source IP not matching its interface’s corresponding ipfilter set will be dropped.

For containers with configured IP addresses these sets, if they exist (or are activated via the general IP Filter option in the VM’s firewall’s options tab), implicitly contain the associated IP addresses.

For both virtual machines and containers they also implicitly contain the standard MAC-derived IPv6 link-local address in order to allow the neighbor discovery protocol to work.
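
So if you want to keep IP filtering enabled instead of switching it off, defining the set explicitly in the guest's .fw file should do it (a sketch; the address is a placeholder for the VM's real one):

Code:
[IPSET ipfilter-net0] # source addresses allowed out of net0

192.168.1.50 # placeholder: the VM's actual IP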
 
Sorry for sort of necro-posting, but I have the very same issue. Did you manage to somehow "fix" this or come to any conclusions?
 
One thing I just noticed with PVE 6 (latest version, upgraded from PVE 5): on container creation, if "MAC address" is left at auto, a MAC is generated when the container runs, but once you activate the firewall on that container, it has no connection to the outside world even if you define OUT rules; only the defined IN rules work. Once you put a fixed MAC address in the network settings page, it works. I think it's a bug or something...
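
The MAC can also be pinned from the CLI instead of the GUI (a sketch; the vmid, bridge, MAC, and IP values below are examples):

Code:
# Container: set a fixed hwaddr on net0 (re-specify the other net0 keys too)
pct set 101 -net0 name=eth0,bridge=vmbr0,hwaddr=DE:AD:BE:EF:00:01,firewall=1,ip=dhcp

# VM equivalent
qm set 101 -net0 virtio=DE:AD:BE:EF:00:02,bridge=vmbr0,firewall=1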
 
I can confirm the bug @LMC described in the post above; I had the same issue. @Richard @dietmar, this might sound like a trivial thing, but it is not, as I could not use my FW on the latest version until I googled the solution. Thanks
 
Confirming the same bug.

When the MAC is set to auto, the IPset rules do not work because the MAC address rotates every 15 seconds or so.
See the output below; the commands were run 5 seconds apart for a single VM on the server.
Once you manually set the MAC to a fixed value, the IPset rules start to work for both outbound and inbound filters.

Code:
root@p-9-it:~# ebtables -L | grep 50:
-s ! 50:5e:b7:57:bf:3d -j DROP
root@p-9-it:~# ebtables -L | grep 50:
-s ! 50:5e:b7:57:bf:3d -j DROP
root@p-9-it:~# ebtables -L | grep 50:
-s ! 50:5e:b7:57:bf:3d -j DROP
root@p-9-it:~# ebtables -L | grep 50:
-s ! 50:f1:a6:e2:44:18 -j DROP
root@p-9-it:~# ebtables -L | grep 50:
-s ! 50:f1:a6:e2:44:18 -j DROP
root@p-9-it:~# ebtables -L | grep 50:
-s ! 50:f1:a6:e2:44:18 -j DROP
root@p-9-it:~# ebtables -L | grep 50:
-s ! 50:f1:a6:e2:44:18 -j DROP
root@p-9-it:~# ebtables -L | grep 50:
-s ! 50:f1:a6:e2:44:18 -j DROP
root@p-9-it:~# ebtables -L | grep 50:
-s ! 50:f1:a6:e2:44:18 -j DROP
root@p-9-it:~# ebtables -L | grep 50:
-s ! 50:b6:39:6b:d4:d -j DROP
root@p-9-it:~# ebtables -L | grep 50:
-s ! 50:b6:39:6b:d4:d -j DROP
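
A quick way to watch the rotation (sketch):

Code:
# Re-list the ebtables source-MAC DROP rule every 5 seconds
watch -n 5 'ebtables -L | grep "50:"'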

Code:
Bridge chain: veth2019i0-OUT, entries: 3, policy: ACCEPT
-s ! 50:62:31:f1:bf:6b -j DROP   <--- this MAC rotates
-p ARP -j veth2019i0-OUT-ARP
-j ACCEPT

Bridge chain: veth2019i0-OUT-ARP, entries: 3, policy: ACCEPT
-p ARP --arp-ip-src 176.*.*.* -j RETURN
-p ARP --arp-ip-src 176.*.*.* -j RETURN
-j DROP

Code:
# pveversion -v
proxmox-ve: 6.0-2 (running kernel: 5.0.21-5-pve)
pve-manager: 6.0-12 (running version: 6.0-12/0a603350)
pve-kernel-helper: 6.0-12
pve-kernel-5.0: 6.0-11
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-3
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-7
libpve-guest-common-perl: 3.0-2
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.0-9
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-8
pve-cluster: 6.0-7
pve-container: 3.0-10
pve-docs: 6.0-8
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-7
pve-firmware: 3.0-4
pve-ha-manager: 3.0-3
pve-i18n: 2.0-3
pve-qemu-kvm: 4.0.1-5
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-13
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2
 
