Question about Proxmox firewall


maverickws

Hi all,

Got a couple of questions regarding Proxmox's firewall:
- The firewall is off by default. If I enable it, must I add rules for Proxmox's port 8006, or is there a built-in rule for that?
- Do I need to add extra ports for Proxmox's services?
- Do private networks, like the corosync and Ceph networks, need to be added?

Which rules should I care about the most? Thanks
 
Hi @tom thanks for your answer.
One of the things that prompted this question is that there is no "macro" for the Proxmox GUI or for SPICE, although there is one for SSH.

Also, something I didn't find clear in the docs, especially here:
13.8. Default firewall rules

Are the "management hosts" proxmox hosts?
 
Are the "management hosts" proxmox hosts?

management hosts are the hosts you use to connect to the PVE machines... you need to allow those IPs to access port 8006 so the GUI is accessible
 
Ok, but from what I understand the Proxmox API uses the default address provided by the host's network stack to communicate among hosts.
That would explain why, even with 5 private networks, I would still get timeouts on the GUI from one host to another.
The solution was to disable the network stack's preference for IPv6.

So, are there auto-added rules to ensure communication between the proxmox hosts in the cluster?

So far I have created an IPSet with the IPs from the office (both IPv4 and IPv6), created a security group, and added rules.
The first rule used the SSH macro with the office ipset as source; the second had the office ipset as source, destination empty (I'm assuming this would match all), source port empty, destination port 8006, protocol tcp.
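
Roughly, that translates into something like the following in /etc/pve/firewall/cluster.fw (a sketch from memory; the addresses and the "office" names are placeholders):

Code:
[IPSET office] # office addresses, IPv4 and IPv6 (placeholder values)
192.0.2.10
2001:db8::10

[group office]
IN SSH(ACCEPT) -source +office                # SSH macro, source = office ipset
IN ACCEPT -source +office -p tcp -dport 8006  # GUI, destination left empty = match all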

Both failed and I got locked out of Proxmox. I'm looking to recover access now, which isn't easy, as the Proxmox config is not accessible from a rescue system.
 
Well, I got my access back.
So, is it mandatory to define the source and destination?

`pve-firewall localnet` returns the IPv6 address and IPv6 public network.

The "accepting corosync traffic" part shows the 3 links with the corresponding private addresses assigned to each link, but that is only corosync traffic, not API traffic.
What is detected as "local_network" is a public IPv6 network, and it obviously isn't shared with the other Proxmox servers, since each has its own public IPv6 network.
 
you can find the code that calculates the local network here; ports 8006, 22, 5900-5999 and 3128 are allowed by default. you can also set an alias called 'local_network' to override the calculated value, or define your own ipset and rules that mimic the auto-generated ones. pve-firewall localnet will show you whether a custom alias is in effect or not.
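
as a quick sketch, the override is just an alias entry in /etc/pve/firewall/cluster.fw (the network below is only an example value):

Code:
[ALIASES]
local_network 10.10.10.0/24 # example: use this network instead of the auto-detected one

afterwards 'pve-firewall localnet' should show the user-defined value instead of the detected one.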
 
Hi @fabian thank you for your reply.

Giving me the code for calculating the local network doesn't really help me much, because, as I have mentioned before, there are serious issues with Proxmox's network stack implementation, mainly that it is left to chance which network/source address will be used for Proxmox API communication.
I have my Proxmox hosts connected to switches on a private network, and all the Proxmox communication should go through there, or through explicitly defined networks.

And my question now given your answer is:

If I create an IPSet called local_network, can I create it cluster-wide and add both the IPv4 and IPv6 addresses of all the Proxmox hosts there?
And what kinds of traffic does local_network matter for, exactly? API traffic? Because I'm assuming, and expecting, that corosync traffic goes through the links defined as link0, link1 and link2, and that corosync traffic uses ports 5404 and 5406. I did not see any mention of those in the ports allowed by default.

So if my links are link0, link1 and link2 for corosync traffic, and ports 5404 and 5406 are not open by default, will I have to add rules to allow the hosts in the cluster to talk to each other?

Code:
ruleset_addrule($ruleset, $chain, "-d $localnet -p tcp --dport 8006", "-j $accept_action");  # PVE API
ruleset_addrule($ruleset, $chain, "-d $localnet -p tcp --dport 22", "-j $accept_action");  # SSH

This fails miserably as described here: Connectivity lost between proxmox hosts
 
Hi @fabian thank you for your reply.

Giving me the code for calculating the local network doesn't really help me much, because, as I have mentioned before, there are serious issues with Proxmox's network stack implementation, mainly that it is left to chance which network/source address will be used for Proxmox API communication.
I have my Proxmox hosts connected to switches on a private network, and all the Proxmox communication should go through there, or through explicitly defined networks.

I don't see how the communication is 'left to chance'. pveproxy listens on all interfaces, and connects to other nodes via the IP address those nodes determined on pve-cluster startup by resolving their own hostname. unless your hostnames resolve randomly, this is pretty deterministic.

corosync communication is completely independent from API communication, and will use the IPs/hostnames that you set up in corosync.conf

SSH/plain-text migration traffic will go over the default network used for API connections, unless you specify an override (either globally in /etc/pve/datacenter.cfg, or locally for a single command).
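
as a rough sketch (example network and VM ID, not a verbatim recommendation):

Code:
# global override in /etc/pve/datacenter.cfg:
migration: secure,network=10.10.10.0/24

# or per command:
qm migrate 100 proxmox-02 --online --migration_network 10.10.10.0/24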

If I create an IPSet called local_network, can I create it cluster-wide and add both the IPv4 and IPv6 addresses of all the Proxmox hosts there?
And what kinds of traffic does local_network matter for, exactly? API traffic? Because I'm assuming, and expecting, that corosync traffic goes through the links defined as link0, link1 and link2, and that corosync traffic uses ports 5404 and 5406. I did not see any mention of those in the ports allowed by default.
local_network is only used for the intra-cluster API communication:
- pveproxy proxying from one node to another
- spiceproxy/vncproxy proxying from one node to another
- SSH tunneling from one node to another (e.g., for migration)
- plain-text migration tunnelling from one node to another

if you use a separate network for the latter two, you need to define your own rules there.

note that local_network is an alias, not an ipset. if you need an ipset, use a different name and define your own rules opening up the ports I mentioned.
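
a rough sketch of such an ipset plus rules in /etc/pve/firewall/cluster.fw, mirroring the auto-generated ones (the ipset name and addresses are only examples):

Code:
[IPSET cluster_nodes] # all node addresses, IPv4 and IPv6 (example values)
10.10.10.1
10.10.10.2
2001:db8::1
2001:db8::2

[RULES]
IN ACCEPT -source +cluster_nodes -p tcp -dport 8006      # PVE API / GUI
IN ACCEPT -source +cluster_nodes -p tcp -dport 22        # SSH
IN ACCEPT -source +cluster_nodes -p tcp -dport 5900:5999 # VNC console
IN ACCEPT -source +cluster_nodes -p tcp -dport 3128      # SPICE proxy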

So if my links are link0, link1 and link2 for corosync traffic, and ports 5404 and 5406 are not open by default, will I have to add rules to allow the hosts in the cluster to talk to each other?

Code:
ruleset_addrule($ruleset, $chain, "-d $localnet -p tcp --dport 8006", "-j $accept_action");  # PVE API
ruleset_addrule($ruleset, $chain, "-d $localnet -p tcp --dport 22", "-j $accept_action");  # SSH

This fails miserably as described here: Connectivity lost between proxmox hosts

for corosync, the firewall simply parses the corosync config and generates rules accordingly (again, 'pve-firewall localnet' shows you these rules).
 
Hello again fabian,

Let me quote old topics and interactions:

hmm not sure why it's picking up the ipv6 address.

temporarily commenting the ipv6 entry in /etc/hosts is also worth a shot (however i wasn't able to reproduce your problem)

if that doesn't change anything you can try to edit it manually to use the ipv4 after pasting the join info for now.

"I'm not sure why it's picking the IPv6 address."
"temporarily commenting"
"edit it manually"

Oh, and the preference for IPv6 comes from RFC 3484, which defines the default ordering for getaddrinfo calls.

You could also edit /etc/gai.conf and add (or uncomment) the line
precedence ::ffff:0:0/96 100

And restart pve-cluster afterwards. Sorry for the confusion here, we mostly use getaddrinfo_all nowadays, where this doesn't matter.
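
For context, that suggested workaround boils down to something like this (just a sketch, assuming a systemd-based host for the restart):

Code:
# prefer IPv4(-mapped) results from getaddrinfo (RFC 3484 policy table)
echo 'precedence ::ffff:0:0/96  100' >> /etc/gai.conf
# make the cluster service pick up the change
systemctl restart pve-cluster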

As you can see from these answers and related issues, getaddrinfo_all and taking preferences from RFC 3484 (AS IT IS SUPPOSED TO!) is fine for general-purpose apps, but not for an app like Proxmox that requires precise control over communication between its member hosts. The Proxmox team's solution is to hammer around it, but that isn't a proper solution, not for me.
Actually, the fact that I can't simply connect my Proxmox cluster to our IDM servers in a simple and easy manner already says a lot, and the only LDAP implementation is unauthenticated bind, because ... lol. Everything else, be it GitLab, email servers, firewalls, OpenShift, the workstations, gets connected to the servers that control users, roles, hosts, etc., but the Proxmox geniuses who do everything so well say Google uses Debian (like Apple uses FreeBSD, right?), so everything is always perfectly implemented. (And apt is the best package manager out there, extremely advanced; I could live 1000 years and I would never forget that one.)

For the join, the IP from a single getaddrinfo call is used, as that is the one the system admin prefers as the public destination address for this node, and it can be managed using gai.conf.
But this is only a recommendation to have one preselected without forcing the admin to always enter a specific IP; most of the time that works out, as it's really only used for doing the API call to exchange the info required for the join.

The last statement here is wrong. It is not only used for the API call to exchange the info required to join; it is used for many, many things.

I don't really have all the time in the world to go and collect the contradictory information provided by Proxmox team members in this forum (that'd be a full-time job), but if the address is picked from getaddrinfo calls then it is not defined by the admin, and that is a volatile, poor implementation, as can be seen by the number of issues that come up due to THIS in particular. My first issue was due to an upstream IPv6 problem, where it would try to use the IPv6 address returned by the getaddrinfo call, with no fallback mechanism and no admin control over which network to use for API traffic between hosts.

At the end of the day, what stands is: the automatic rules don't work and the documentation is poor. You can have a properly working cluster, add an IPSet called management (which, as described, is for remote access), and when you enable the firewall the Proxmox nodes can't communicate with each other across the cluster.

I also get that some people who think apt is state-of-the-art package management haven't used IPv6 much (which explains why some wouldn't know how it picks an IPv6 address somewhere), and don't know how important it is to connect these hosts to central identity management. Proxmox is really good at one thing: KVM virtualisation. Lightweight, good performance. Other than that, most things don't work well. And what works, works because the underlying technologies work, or because you made them work (they are supposed to work when they're developed and released). Other than that... it's just things that mirror the people who said much of the above.

local_network is only used for the intra-cluster API communication:
- pveproxy proxying from one node to another
- spiceproxy/vncproxy proxying from one node to another
- SSH tunneling from one node to another (e.g., for migration)
- plain-text migration tunnelling from one node to another

out of the box: it does not work.

it did not work for intra-cluster API communication,
it did not work for proxying anything from one node to another,
it did not work for SSH tunnelling from one node to another

Code:
if ($localnet && ($ipversion == $localnet_ver)) {
    ruleset_addrule($ruleset, $chain, "-d $localnet -p tcp --dport 8006", "-j $accept_action");  # PVE API
    ruleset_addrule($ruleset, $chain, "-d $localnet -p tcp --dport 22", "-j $accept_action");  # SSH
    ruleset_addrule($ruleset, $chain, "-d $localnet -p tcp --dport 5900:5999", "-j $accept_action");  # PVE VNC Console
    ruleset_addrule($ruleset, $chain, "-d $localnet -p tcp --dport 3128", "-j $accept_action");  # SPICE Proxy
}

Code:
root@promox-01 ~ # iptables-save | grep 8006
-A PVEFW-HOST-IN -p tcp -m set --match-set PVEFW-0-management-v4 src -m tcp --dport 8006 -j RETURN
root@promox-01 ~ # iptables-save | grep 3128
-A PVEFW-HOST-IN -p tcp -m set --match-set PVEFW-0-management-v4 src -m tcp --dport 3128 -j RETURN
root@proxmox-01 ~# iptables-save | grep 22
53:-A INPUT -p tcp -m multiport --dports 22 -j f2b-sshd
73:-A PVEFW-DropBroadcast -d 224.0.0.0/4 -j DROP
91:-A PVEFW-HOST-IN -p tcp -m set --match-set PVEFW-0-management-v4 src -m tcp --dport 22 -j RETURN
130:-A PVEFW-reject -s 224.0.0.0/4 -j DROP
141:-A PVEFW-smurfs -s 224.0.0.0/4 -g PVEFW-smurflog

- management is described in the documentation as the remote-access hosts, not the Proxmox nodes talking among each other; since the Proxmox API comms break when I activate the firewall, that seems like a clear issue to me right here;
- v4? but you use getaddrinfo, which looks at ai_family, which can be AF_INET or AF_INET6.

Code:
root@proxmox-01 ~ # pve-firewall localnet
local hostname: proxmox-01
local IP address: 2a01:xxxx:xxxx:xxxx::1
network auto detect: 2a01:xxxx:xxxx:xxxx:0000:0000:0000:0000/64
using detected local_network: 2a01:xxxx:xxxx:xxxx:0000:0000:0000:0000/64

accepting corosync traffic from/to:
- proxmox-02: 10.xxx.xxx.2 (link: 0)
- proxmox-02: 10.xxx.xxx.2 (link: 1)
- proxmox-02: 10.xxx.xxx.2 (link: 2)

By the way, where are the Ceph rules?
 
Long post, with quite a few wrong facts. I will not go through all of it, as most of it is outside the focus of this forum.

Please ask simple questions in short posts and the chance of getting answers will be high.
 
I have asked; the provided answers are not correct.

When I enable the Proxmox firewall, the communication between the nodes is broken (API, HTTP/S port 8006), because getaddrinfo picks IPv6 addressing, as can be seen in the output posted above, and no rules ensure the communication among them.
Also, the documentation describes the "management" ipset as remote access for management purposes, not for communication between cluster nodes.

But please do correct me, and anyone who reads this, because knowledge should be shared. You mentioned you "will not go through all"; well, at least go through some. PLEASE PLEASE! :) I'm eager to learn. Thank you.
 
I have asked; the provided answers are not correct.

Based on my experience, the long-term core devs know their own code better than new forum members, and I am pretty sure the answers are correct.

If something does not work for you as described, your network setup is probably a special one and different from the default. Find out what it is.
 
Your answer matched my expectations, a handful of nothing, as I mentioned before:
"Other than that... it's just things that mirror the people who said much of the above."

Next time, at least have some knowledge to share, pinpoint something, and give better insight into it.

“The difference between greatness and mediocrity is often how an individual views a mistake.”
- Nelson Boswell
 
Sorry, but this style of discussion is not wanted here; I am out.
 