How to close open port 111

wwweiss

Well-Known Member
Apr 28, 2018
I am completely new to Proxmox Mail Gateway. Installation went like a charm; everything is up and running in a virtual machine inside Hyper-V. I am just playing around, not yet really using the system.
Only two days after my installation I got complaints from reports.cert-bund.de that this system has open port 111 (udp).
Is it necessary to have this port open? My system is not behind a firewall, so I would like to have only the necessary ports open.
Can anybody tell me how to close this port without losing needed functionality?
 
Thanks for this info.
Just for some other newbies: I disabled rpc with
# service rpcbind stop
# systemctl disable rpcbind
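Note: on recent Debian-based installs rpcbind is socket-activated, so the socket unit likely needs to be disabled as well, otherwise systemd itself keeps listening on port 111 (a minimal sketch):
# systemctl disable --now rpcbind.socket rpcbind.service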

I would suggest disabling this by default in the Proxmox ISO.
 
Got the same report just a few days after setting up the system. Sure, it's possible to disable services that are not required, but maybe it's better to have a firewall on the system itself (RPC seems to be the only enabled service that is not bound to localhost and not actually required). I used:

# apt-get install ufw
# ufw default deny incoming
# ufw default allow outgoing
# ufw allow ssh
# ufw allow smtp
# ufw allow 8006
# ufw enable

Much better still: I set up OpenVPN later on and also closed ports 8006 and SSH to the whole internet, limiting them to my VPN connection.
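In case it helps someone, that restriction can be expressed with ufw roughly like this (a sketch - 10.8.0.0/24 is only a placeholder for the VPN subnet, and 1194/udp is OpenVPN's default port):

# ufw allow 1194/udp
# ufw delete allow ssh
# ufw delete allow 8006
# ufw allow from 10.8.0.0/24 to any port 22 proto tcp
# ufw allow from 10.8.0.0/24 to any port 8006 proto tcp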
 
Can one of these two changes be included in a future PMG release?

  • Disable rpcbind by default
  • Add a configuration knob to "Administration > Services" in the GUI/API so that it can be controlled without needing to make CLI-based changes

I wanted to surface this old thread. I noticed a discrepancy when reviewing the documentation against the reality on the PMG server.

Here are the firewall settings from the documentation:
https://pmg.proxmox.com/pmg-docs/pmg-admin-guide.html#firewall_settings

Here is the output of sudo ss -tulwn | grep LISTEN:
Code:
tcp   LISTEN 0      4096         0.0.0.0:111        0.0.0.0:*         
tcp   LISTEN 0      100          0.0.0.0:25         0.0.0.0:*         
tcp   LISTEN 0      100          0.0.0.0:26         0.0.0.0:*         
tcp   LISTEN 0      128          0.0.0.0:22         0.0.0.0:*         
tcp   LISTEN 0      4096       127.0.0.1:10023      0.0.0.0:*         
tcp   LISTEN 0      4096       127.0.0.1:10022      0.0.0.0:*         
tcp   LISTEN 0      100        127.0.0.1:10025      0.0.0.0:*         
tcp   LISTEN 0      4096       127.0.0.1:10024      0.0.0.0:*         
tcp   LISTEN 0      244        127.0.0.1:5432       0.0.0.0:*         
tcp   LISTEN 0      4096       127.0.0.1:85         0.0.0.0:*         
tcp   LISTEN 0      4096            [::]:111           [::]:*         
tcp   LISTEN 0      100             [::]:25            [::]:*         
tcp   LISTEN 0      100             [::]:26            [::]:*         
tcp   LISTEN 0      128             [::]:22            [::]:*         
tcp   LISTEN 0      4096               *:8006             *:*

Here is the specific output of sudo lsof -i -P -n | grep LISTEN | grep :111
Code:
systemd       1     root   36u  IPv4  18453      0t0  TCP *:111 (LISTEN)
systemd       1     root   38u  IPv6    228      0t0  TCP *:111 (LISTEN)
rpcbind     509     _rpc    4u  IPv4  18453      0t0  TCP *:111 (LISTEN)
rpcbind     509     _rpc    6u  IPv6    228      0t0  TCP *:111 (LISTEN)
 
Here are the firewall settings from the documentation:
https://pmg.proxmox.com/pmg-docs/pmg-admin-guide.html#firewall_settings
The settings describe which ports need to be open for PMG to work (see also the 'from' and 'to' lines).

the only thing that might warrant mentioning is explicitly stating that allowing ssh (tcp/22) from an admin workstation might be a good idea.

else, apart from 111, all other ports with public listeners are shown there as well.

regarding port 111 - it should work to just remove `rpcbind` and `nfs-common` if you don't need them
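a minimal sketch of that (double-check first that nothing on the host actually needs NFS):
Code:
apt purge rpcbind nfs-common
apt autoremove --purge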
We might consider doing so in a future version, but since PMG is expected to be deployed behind a firewall (or to have iptables/nft configured on it), it's not really high priority.

I hope this explains it!
 
Understood. I am using PMG in a VM on PVE, so I noticed that there is a setting in PVE at:
Datacenter > {node} > {vm} > Firewall

I think I will take a look at enabling and configuring that to match the needs for PMG in the documentation.

Is there a quickstart that I can look at for configuring that PVE feature correctly for PMG? Also, am I correct in thinking that this is an appropriate firewall to deploy PMG behind?
 
Also, am I correct in thinking that this is an appropriate firewall to deploy PMG behind?
should work fine - you can use nmap to test that it works as expected (from an outside workstation ;)) - https://pmg.proxmox.com/pmg-docs/pmg-admin-guide.html#nmap
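a minimal check from an outside workstation could look like this (a sketch - 192.0.2.10 is just a placeholder for the PMG address, and the SYN/UDP scans need root):
Code:
nmap -sS -p 22,25,111,8006 192.0.2.10
nmap -sU -p 111 192.0.2.10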

Is there a quickstart that I can look at for configuring that PVE feature correctly for PMG?
No - not yet - potentially a page in the wiki for this might be warranted though - if you like please open an enhancement request (for pmg->Documentation) over at https://bugzilla.proxmox.com
 
Thanks for the help. Two more questions:

1) I was able to disable the service on port 111 using the following commands (slightly modified from an earlier post to use only the systemd commands). You mentioned "just remove `rpcbind, nfs-common` if you don't need it" - you mean the packages, correct? I ran `apt purge rpcbind` and this zapped both packages (a quick verification snippet is at the end of this post).
Bash:
systemctl stop rpcbind
systemctl disable rpcbind

2) I'd be happy to write a tutorial for configuring PVE firewall for PMG. Should I post a draft set of changes as a tutorial thread here on the forums for editing and feedback or just put it all on Bugzilla, or something else? I may get some aspect of it incorrect on the first try.
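A quick way to double-check the result of 1) (a minimal sketch):
Bash:
# confirm the packages are gone and nothing listens on port 111 anymore
dpkg -l rpcbind nfs-common 2>/dev/null | grep '^ii' || echo "packages removed"
ss -tulpn | grep ':111 ' || echo "port 111 closed"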
 
By disabling rpcbind ... wasn't it required for the NFS hard drive mapping?
And if it's turned off, was it only needed for auto-discovery of an NFS datastore? If it's set up manually, will it be OK, or will it just not be mounted when PVE boots?

/etc/pve/storage.cfg
Code:
nfs: nfs
    export /sda3/extradrive
    path /mnt/pve/nfs
    server 192.22.45.5
    options vers=4.1
    content images,dump
    nodes pve-test
 
Thanks for the write-up, I will try it out later today. Also, not sure, but for a SPICE connection that connects via a key, does it go straight through or does it need a firewall rule as well?
 
I have no idea about SPICE. I don't see why someone would want to use SPICE in conjunction with a Linux console without a GUI. The GUI control for PMG is through port 8006.
 
Sorry to revive an old thread, but it seems quite relevant. (EDIT: oh, but actually my issue is on Proxmox VE - whoops, noticed that too late.)

I have set up Proxmox and I am running the firewall enabled at the Datacenter level.
The node-level firewall is disabled.
And then I have the firewall enabled on each VM (they have their own network interface/IP).

However, in the datacenter firewall I only allow in:
icmp
ipv6-icmp
ssh (custom port)
Proxmox web 8006
and then DROP everything on any interface.

I also got the email that my port 111 is open (on the Proxmox host).

Why is it open when I only allow the ports/protocols above in the datacenter firewall in Proxmox?

Even if I specifically drop 111 above all other rules, it's still open.

Very surprised.
(maybe something I am not understanding?)

What is the proper way to block the port with the Proxmox firewall, ideally via the web UI?
I would rather not have custom iptables rules.
Or rather I need to find out what is actually going on

And I am trying to understand why everything isn't blocked.
 
Guessing I must be misunderstanding how the datacenter and node levels work.

I also realized that if I disable allowing SSH at the datacenter level,
it has no effect - I can still connect via SSH.

I set up the same rules at the node level and enabled the node firewall.
Then I disabled allowing SSH at the node level... it was still possible to connect via SSH.
Then I disabled allowing SSH at the datacenter level, and now it's finally blocked.
Then I enabled SSH only at the node level, and now it's possible to connect again.

I am guessing my rules didn't apply because the node-level firewall was disabled.
(I thought I only needed that if I wanted to set up rules for a specific node when I have more nodes.)

Now port 111 is showing as open|filtered (previously just open),
so I think that's fine now.
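If anyone else hits this, the active firewall layers can also be checked from the CLI - pve-firewall status shows whether the host firewall is running, and the two files below hold the datacenter- and host-level options/rules (a sketch; replace <nodename> with the actual node name):
Code:
pve-firewall status
cat /etc/pve/firewall/cluster.fw
cat /etc/pve/nodes/<nodename>/host.fw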
 
There is only one way to do this effectively, and that's to use Proxmox firewalling.

RPC is historically the least secure protocol ever crafted for computer systems. There is a disproportionately large volume of security problems with RPC itself and with the services that use it. NO-ONE wants RPC anymore, but for reasons I never dug into (I am not a developer), it made networking easy for lots of applications. But it SUUUCCKS for security. It always has, and the fact that modern implementations of rpcbind don't allow you to limit the IP address it binds to for TCP shows that this lack of regard for security in RPC is a trend that has not changed. People who work on RPC and the services it supports simply do not care if your system gets hacked and it's their fault. Their attitude is clearly that if you use RPC and your system gets hacked, it's YOUR fault for using RPC in the first place.

That there are necessary services on Proxmox that use RPC is alarming to say the least. But it can be mitigated using the firewall capabilities. Simply do the following:

1. Enable firewalling at the datacenter level
2. Add a rule dropping or rejecting port 111 on whatever Proxmox host IP you need to block at the Proxmox host level.

It's that easy. It took me a bit to figure it out, but hopefully you're reading this and now you know how to do it. I actually blocked EVERYTHING coming into the one Proxmox IP that was bound to the "external" NIC, which you will understand if you read on. Easy peasy.
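For those who prefer the config files over the web UI, the equivalent looks roughly like this (a sketch - the GUI writes the same format; <nodename> is your node, and the datacenter firewall must also be enabled in /etc/pve/firewall/cluster.fw):
Code:
# /etc/pve/nodes/<nodename>/host.fw
[OPTIONS]
enable: 1

[RULES]
IN DROP -p tcp -dport 111
IN DROP -p udp -dport 111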

People above ask "Why do you need to block RPC if your host is behind a firewall?", and my first reaction is "Why are you talking? I have been doing cybersecurity as my primary business for 27 years, I have forgotten more about cybersecurity than you're likely to ever learn in a lifetime of study, and if I say RPC should be blocked or disabled by default, your response should be, hmm, I guess I never thought of that." Why do people with a teaspoon of knowledge always think they have the answer to everything?

I needed to block RPC because I needed to put my firewall on my Proxmox server suddenly. My fw hardware died, and I needed an immediate fix. To make this fix, I created a new VM, installed pfSense, restored the backed up config from the old firewall, passed through a second NIC to the VM, and adjusted the interface assignments. This meant that I didn't need to block RPC at the Proxmox host level. RPC was only active on the Proxmox host IP, which was still on the inside of my "new" firewall.

However, some weeks later, I accidentally kicked the latch for my removable boot drive while doing some maintenance, and the Proxmox server wouldn't boot. It took a while to figure out why the boot drive was lost (I just had to shove it back in place), and in the meantime I had to use my dying firewall hardware - which I discovered was working for the moment, but I didn't know for how long (overheating? maybe; I don't know, and that's a long troubleshooting session to go through without Internet when I also have a day job). I finally fixed my Proxmox host, but this failure led me to decide that I needed a way to do HA for my firewall, so that if I ever have a problem on the host running the firewall, it will fail over and my Internet doesn't go down.

"Deploy two hardware firewalls in HA mode," you cry. Well, I could use pfSense HA on two small PCs. But there are two issues with that. One is that I wanted to use VM failover across Proxmox cluster members as a learning exercise. The second is that two hardware firewalls is EXPENSIVE. So I needed to use VMs for firewalls for now and perform VM failover on Proxmox, and that means having identical configs on two hosts. And since there is different hardware on those hosts, that meant using a Linux Bridge for an external NIC so the VMs would have the same NIC choices from the host system. And THAT meant Proxmox would have an IP of it's own on the external network. There's no other way around that, other than to have identical hardware in both hosts and ensure that passthrough is configured identically,

I could run two firewall VMs, one on each host, with passthrough of a NIC on each host for the firewall VM to use as an external NIC. That way, Proxmox isn't using and sharing the NIC as a bridge, and it won't be able to bind any services to it. But then that means going through the learning curve of deploying pfSense in HA mode. I'll tell you a secret - I actually did try that, and it was byzantine and confusing and I wanted fault tolerance more quickly, so back to VM automated failover in Proxmox. I may try this again at a later date when I have more time, patience, and desire to learn how to do it.

So, in my chosen HA method, RPC is sitting open on my Proxmox hosts, on the outside of the firewalls. Not good. So THAT's why I need a method to block RPC. (In point of fact, SSH was also open on the Proxmox servers' external interfaces, but guess what? SSH can be told to bind to specific NICs and addresses, so I configured it so it wasn't listening on that NIC anymore. rpcbind maintainers could perhaps learn a lesson there.)
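For reference, the SSH part is a one-line change (a sketch - 192.0.2.10 stands in for the internal management address), followed by a restart of the ssh service:
Code:
# /etc/ssh/sshd_config
ListenAddress 192.0.2.10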

The best way to solve my problem (the need for firewall failover) is to use good-quality, dedicated hardware for my firewalls and run them in HA mode. pfSense and OPNsense both support HA mode. But the best way is expensive, and not always doable. I, like many other people I would imagine, blew my extra cash on a second Proxmox server so I could a) have some newer hardware to play with, b) play with clustering, and c) play with VM HA and automatic failover. My hardware firewall was a ByteNUC with 2 NICs, and they failed, and then the two USB/RJ45 replacements failed. I don't have the money for a pair of Qotom 3229whatevers to be fault-tolerant firewalls (and I hear that their reliability is no screaming hell), so I need a failover method that works on my Proxmox servers. And I will not bother to entertain critiques of my decisions by folks who just started their IT journey practically yesterday. RPC has been a security black hole since before I started my career, and we openly mocked it 25 years ago for how terrible it was then. The real question is, why does it still suck after all these years, and why are modern system integrators and application developers still using it?
 