rpcbind

Thomas Jagoditsch

In one of our (lazy, infrequent) security scans we stumbled upon a running rpcbind. It seems it was installed around 8.0.4.

Trying to remove it tells us that pve depends on it:
Code:
The following packages will be REMOVED:
  libpve-guest-common-perl* libpve-storage-perl* nfs-common* proxmox-ve* pve-container* pve-ha-manager* pve-manager* qemu-server* rpcbind*

What is the use case for RPC in this context?
Can we block it via iptables and/or hosts.allow?
Do we need access to RPC from other nodes in the cluster?

tia,tja...
 
I don't know about all the other dependencies, but NFS requires it, and NFS storage support is one of Proxmox's features.
If no one knows the answer for sure and you're not using NFS, it should be safe to block.
You could add an iptables ALLOW rule, do a rolling reboot of the nodes (the portmapper is often only hit at startup), run through your usual operations, and then check the rule's packet counter to see whether it was ever hit.

Either way, you should be able to limit it with iptables so that only the Proxmox nodes (and any NFS servers or clients) have access.
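A rough sketch of such rules, assuming the cluster network is 10.0.0.0/24 (an example subnet, substitute your own) and that you manage iptables directly rather than through the Proxmox firewall:
Code:
# Allow portmapper traffic (port 111) only from the cluster subnet
iptables -A INPUT -p udp --dport 111 -s 10.0.0.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 111 -s 10.0.0.0/24 -j ACCEPT
iptables -A INPUT -p udp --dport 111 -j DROP
iptables -A INPUT -p tcp --dport 111 -j DROP

# Later, inspect the per-rule packet counters to see what actually got hit
iptables -L INPUT -v -n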
 
@Thomas Jagoditsch How did things go with this?

Asking because I'm working out the process for hardening an internet facing 2 node cluster, and rpcbind looks to be bound by default to udp:111 on both of our nodes as well.

We've stopped and disabled the service for now (i.e. "systemctl disable rpcbind"), and nothing appears to be immediately upset by that. But we'll see how it goes over the coming days as we work out the hardening process.
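One caveat (based on Debian's packaging, which Proxmox VE builds on): rpcbind is socket-activated there, so disabling only the service may not stick, since rpcbind.socket can start it again on demand. A sketch covering both units:
Code:
# Stop and disable the service and its activating socket unit
systemctl stop rpcbind.service rpcbind.socket
systemctl disable rpcbind.service rpcbind.socket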
 
Hi justinclift.

We basically restricted RPC to localhost. We did this by adding

/etc/systemd/system/rpcbind.socket.d/override.conf:
Code:
[Socket]
ListenStream=
ListenDatagram=
ListenStream=127.0.0.1:111
ListenDatagram=127.0.0.1:111
ListenStream=[::1]:111
ListenDatagram=[::1]:111
I can't remember why we did it this way, but my guess is that other methods (changing the RPC config files, editing the service file, ...) either failed us or we wanted to be sure that subsequent updates wouldn't mess with our changes.
This has been in production use on our cluster since the beginning of March, and we've had no trouble so far.
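In case it helps anyone copying this: a drop-in like that only takes effect after a daemon-reload and a restart of the socket unit. A rough sketch, assuming the standard Debian unit names:
Code:
# Create the drop-in directory, then place the override.conf shown above in it
mkdir -p /etc/systemd/system/rpcbind.socket.d

# Pick up the override and restart so the new bind addresses apply
systemctl daemon-reload
systemctl restart rpcbind.socket rpcbind.service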

wbr,tja...
 
Thanks heaps, that's good info. In poking around the Proxmox forums further it looks like rpcbind is used solely by NFS storage, so for now I've masked the service to ensure it can't even start. If it turns out to be needed, then binding it to localhost (or the internal cluster network interface) will probably be the right direction to go. :)
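For reference, masking both units looks roughly like this (assuming you want socket activation blocked as well):
Code:
# Mask and stop both units; masked units cannot be started, even via socket activation
systemctl mask --now rpcbind.service rpcbind.socket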
 
If you do not use it (no NFS), just disable rpcbind:
Code:
systemctl disable --now rpcbind.service rpcbind.socket
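Afterwards you can check that nothing is listening on port 111 anymore; this should print nothing:
Code:
ss -tulpn | grep ':111'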
 
Thanks @ubu. For now I'm masking it on these test boxes. Doesn't sound like that'll need to change, as we're not using NFS.
 
