There is only one way to do this effectively, and that's to use the Proxmox firewall.
RPC is historically the least secure protocol ever crafted for computer systems. There is a disproportionately large volume of security problems with RPC itself and with the services that use it. NO-ONE wants RPC anymore, but for reasons I never dug into because I am not a developer, it made networking easy for lots of applications. But it SUUUCCKS for security. It always has, and if the fact that modern implementations of rpcbind won't even let you limit the IP address they bind to for TCP is any indication, this lack of regard for security in RPC is a trend that has not changed. People who work on RPC and its supported services simply do not care if your system gets hacked and it's their fault. Their attitude is clearly that if you use RPC and your system gets hacked, it's YOUR fault for using RPC in the first place.
That there are necessary services on Proxmox that use RPC is alarming to say the least. But it can be mitigated using the firewall capabilities. Simply do the following:
1. Enable firewalling at the datacenter level.
2. At the Proxmox host level, add a rule dropping or rejecting port 111 (TCP and UDP) on whichever host IP you need to protect.
It's that easy. It took me a bit to figure out, but hopefully you're reading this and now you know how to do it too. I actually blocked EVERYTHING coming into the one Proxmox IP that was bound to the "external" NIC, which you will understand if you read on; both variants are in the sketch below. Easy peasy.
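For reference, here's roughly what those two steps look like on disk. This is a minimal sketch, not gospel: the node name (pve1) and the external IP (203.0.113.10) are placeholders for your own values, and you'd normally click this together in the GUI under Datacenter > Firewall and then the node's Firewall panel rather than edit the files by hand. Also note that, per the Proxmox docs, enabling the datacenter firewall drops inbound traffic by default, with exceptions for the GUI (8006) and SSH (22) from your local network, so don't lock yourself out.

    # /etc/pve/firewall/cluster.fw -- step 1: enable firewalling datacenter-wide
    [OPTIONS]
    enable: 1

    # /etc/pve/nodes/pve1/host.fw -- step 2: per-host rules
    [RULES]
    # Drop rpcbind/portmapper on the external host IP (TCP and UDP)
    IN DROP -dest 203.0.113.10 -p tcp -dport 111
    IN DROP -dest 203.0.113.10 -p udp -dport 111
    # Or what I actually did: drop everything aimed at that IP
    IN DROP -dest 203.0.113.10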
People above ask, "Why do you need to block RPC if your host is behind a firewall?", and my first reaction is, "Why are you talking? I have been doing cybersecurity as my primary business for 27 years, I have forgotten more about cybersecurity than you're likely to ever learn in a lifetime of study, and if I say RPC should be blocked or disabled by default, your response should be, 'Hmm. I guess I never thought of that.'" Why do people with a teaspoon of knowledge always think they have the answer to everything?
I needed to block RPC because I suddenly needed to put my firewall on my Proxmox server. My fw hardware died, and I needed an immediate fix. To make this fix, I created a new VM, installed pfSense, restored the backed-up config from the old firewall, passed through a second NIC to the VM (sketch below), and adjusted the interface assignments. At that point I didn't need to block RPC at the Proxmox host level: RPC was only active on the Proxmox host IP, which was still on the inside of my "new" firewall.
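The passthrough part, for anyone who hasn't done it, is a single command once IOMMU (VT-d/AMD-Vi) is enabled in the BIOS and kernel. The VM ID and PCI address below are hypothetical; look up your own first:

    # find the PCI address of the NIC you want to hand over
    lspci | grep -i ethernet
    # give it to VM 100 as a passthrough device
    qm set 100 -hostpci0 0000:03:00.0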
However, some weeks later, I accidentally kicked the latch on my removable boot drive while doing some maintenance, and the Proxmox server wouldn't boot. It took a while to figure out why the boot drive was lost (I just had to shove it back in place), and in the meantime I had to fall back on my dying FW hardware, which I discovered was working for the moment, but I didn't know for how long (overheating? Maybe. I don't know, and that's a long troubleshooting session to go without Internet when I also have a day job). I finally fixed my Proxmox host, but this failure convinced me that I needed a way to do HA for my firewall, so that if I ever have a host problem on the firewall host again, it fails over and my Internet doesn't go down.
"Deploy two hardware firewalls in HA mode," you cry. Well, I could use pfSense HA on two small PCs. But there are two issues with that. One is that I wanted to use VM failover across Proxmox cluster members as a learning exercise. The second is that two hardware firewalls is EXPENSIVE. So I needed to use VMs for firewalls for now and perform VM failover on Proxmox, and that means having identical configs on two hosts. And since there is different hardware on those hosts, that meant using a Linux Bridge for an external NIC so the VMs would have the same NIC choices from the host system. And THAT meant Proxmox would have an IP of it's own on the external network. There's no other way around that, other than to have identical hardware in both hosts and ensure that passthrough is configured identically,
I could instead run two firewall VMs, one on each host, with a NIC on each host passed through for the firewall VM to use as its external NIC. That way, Proxmox isn't using and sharing the NIC as a bridge, and it can't bind any services to it. But that means going through the learning curve of deploying pfSense in HA mode. I'll tell you a secret - I actually did try that, and it was byzantine and confusing, and I wanted fault tolerance more quickly, so it was back to automated VM failover in Proxmox. I may try it again at a later date when I have more time, patience, and desire to learn how to do it.
So, in my chosen HA method, RPC is sitting open on my Proxmox hosts, on the outside of the firewalls. Not good. So THAT's why I need a method to block RPC. (In point of fact, SSH was also open on the Proxmox servers' external interfaces, but guess what? SSH can be told to bind to specific addresses, so I configured it to stop listening on that NIC (see below). rpcbind's maintainers could perhaps learn a lesson there.)
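For anyone who wants to do the same: it's one directive in sshd_config. The internal address here is a placeholder for your own:

    # /etc/ssh/sshd_config -- listen only on the internal IP (hypothetical address)
    ListenAddress 192.168.1.10

    # then apply it
    systemctl restart sshd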
The best way to solve my problem (the need for firewall failover) is to use good-quality, dedicated hardware for my firewalls and run them in HA mode. pfSense and OPNsense both support HA. But the best way is expensive, and not always doable. I, like many other people I would imagine, blew my extra cash on a second Proxmox server so I could a) have some newer hardware to play with, b) play with clustering, and c) play with VM HA and automatic failover. My hardware firewall was a ByteNUC with 2 NICs; the NICs failed, and then the two USB/RJ45 replacements failed too. I don't have the money for a pair of Qotom 3229whatevers to be fault-tolerant firewalls (and I hear their reliability is no screaming hell), so I need a failover method that works on my Proxmox servers. And I will not bother to entertain critiques of my decisions by folks who started their IT journey practically yesterday. RPC has been a security black hole since before I started my career, and we openly mocked it 25 years ago for how terrible it was even then. The real question is, why does it still suck after all these years, and why are modern system integrators and application developers still using it?