HAProxy Firewall Bandwidth Limitation in PVE High Availability Cluster for Remote Desktop Protocol

marvolo_gaunt

Sep 13, 2025
Hello Proxmox Community,

I hope you’re all doing well.

We’ve set up a High Availability (HA) Proxmox cluster with 10 nodes (and growing), and we’re planning to host 1000+ Windows 11 VMs within this cluster. Each VM is dedicated to a single user, who connects via RDP. On average, each RDP session consumes about 100 Mbps of bandwidth.
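For scale, a quick back-of-envelope check of the aggregate bandwidth those figures imply (worst case, with every session peaking at once):

```shell
# 1000 sessions x 100 Mbps each, expressed in Gbps (worst-case aggregate).
echo "$(( 1000 * 100 / 1000 )) Gbps"
# prints: 100 Gbps
```

In practice not all sessions peak simultaneously, but even a fraction of that dwarfs a single 1 Gbps uplink.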

To manage external access, we’ve placed a pfSense HAProxy setup in front of the cluster. It consists of:
- A CARP master VM with a Hetzner failover IP
- A CARP backup VM that takes over if the master becomes unreachable

At any given time, only one pfSense VM is active. Each Proxmox node also has its own public IP.

Users connect to their VMs using a dedicated FQDN and RDP port, for example:

vm{proxmox_vm_id}.my-domain.com:<rdp_port>

e.g. vm123.my-domain.com:456 for VM ID 123 with RDP port 456. This mapping remains consistent even when a VM migrates between nodes.

Please note that it is necessary for us to keep the VM ID in the FQDN, because another service we use connects to hardcoded WinRM ports (5985/5986) on the same hostname.
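As a purely illustrative sketch of what that per-VM mapping looks like in raw HAProxy terms (pfSense generates its config from the GUI, and all names, addresses, and ports below are placeholders):

```
# Illustrative only - every name, IP, and port here is a placeholder.
# One TCP frontend per published RDP port, forwarding to the VM's
# internal address (which stays stable across live migration).
frontend rdp_vm123
    bind :456
    mode tcp
    default_backend bk_vm123

backend bk_vm123
    mode tcp
    server vm123 10.10.0.123:3389 check
```

The key point is that every one of these flows traverses the single active pfSense VM in both directions.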


The Challenge

When a large number of users connect simultaneously, the pfSense VM becomes a bottleneck. It is currently limited to 1 Gbps of network throughput, which roughly ten concurrent 100 Mbps RDP sessions can already saturate.


The Question

How can we design the networking layer so that pfSense doesn’t become a bottleneck when supporting 1000+ concurrent RDP connections?

We’ve explored the idea of SNAT-based routing — inbound traffic entering via pfSense, with outbound traffic going directly through the Proxmox node’s public IP. However, this introduces complications when VMs are migrated between nodes (since the public IP may change), and we are not sure how to properly implement this.
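For reference, the node-side half of that idea would look roughly like the following (a sketch only; the bridge, uplink interface, VM subnet, and node public IP are all placeholder values, and this does not solve the migration problem described above):

```
# Sketch: SNAT outbound VM traffic to this node's own public IP.
# Placeholders: 10.10.0.0/16 = VM subnet, eno1 = uplink,
# 203.0.113.10 = this node's public address.
nft add table ip pve_snat
nft add chain ip pve_snat postrouting '{ type nat hook postrouting priority srcnat ; }'
nft add rule ip pve_snat postrouting ip saddr 10.10.0.0/16 oif "eno1" snat to 203.0.113.10
```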

We’d really appreciate any insights or design recommendations from those who have dealt with similar high-scale setups.

Thank you in advance for your guidance!

Best regards
 
You need multiqueue support. Last time I checked, pfSense didn't support it (maybe it does by now?). In any case, you can try OPNsense, which does support it and can easily reach 5 Gbps (probably more, depending on CPU and some tuning). That still won't be enough for 1000 RDP connections at 100 Mbps each (but you'll never reach that with VM-based firewalls anyway).
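On the Proxmox side, multiqueue on a virtio NIC is set per VM; a sketch (the VM ID and bridge name are examples, and the queue count is typically matched to the firewall VM's vCPU count):

```
# Example: give the firewall VM (ID 9001 here, illustrative) 8 virtio queues.
qm set 9001 --net0 virtio,bridge=vmbr0,queues=8
```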
 
Hi Daniel,

thank you for getting back to me.

It is necessary for us to stay on pfSense, as that is the standard in our organization.

I'm also open to running multiple dedicated pfSense servers. The limitation we're running into is that Hetzner only seems to offer either 1 Gbps or 10 Gbps uplinks for dedicated servers. Would it be possible to use the pfSense IP for incoming RDP connections and/or IP assignments, while using the Proxmox nodes' public IPs for outgoing traffic? Does that work with RDP? I'm worried that RDP clients will reject return traffic arriving from a different IP than the one they connected to.

Kind regards