I'm running Proxmox 8.2.4 on a dual Xeon 4309Y system (128 GB RAM / 1 TB SSD) and seeing extremely poor web performance on my first container whenever I access it externally. Speed is great when I access the container directly from the Proxmox host, so I suspect something is wrong in my network bridging.
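For anyone who wants to reproduce what I'm seeing, a single-request timing with curl, run once from an external machine and once from the host, is the quickest check (a sketch using the placeholder IP):
Code:
# Time a single request; compare the result from an external box vs. the Proxmox host:
curl -o /dev/null -s -w 'connect=%{time_connect}s total=%{time_total}s\n' http://xx.xx.xx.65/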
I configured a Debian 12 LXC container with 12 cores / 32 GB RAM / 64 GB SSD that performs as expected: just 50 points off native per-core speed on Geekbench 6. It has two virtual network cards: one connected to a local bridge, vmbr0 (container IP 10.0.0.2), and one connected to vmbr1 (bridge IP xx.xx.xx.64). The container has a static IP of xx.xx.xx.65 on that interface, with the bridge's .64 address as its gateway. The datacenter statically routes a /29 range to the Proxmox host's primary IP, and I sort those addresses out within my own network configuration.
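For reference, the NICs are attached to the bridges roughly like this (a sketch; the VMID 101 is a placeholder, and the IPs are configured inside the container rather than through Proxmox):
Code:
# Attach two veth NICs to the container (VMID 101 is a placeholder):
pct set 101 -net0 name=eth0,bridge=vmbr0
pct set 101 -net1 name=eth1,bridge=vmbr1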
Ookla's Speedtest shows essentially no overhead between the host and the container, and Proxmox doesn't show the system under any strain at all. Yet compared to a similarly configured bare-metal NGINX server in the same datacenter, I'm seeing a 7x-10x slowdown connecting to the container. While `ab -n 10000 -c 100 -k -H "Accept-Encoding: gzip, deflate" -H` completes in about 10 seconds at 999 requests per second against the bare-metal server, it takes 11 minutes at only 14 requests per second against the container. (The per-request time is actually lower on the container; it just can't sustain a load.)
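For raw throughput between host and container (as opposed to request rate), an iperf3 run tells the same story as Speedtest (a sketch, assuming iperf3 is installed on both ends):
Code:
# Inside the container:
iperf3 -s
# From the Proxmox host, 10 parallel streams for 30 seconds:
iperf3 -c 10.0.0.2 -P 10 -t 30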
Here's my host interfaces configuration:
Code:
auto lo
iface lo inet loopback
auto eno8303
iface eno8303 inet manual
auto eno8403
iface eno8403 inet manual
iface ens3f0np0 inet manual
iface ens3f1np1 inet manual
auto bond0
iface bond0 inet static
address xx.xx.xx.94/30
gateway xx.xx.xx.93
bond-slaves eno8303 eno8403
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer2+3
dns-nameservers 8.8.8.8 1.1.1.1
dns-search hostingdomaingoeshere.com
# dns-* options are implemented by the resolvconf package, if installed
post-up echo 1 > /proc/sys/net/ipv4/ip_forward
auto vmbr0
iface vmbr0 inet static
address 10.0.0.1/24
bridge-ports none
bridge-stp off
bridge-fd 0
post-up iptables -t nat -A POSTROUTING -s '10.0.0.0/24' -o bond0 -j MASQUERADE
post-down iptables -t nat -D POSTROUTING -s '10.0.0.0/24' -o bond0 -j MASQUERADE
post-down iptables -t nat -F
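# keep traffic arriving via Proxmox firewall bridges (fwbr*) in its own conntrack zone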
post-up iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
auto vmbr1
iface vmbr1 inet static
address xx.xx.xx.64/29
bridge-ports none
bridge-stp off
bridge-fd 0
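# enable proxy ARP on vmbr1 so the host answers ARP for addresses it routes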
up echo 1 > /proc/sys/net/ipv4/conf/vmbr1/proxy_arp
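In case it's relevant, one thing I can check while ab runs is conntrack pressure on the host, since the NAT rules above mean every connection is tracked (a sketch):
Code:
# Current vs. maximum tracked connections while the ab test runs:
sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max
# Any "nf_conntrack: table full, dropping packet" messages would be a red flag:
dmesg | grep -i conntrack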
Here's the container's network configuration:
Code:
auto lo
iface lo inet loopback
auto eth1
iface eth1 inet static
address xx.xx.xx.65/29
gateway xx.xx.xx.64
up ip addr add xx.xx.xx.66/32 dev eth1
up ip addr add xx.xx.xx.67/32 dev eth1
auto eth0
iface eth0 inet static
address 10.0.0.2/24
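For completeness, this is how I confirm the addresses and routes inside the container after boot (a sketch):
Code:
# All three public IPs should sit on eth1, with the default route via xx.xx.xx.64:
ip -br addr show
ip route show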