Slow Network Speeds in Containers in Guests

alistairm

New Member
Oct 4, 2024
Hi All,

We recently deployed Proxmox and have been pretty impressed so far. However, we've run into an issue where network speeds inside Docker containers on guests are very slow (~50-100 kB/s), whereas the guest itself gets about 40-50 MB/s. The host has a dual 100Gb link bonded in LACP back to our 100Gb TOR switch, and the guests run on a VLAN on that bond. As a test we moved the guest VLAN onto a single 1Gb link instead, and the containers then ran at full speed, so we're not sure if it's something to do with the bond that's causing it.

We're running PVE 8.2.7 on kernel 6.5.13-5-pve.
The guest OS is Ubuntu 22.04 with Docker as the container engine.
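Since the bond is suspected, one quick diagnostic worth running on the Proxmox host is a check that LACP actually negotiated on both links. This is just a sketch; the bond name bond0 is an assumption, substitute your own:

```shell
# Show bonding state on the Proxmox host ("bond0" is an assumed name).
# Both slaves should report the same Aggregator ID, and the LACP
# "Churn State" fields should not show "churned".
cat /proc/net/bonding/bond0
```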
 
Hey,

How is the packet loss? A bit of a wild guess, but is it possible the configured MTUs along your network path don't match up? Other than that I can't really think of anything that would explain this, since the speed is fine for the VM itself. Have you tried different Docker images? Do different containers get different speeds?
 
Hi @Hannes Laimer, I did consider the MTU; I tried it at 9000 (which is what our switch is set to) and also at 1500 and 1400. Judging by general pings there doesn't seem to be any packet loss. I haven't tried different containers though; I'll give that a go and feed back.
 
Could you run
Code:
iperf3 -s
in the VM and
Code:
iperf3 -c <VM_IP>
in the container?

You might have to start the container with
Code:
docker run --network=host ...
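If iperf3 isn't installed in the container image, one way to compare both network modes without modifying the image is to use a prebuilt iperf3 image (networkstatic/iperf3 is one common choice, not the only one; any image containing iperf3 works):

```shell
# With "iperf3 -s" already running in the VM, compare throughput from
# the container's default bridge network vs. host networking:
docker run --rm networkstatic/iperf3 -c <VM_IP>
docker run --rm --network=host networkstatic/iperf3 -c <VM_IP>
```

If the host-network run is fast and the bridge-network run is slow, that points at the packet path through docker0/NAT rather than at the VM's uplink.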
 
Hi all,

I'm facing a similar issue. Network speeds are normal from the VM, but seem limited to ~250 kB/s inside a container.

I'm running PVE 8.2.7 on kernel 6.8.12-2-pve.
My guest OS is Debian 12 with the latest Docker engine.

I've tried iperf3 between the guest and a container as suggested, and the network speed is good.
When I use host networking (--network=host) the problem disappears.

I've also considered MTU-related issues, but that doesn't seem to be the cause:

Bash:
ping 8.8.8.8 -c 4 -M do -s 1472

Code:
PING 8.8.8.8 (8.8.8.8) 1472(1500) bytes of data.
1480 bytes from 8.8.8.8: icmp_seq=1 ttl=113 time=4.64 ms
1480 bytes from 8.8.8.8: icmp_seq=2 ttl=113 time=4.65 ms
1480 bytes from 8.8.8.8: icmp_seq=3 ttl=113 time=4.65 ms
1480 bytes from 8.8.8.8: icmp_seq=4 ttl=113 time=4.61 ms

--- 8.8.8.8 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 4.607/4.634/4.648/0.016 ms
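For anyone wondering where the -s 1472 figure comes from: it's the largest ICMP payload that fits in a 1500-byte MTU once the IPv4 and ICMP headers are added, so a successful ping with -M do (don't fragment) confirms the path carries full 1500-byte frames without fragmentation:

```shell
# 1472-byte payload + 8-byte ICMP header + 20-byte IPv4 header = 1500
echo $((1472 + 8 + 20))   # prints 1500
```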


Here is my bridge configuration on the hypervisor:
Code:
auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp2s0f1np1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

I've also tried without VLANs, with the same result :(

If any network genius has an idea, that would be great.
 
On the Proxmox host, do a:

# ethtool -k <interface-name>
(that's lowercase 'k')
Check the value of 'generic-receive-offload'; if it is 'on', then try:

# ethtool -K <interface-name> gro off
(that's uppercase 'K')

See if that helps....
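One caveat (a general note, not something from this thread): settings changed with ethtool -K don't survive a reboot. If disabling GRO helps, a common way to persist it with ifupdown is a post-up hook in /etc/network/interfaces; the interface name below is just an example, use your own NIC:

```shell
# /etc/network/interfaces fragment; replace enp2s0f1np1 with your NIC
auto enp2s0f1np1
iface enp2s0f1np1 inet manual
    post-up /sbin/ethtool -K enp2s0f1np1 gro off
```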
 
Hi,

It works, thanks a lot for your help.

In case it's useful for anyone: my network card uses the Broadcom BCM57502 NetXtreme-E chip.
 
Thanks!

# ethtool -K <interface-name> gro off
(that's uppercase 'K')

Worked for me as well, thanks for the tip! Now we have really consistent times:

Code:
time_namelookup: 0.000539s
time_connect: 0.000876s
time_appconnect: 0.000000s
time_pretransfer: 0.000920s
time_redirect: 0.000000s
time_starttransfer: 0.035822s
----------
time_total: 0.036924s
 
