Fast connection between containers

mackuz

Hi.

I'm new to networking, so I'm looking for the right way to configure my LXC containers (Proxmox VE 4.4, one host, several Ubuntu 16.04 LXC containers) for better performance when transferring data between them.

Thanks for any help.
 
If both containers are on the same bridge and in the same network, they should already be fast.
I get ~75 GBit/s between two containers on the same bridge/network.
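For example, with iperf3 running as a server in one container and as a client in the other (the address below is only a placeholder):
Code:
# in the first container: start an iperf3 server
iperf3 -s

# in the second container: run the client against the first container's IP
iperf3 -c 10.0.0.2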
 
Thank you!

Here are my /etc/network/interfaces files:
On the host
(vmbr0 and vmbr1 are bridges to physical networks; I'm planning to remove them and make a bond out of eth0 and eth1,
and vmbr2 is completely virtual, for interaction between the containers):

Code:
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

auto vmbr0
iface vmbr0 inet static
    address  172.25.248.100
    netmask  255.255.255.0
    gateway  172.25.248.254
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    network 172.25.248.0

auto vmbr1
iface vmbr1 inet static
    address  172.25.248.101
    netmask  255.255.255.0
    gateway  172.25.248.254
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0
    network  172.25.248.0

auto vmbr2
iface vmbr2 inet manual
    bridge_ports none
    bridge_stp off
    bridge_fd 0

    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up   iptables -t nat -A POSTROUTING -s '10.10.100.0/24' -o vmbr1 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.10.100.0/24' -o vmbr1 -j MASQUERADE
and on one of the containers:
Code:
auto lo
iface lo inet loopback
iface lo inet6 loopback

# interfaces(5) file used by ifup(8) and ifdown(8)
# Include files from /etc/network/interfaces.d:
source-directory /etc/network/interfaces.d

auto eth0
iface eth0 inet static
    address 172.25.248.130
    netmask 255.255.255.0
    gateway 172.25.248.254
    network 172.25.248.0

auto eth1
iface eth1 inet static
    address 10.10.100.130
    netmask 255.255.255.0

And some speed tests from a container.
Inside the container:
Code:
iperf3 -c localhost
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  23.4 GBytes  20.1 Gbits/sec    0             sender
[  4]   0.00-10.00  sec  23.4 GBytes  20.1 Gbits/sec                  receiver
Using the bridge connected to a real network card (vmbr0):
Code:
iperf3 -c 172.25.248.130
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  19.1 GBytes  16.4 Gbits/sec    0             sender
[  4]   0.00-10.00  sec  19.1 GBytes  16.4 Gbits/sec                  receiver
Using the virtual network (vmbr2):
Code:
iperf3 -c 10.10.100.130
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  17.7 GBytes  15.2 Gbits/sec    0             sender
[  4]   0.00-10.00  sec  17.7 GBytes  15.2 Gbits/sec                  receiver

It's very slow. And the vmbr2 network is even slower.
What am I doing wrong?
 
I would not define 16 Gbit/s as "very slow"; this is more than most hardware can handle.

But to clarify: where is the iperf server and where does the client run? Do you have the firewall activated?
What is your CPU/memory configuration of the host?
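You can check on the host with something like this (a quick sketch; pve-firewall is the PVE firewall CLI, and the iptables call just lists whatever rules are currently loaded):
Code:
# status of the Proxmox VE firewall service
pve-firewall status

# show any iptables rules currently loaded on the host
iptables -L -n -v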
 
I would not define 16 Gbit/s as "very slow"; this is more than most hardware can handle.
It's a bit slower than what you posted in #2 :)
But to clarify: where is the iperf server and where does the client run?
Both are running on the same host and have similar configurations, using subvolume format.
Do you have the firewall activated?
As far as I know, I don't. I mean, it's disabled in the Proxmox web UI. I don't know if it's enabled somehow else on the host, on the clients, or on the router (I don't have access to the router).
What is your CPU/memory configuration of the host?
On the host:
Code:
CPUs: 8 x Intel(R) Xeon(R) CPU E5430 @ 2.66GHz (2 Sockets)
Memory: 16 GB.
On guests:
Code:
CPU limit: 6,
CPU units: 8192,
RAM: 8 GB,
SWAP: 2 GB.

Nevertheless, vmbr2 is working a bit slower than the hardware-backed networks, which I cannot understand.

And, by the way, can I ask you one more question? Can I safely remove the vmbr0 and vmbr1 bridges on the host, create a bond instead of them, and change the configs on the containers, or will I break my whole network that way?
 
Sorry, I've just checked the firewall settings, and it has no rules at all, but it's running. Should I disable it? This server is located in a protected network.
 
It's a bit slower than what you posted in #2 :)
But I have a 6700K here with a 4.2 GHz single-core turbo and DDR4-2666;
the virtual network is mostly CPU/memory limited.

Both are running on the same host and have similar configurations, using subvolume format.
I meant whether it is running on the host, in a container, or in different containers, etc.

CPUs: 8 x Intel(R) Xeon(R) CPU E5430 @ 2.66GHz (2 Sockets)
Since one network connection over a virtual network is limited to one core, I think this is your limitation.
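A rough way to test this would be to run several independent iperf3 instances on different ports at the same time, so the connections can be scheduled on different cores (a sketch; the 10.10.100.130 address is taken from your tests above):
Code:
# on the receiving container: one iperf3 server per port
iperf3 -s -p 5201 &
iperf3 -s -p 5202 &

# on the sending container: two clients in parallel
iperf3 -c 10.10.100.130 -p 5201 -t 10 &
iperf3 -c 10.10.100.130 -p 5202 -t 10 &
wait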

Sorry, I've just checked the firewall settings, and it has no rules at all, but it's running. Should I disable it? This server is located in a protected network.
You can try it and see, but it should not make much of a difference.

Nevertheless, vmbr2 is working a bit slower than the hardware-backed networks, which I cannot understand.
What are the exact numbers? What is the hardware capable of?
 
I meant whether it is running on the host, in a container, or in different containers, etc.
On different containers.
Since one network connection over a virtual network is limited to one core, I think this is your limitation.
Is there any way to overcome this limitation?
What are the exact numbers? What is the hardware capable of?
16.4 Gbits/sec through vmbr0, which is connected to eth0 (Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet (rev 12)). And eth1 has the same configuration.
And 15.2 Gbits/sec through vmbr2, which is virtual.
 
But your network card is only capable of gigabit speed AFAICS? Which speeds did you expect between containers?
 
But your network card is only capable of gigabit speed AFAICS? Which speeds did you expect between containers?
Does vmbr2 have hardware limitations? It is not connected to any NIC. And its speed is 15.2, while vmbr0's speed is 16.4. And vmbr0 is limited by the NIC's speed.
I was hoping that vmbr2 would be much faster, as it's completely virtual.
 
Does vmbr2 have hardware limitations? It is not connected to any NIC. And its speed is 15.2, while vmbr0's speed is 16.4. And vmbr0 is limited by the NIC's speed.
I was hoping that vmbr2 would be much faster, as it's completely virtual.
vmbr0 would only be hardware limited if you actually sent and received via the NIC (which is not the case here, as all participants are directly on the bridge).
You can imagine a Linux bridge like a virtual switch: if all participants of a packet are directly connected to the bridge, the traffic never leaves the host and does not touch the real NIC
(even if the NIC is also "plugged" into the bridge).
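You can see which ports actually sit on each bridge on the host, e.g. (assuming bridge-utils is installed, as on a default PVE host):
Code:
# list all bridges and their member ports (the containers' veth devices plus eth0/eth1)
brctl show

# iproute2 alternative
bridge link show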

As for the difference between the bridges, I have no idea, but I would guess it's variance in the testing? (You can repeat the test and take the average for more consistent results.)
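For example, a small loop like this repeats the same test a few times so you can average the numbers by hand (the address is taken from the tests above):
Code:
# run the same iperf3 test five times and keep only the summary lines
for i in 1 2 3 4 5; do
    iperf3 -c 10.10.100.130 -t 10 | grep receiver
done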
 
I disabled the firewall on the host and on every container and rebooted. vmbr2 is still 1 GBit slower than vmbr0.

I'm thinking about moving from vmbr0/vmbr1 to a single bond and using that.

Can I create something like e1000 or vmxnet for LXC containers?
 
Can I create something like e1000 or vmxnet for LXC containers?
No, this does not work for containers, but even if you could, they are both only 1 GBit, which is 16 times slower than what you are getting...
I am sorry to ask this, but are you sure you are not mistaking GBit for MBit? 16 GBit is 16 times more than a normal Ethernet NIC and not slow at all.
 
No, this does not work for containers, but even if you could, they are both only 1 GBit, which is 16 times slower than what you are getting...
I am sorry to ask this, but are you sure you are not mistaking GBit for MBit? 16 GBit is 16 times more than a normal Ethernet NIC and not slow at all.
And is this speed adequate for a hardware RAID 10 with SAS drives?
If so, I was mistaken, sorry. And thank you for being so patient.

And a last question: can I remove the two bridges I created on my cluster with working containers, assign IP addresses (two different ones from those currently on the vmbr0 and vmbr1 bridges I want to remove) to the physical eth0 and eth1, and create a bond (which one is faster, by the way, Linux or OVS?) with the old IP that vmbr0 had, or is that too risky for the existing configuration?
Something like this:
Was:
Code:
eth0 - no IP
eth1 - no IP
vmbr0 -> eth0, 172.25.248.100
vmbr1 -> eth1, 172.25.248.101
Will be:
Code:
eth0, 172.25.248.98
eth1, 172.25.248.99
bond0 -> eth0 and eth1, 172.25.248.100
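Roughly, I imagine the new /etc/network/interfaces on the host like this (just a sketch; I picked active-backup as the bond mode, and here the old vmbr0 IP sits on a bridge on top of the bond rather than on bond0 itself, with no addresses on the physical NICs, since the containers still need a bridge to attach to):
Code:
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

# Linux bond over both physical NICs
auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-mode active-backup
    bond-miimon 100

# the bridge keeps the old host IP and now sits on top of the bond
auto vmbr0
iface vmbr0 inet static
    address  172.25.248.100
    netmask  255.255.255.0
    gateway  172.25.248.254
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0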
Will the system be alive after a reboot?
I'll correct all the network configs in the containers, of course.
 