# /etc/network/interfaces on the Proxmox host
auto lo
iface lo inet loopback

iface eth0 inet manual
iface eth1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 172.25.248.100
        netmask 255.255.255.0
        gateway 172.25.248.254
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        network 172.25.248.0

auto vmbr1
iface vmbr1 inet static
        address 172.25.248.101
        netmask 255.255.255.0
        gateway 172.25.248.254
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0
        network 172.25.248.0

auto vmbr2
iface vmbr2 inet manual
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '10.10.100.0/24' -o vmbr1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.100.0/24' -o vmbr1 -j MASQUERADE
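Not part of the original post, but a quick way to double-check that the bridge and NAT definitions above actually came up as intended (standard bridge-utils/sysctl/iptables tooling assumed):

brctl show                      # vmbr0, vmbr1 and vmbr2 should be listed; vmbr0/vmbr1 with their eth port
sysctl net.ipv4.ip_forward      # should print 1 once the post-up hook has run
iptables -t nat -S POSTROUTING  # should list the MASQUERADE rule for 10.10.100.0/24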
# /etc/network/interfaces inside the container at .130 (the iperf3 target)
# interfaces(5) file used by ifup(8) and ifdown(8)
# Include files from /etc/network/interfaces.d:
source-directory /etc/network/interfaces.d

auto lo
iface lo inet loopback
iface lo inet6 loopback

auto eth0
iface eth0 inet static
        address 172.25.248.130
        netmask 255.255.255.0
        gateway 172.25.248.254
        network 172.25.248.0

auto eth1
iface eth1 inet static
        address 10.10.100.130
        netmask 255.255.255.0
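For completeness (this is not shown in the thread), the same container networking is normally defined on the Proxmox side as well; a minimal sketch, assuming a hypothetical container ID of 130 and that eth0 is attached to vmbr0 and eth1 to vmbr2:

pct set 130 -net0 name=eth0,bridge=vmbr0,ip=172.25.248.130/24,gw=172.25.248.254
pct set 130 -net1 name=eth1,bridge=vmbr2,ip=10.10.100.130/24
pct config 130    # verify the resulting net0/net1 entries

For Debian-based container templates, Proxmox will then typically manage the guest's /etc/network/interfaces from these values itself.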
iperf3 -c localhost
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  23.4 GBytes  20.1 Gbits/sec    0   sender
[  4]   0.00-10.00  sec  23.4 GBytes  20.1 Gbits/sec        receiver

iperf3 -c 172.25.248.130
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  19.1 GBytes  16.4 Gbits/sec    0   sender
[  4]   0.00-10.00  sec  19.1 GBytes  16.4 Gbits/sec        receiver

iperf3 -c 10.10.100.130
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  17.7 GBytes  15.2 Gbits/sec    0   sender
[  4]   0.00-10.00  sec  17.7 GBytes  15.2 Gbits/sec        receiver
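For context, iperf3 always needs a server side; presumably something like the following was running in the target container (172.25.248.130 / 10.10.100.130) before the client commands above were issued:

iperf3 -s -D                      # server in the target container, daemonized
iperf3 -c 172.25.248.130 -t 10    # client in the other container, 10-second run as above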
Reply: I am sorry to ask this, but are you sure you don't mistake Gbit for Mbit? 16 Gbit/s is 16 times more than a normal Ethernet NIC and not slow at all.

OP: It's a bit slower than you have posted in #2.

Reply: I would not define 16 Gbit/s as "very slow"; this is more than most hardware can handle. But I have here a 6700K with a 4.2 GHz single-core turbo and DDR4-2666. To clarify, where is the iperf server and where does the client run? Do you have the firewall activated? And what is the CPU/memory configuration of the host?

OP: Both are running on the same host and have similar configurations, using the subvolume format. As for the firewall: as far as I know, I don't. I mean, it's disabled in the Proxmox web UI; I don't know whether it's enabled somewhere else, on the host, on the clients, or on the router (I don't have access to the router). The host has:
CPUs: 8 x Intel(R) Xeon(R) CPU E5430 @ 2.66GHz (2 sockets)
Memory: 16 GB
and the containers: CPU limit: 6, CPU units: 8192, RAM: 8 GB, swap: 2 GB.

Reply: I meant whether it is running on the host, in the container, or in different containers, etc. Since one network connection over a virtual network is limited to one core, I think this is your limitation.

OP: On different containers. Is there any way to overcome this limitation? Sorry, I've just checked the firewall settings: there is no configuration at all, but it is running. Should I disable it? This server is located in a protected network. Nevertheless, vmbr2 is working a bit slower than the hardware-backed networks, which I cannot understand.

Reply: You can try it and see if it makes any difference, but it should not make much of a difference. What are the exact numbers? What is the hardware capable of?

OP: 16.4 Gbit/s through vmbr0, which is connected to eth0 (Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet (rev 12)). And eth1 has the same configuration.

Reply: But your network card is only capable of gigabit speed, as far as I can see. Which speeds did you expect between containers?

OP: Does vmbr2 have hardware limitations? It is not connected to any NIC, and its speed is 15.2 Gbit/s, while vmbr0's is 16.4 Gbit/s and vmbr0 is limited by the NIC's speed. I was hoping that vmbr2 would be much faster, as it is completely virtual. Can I create something like e1000 or vmxnet for LXC containers?

Reply: vmbr0 would only be hardware-limited if you really sent and received via the NIC, which is not the case here, as all participants are directly on the bridge. And no, that does not work for containers; but even if you could, they are both only 1 Gbit, which is 16 times slower than what you are getting...

OP: And is this speed adequate for hardware RAID 10 with SAS drives?
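Regarding the two open points in the exchange above (whether the firewall matters, and whether the per-connection single-core limit can be worked around), here is a hedged sketch of how one might test both; pve-firewall and iperf3 are standard tools on such a setup, while mpstat comes from the sysstat package and may need installing:

pve-firewall status               # is the PVE firewall actually active on the host?

# iperf3 is single-threaded, so -P alone will not use more cores;
# run one client/server pair per port instead to spread the load:
iperf3 -s -p 5201 -D
iperf3 -s -p 5202 -D
iperf3 -c 10.10.100.130 -p 5201 -t 30 &
iperf3 -c 10.10.100.130 -p 5202 -t 30 &

mpstat -P ALL 1                   # watch per-core load during the test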
eth0 - no IP
eth1 - no IP
vmbr0 -> eth0, 172.25.248.100
vmbr1 -> eth1, 172.25.248.101

eth0, 172.25.248.98
eth1, 172.25.248.99
bond0 -> eth0 and eth1, 172.25.248.100
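If the bonded layout in the second list is what is being considered, a minimal ifupdown sketch would look roughly like this (an assumption on my part: the ifenslave package is installed, 802.3ad is only an example mode and needs switch support, and the enslaved NICs carry no addresses of their own):

auto bond0
iface bond0 inet static
        address 172.25.248.100
        netmask 255.255.255.0
        gateway 172.25.248.254
        bond-slaves eth0 eth1
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer3+4

As noted in the exchange above, though, bonding the two 1 Gbit ports would not change the container-to-container numbers, since that traffic never leaves the bridge.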