Hey everyone,
I'm currently building a "proof-of-concept" for work using Proxmox.
I have a 4x1Gb LACP config (see below), but I get slow performance from my VMs.
The storage is a NAS capable of around 400 MB/s, and it performs as expected when tested from a non-VM client on a 10Gb link.
On Proxmox I have three Windows Server 2012 R2 VMs, each with the VirtIO NIC (latest driver release). Using the NAS manufacturer's speed tool:
- 1 VM: roughly 1Gb read/write.
- 2 VMs: up to 2Gb aggregate reads and 1.4Gb aggregate writes, but not sustained.
- 3 VMs: up to 2.5Gb aggregate reads and 1.4Gb aggregate writes, and more erratic (larger and longer drops in performance).
- 3 non-VM systems plugged into the same switch: each sustains 1Gb read/write (315+ MB/s aggregate).
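To help separate storage behaviour from raw network behaviour, here's roughly how I plan to cross-check throughput with iperf3 (assuming iperf3 is available on the NAS or another wired host, plus the Windows build inside a VM; the IP below is just a placeholder):

# On the NAS (or any wired 10Gb host): start a listener
iperf3 -s

# Inside one Windows VM: single TCP stream for 30 seconds
iperf3 -c x.x.255.50 -t 30

# Same client, 4 parallel streams, to see if the aggregate climbs past ~1Gb
iperf3 -c x.x.255.50 -t 30 -P 4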
BTW, the physical switch shows the LACP LAG to the Proxmox server as negotiated and up.
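On the Proxmox side, this is what I'm using to confirm the bond itself looks healthy (host shell, nothing exotic):

# 802.3ad status, active aggregator, and per-slave link state
cat /proc/net/bonding/bond0

# Bond and bridge details as the kernel sees them
ip -d link show bond0
ip -d link show vmbr0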
Questions:
1: When using 1 VM with VirtIO (10Gb link speed), why do I only get 1Gb speed when the vmbr has a 4Gb connection?
- I would think a 10Gb VirtIO NIC would use the maximum bandwidth the LAG could provide.
2: Why am I seeing such poor performance with multiple VMs using the 4Gb LAG?
- It seems like they are fighting (badly) over limited bandwidth. One thing I was planning to try is a different transmit hash policy (see the snippet after these questions).
3: Is the vmbr Linux bridge itself limited in some way?
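For question 2, the experiment I have in mind (not applied yet, just a sketch of the change) is switching the bond to layer3+4 hashing, so multiple TCP streams between the same two IPs can spread across the slaves:

auto bond0
iface bond0 inet manual
    slaves eno1 eno2 eno3 eno4
    bond_miimon 100
    bond_mode 802.3ad
    # layer3+4 hashes on IP addresses plus TCP/UDP ports instead of MAC + IP,
    # so separate flows between the same two hosts can land on different links
    bond_xmit_hash_policy layer3+4

If my understanding of 802.3ad is right, any single flow still stays on one 1Gb member either way, so this would only help the multi-VM aggregate numbers.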
Config:
auto lo
iface lo inet loopback

iface eno4 inet manual
iface eno3 inet manual
iface eno1 inet manual
iface eno2 inet manual
iface enp132s0f0 inet manual
iface enp132s0f1 inet manual

auto bond0
iface bond0 inet manual
    slaves eno1 eno2 eno3 eno4
    bond_miimon 100
    bond_mode 802.3ad
    bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
    address x.x.255.20
    netmask 255.255.255.0
    gateway x.x.255.1
    bridge_ports bond0
    bridge_stp on
    bridge_fd 0

auto vmbr1
iface vmbr1 inet manual
    bridge_ports enp132s0f0 enp132s0f1
    bridge_stp off
    bridge_fd 0
Thanks!