Hi,
OK, I've got a bandwidth issue with Proxmox: it's not running nearly as fast as I think it should. I'm using iperf to test throughput.
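(All of the tests below are plain single-stream iperf runs with default settings, so roughly the following on each end; the target IP/hostname varies per test.)
Code:
# on the receiving end
iperf -s

# on the sending end (example target; substitute the host under test)
iperf -c 172.28.18.250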
Here's my current network setup. The switch is set up for bonding, and that seems to be working fine according to the kernel messages; I can pull a NIC or two without issues. eth4 is PCI passed through to a pfSense VM, so it's not used here.
Code:
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual
iface eth1 inet manual
iface eth2 inet manual
iface eth3 inet manual

auto bond0
iface bond0 inet manual
        slaves eth0 eth1 eth2
        bond_miimon 100
        bond_mode 802.3ad

auto vmbr0
iface vmbr0 inet static
        address  172.28.18.250
        netmask  255.255.255.0
        gateway  172.28.18.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
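For what it's worth, I've mainly been judging the bond from the kernel messages; I assume the LACP state can also be double-checked with something like:
Code:
# bond mode, 802.3ad aggregator info and per-slave link state
cat /proc/net/bonding/bond0

# negotiated speed/duplex on each slave NIC
ethtool eth0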
So, iperf from an OpenVZ container (Ubuntu 12.04 64-bit) to the Proxmox host...
Code:
Client connecting to 172.28.18.250, TCP port 5001
TCP window size: 23.8 KByte (default)
------------------------------------------------------------
[  3] local 172.28.18.25 port 44465 connected with 172.28.18.250 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.2 sec  2.49 GBytes  2.10 Gbits/sec
That seems fine to me; more than 1 Gigabit, which is all I really need.
Next, a KVM Windows Server 2012 VM with VirtIO disk and network to the Proxmox host:
Code:
Client connecting to 172.28.18.250, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  3] local 172.28.18.15 port 55690 connected with 172.28.18.250 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  2.56 GBytes  2.19 Gbits/sec
This is also fine, very similar to OpenVZ. CPU usage jumps up but isn't maxing the machine at all, not even 50% of the 2 virtual CPUs.
From the Server 2012 KVM machine to the OpenVZ container (Ubuntu 12.04) I get:
Code:
Client connecting to kt-download, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  3] local 172.28.18.15 port 55694 connected with 172.28.18.25 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.09 GBytes   932 Mbits/sec
Slower, but I could live with that, though I'm not sure why, as both of these VMs are on the same host.
Now, from a KVM Windows 7 VM with VirtIO to the Server 2012 KVM machine I get...
Code:
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  4] local 172.28.18.15 port 5001 connected with 172.28.18.114 port 50977
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.1 sec   145 MBytes   121 Mbits/sec
This is awful, and it's still on the same host.
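One thing I haven't ruled out is the offload settings on the bond slaves and on the KVM tap interfaces. A sketch of what I mean (the tap name is a guess based on the usual Proxmox tap<VMID>i<N> naming):
Code:
# current offload settings on a bond slave and on a guest's tap device
ethtool -k eth0
ethtool -k tap100i0

# e.g. toggling generic receive offload for a test run (assumption on my part)
ethtool -K eth0 gro off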
From an external Windows 7 machine to the Proxmox host I get...
Code:
------------------------------------------------------------
Client connecting to 172.28.18.250, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  3] local 172.28.18.249 port 63489 connected with 172.28.18.250 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   425 MBytes   357 Mbits/sec
Not good, as it's Gigabit at each end.
To the Server 2012 KVM from the physical Windows 7 machine:
Code:
------------------------------------------------------------
Client connecting to kt-file, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  3] local 172.28.18.249 port 63519 connected with 172.28.18.15 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   312 MBytes   261 Mbits/sec
It's got slower...
Finally, from the Windows 7 machine to a physical Windows Server 2012 machine: both are Gigabit, but they sit on two different switches, whereas the Windows 7 machine is on the same switch as the Proxmox host.
Code:
------------------------------------------------------------
Client connecting to 172.28.18.6, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  3] local 172.28.18.249 port 63550 connected with 172.28.18.6 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   763 MBytes   640 Mbits/sec
Sorry for so many tests, but something has to be wrong with my Proxmox networking somewhere. This machine was running Server 2012 beforehand with much better performance.
Any ideas on how I can improve this so the VMs can at least max a Gigabit link? I was hoping I could max a couple of links to different machines, seeing as I've bonded, but I can't even max a single one currently.
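I can also re-run the external tests with parallel streams and a larger window, to see whether the default 64 KByte TCP window is what's limiting a single stream; something like:
Code:
# from the Windows 7 machine to the Proxmox host:
# 4 parallel streams, 256 KByte window, 30-second run
iperf -c 172.28.18.250 -P 4 -w 256k -t 30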