Hello,
I have a curious network bandwidth problem.
Here is my setup:
2 identical Proxmox VE 3.4 nodes, each with:
- Tyan S5393 motherboard, 2x onboard Intel gigabit NICs
- 1x Intel 10-gigabit NIC, dedicated to DRBD
- 1x Intel PRO/1000 CT Desktop, Intel 82574L chipset
- 1x Realtek 100 Mbit/s NIC
plus a 3rd node for HA quorum.
The guest is a fresh install of pfSense v2.2.4 with 4 virtio network cards: LAN (onboard Intel), LAN2 (not used here), WAN1 and WAN2.
For the iperf tests:
192.168.1.20 : Synology RS2212+
192.168.1.12 : proxmox02 (node 2)
192.168.1.253 : pfSense LAN virtio NIC
"Hardware Checksum Offloading" is disabled in pfSense.
Everything is plugged into a Cisco SG200 switch.
The pfSense guest runs on proxmox02.
* Test between the Synology and the proxmox02 host:
192.168.1.12 : iperf -s
192.168.1.20 : iperf -c 192.168.1.12
Result: 941 Mbit/s
192.168.1.20 : iperf -s
192.168.1.12 : iperf -c 192.168.1.20
Result: 941 Mbit/s as well
* Test between the Synology and the pfSense LAN virtio NIC (guest on proxmox02):
192.168.1.253 : iperf -s
192.168.1.20 : iperf -c 192.168.1.253
Result: 148 Mbit/s
192.168.1.20 : iperf -s
192.168.1.253 : iperf -c 192.168.1.20
Result: 220 Mbit/s
* Test between the proxmox02 host and the pfSense LAN virtio NIC (not the same physical card):
192.168.1.253 : iperf -s
192.168.1.12 : iperf -c 192.168.1.253
Result: 150 Mbit/s
192.168.1.12 : iperf -s
192.168.1.253 : iperf -c 192.168.1.12
Result: 340 Mbit/s
So pfSense throughput seems to stay around 150 Mbit/s.
I get the same result after:
- changing the guest network card type to Intel e1000;
- running the guest on the proxmox01 host;
- turning off checksum offload on the physical Ethernet card (ethtool -K eth0 tx off).
With a Windows 7 Pro guest, I get around 500-600 Mbit/s.
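For reference, the offload setting mentioned above can be inspected and toggled with ethtool; a minimal sketch, assuming eth0 is the physical port behind the Proxmox bridge (adjust the interface name to your host):

```shell
# Show the current offload settings on the physical NIC
ethtool -k eth0

# Disable transmit checksum offload (the change already tried above)
ethtool -K eth0 tx off

# Possibly also worth trying: disable segmentation and receive offloads,
# which can interact badly with virtio/bridged guest traffic
ethtool -K eth0 tso off gso off gro off
```

Note the case difference: lowercase -k only displays the settings, uppercase -K changes them; the changes do not persist across a reboot unless added to the interface configuration.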
Code:
root@proxmox02:~# pveversion -V
proxmox-ve-2.6.32: 3.4-165 (running kernel: 3.10.0-13-pve)
pve-manager: 3.4-11 (running version: 3.4-11/6502936f)
pve-kernel-2.6.32-40-pve: 2.6.32-160
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-3.10.0-13-pve: 3.10.0-38
pve-kernel-3.10.0-8-pve: 3.10.0-30
pve-kernel-3.10.0-5-pve: 3.10.0-19
pve-kernel-3.10.0-11-pve: 3.10.0-36
pve-kernel-2.6.32-42-pve: 2.6.32-165
pve-kernel-2.6.32-37-pve: 2.6.32-150
pve-kernel-2.6.32-34-pve: 2.6.32-140
pve-kernel-2.6.32-31-pve: 2.6.32-132
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-3
pve-cluster: 3.0-19
qemu-server: 3.4-6
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-33
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-11
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
I am not at that location at the moment.
Maybe I should test with the Intel PRO/1000 CT Desktop card when I can?
Any ideas? Thank you.