Network Bonding / teaming / aggregation

hotwired007
I've been playing with my new 2.1 test cluster to find the best way of teaming/bonding my network ports, and whether it makes a notable difference.

My test environment is:
2x Dell PowerEdge 860 (dual-core Xeon 2.2GHz or 1.86GHz / 8GB RAM / 2x 1GbE NICs)
Netgear GS724T managed switch (I was using a GS716T but this seems to be wobbly - although it could just be my model)
Netgear ReadyNAS 8TB (2x 1GbE NICs)

I used CrystalDiskMark on a W2k3 box (P2V) and a W2k8 box (VM) to compare the results - on each VM I noticed that the speeds were identical (I had an issue with the HDD in PIO mode on the W2k3 box - after I'd fixed it they were practically the same).

I'm using an iSCSI connection to the ReadyNAS, with LVM on top of it in Proxmox.
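For anyone following along, the layering from the Proxmox host's shell looks roughly like this (the portal IP, target IQN and VG name are placeholders, not my actual values):

# log in to the ReadyNAS iSCSI target from the Proxmox host (open-iscsi)
iscsiadm -m discovery -t sendtargets -p 192.168.0.50
iscsiadm -m node -T iqn.1994-11.com.netgear:readynas:iscsi0 --login

# put an LVM volume group on the exported LUN (the /dev/sdX name will differ on your box)
pvcreate /dev/sdb
vgcreate vg_readynas /dev/sdb

The volume group is then added as LVM storage in the Proxmox GUI (Datacenter -> Storage) so the VM disks can live on it.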

The VM disks are raw files, using IDE mode.

All of the teams are configured to use ALB (adaptive load balancing), as this seems to give the best throughput. (I tried configuring trunking/LAG through the Netgear switch and it just caused issues.)
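For reference, a balance-alb bond on the Proxmox side is only a few lines in /etc/network/interfaces - this is a minimal sketch with placeholder interface names and addresses, not my exact config:

# /etc/network/interfaces (Debian/Proxmox 2.x style) - example only
auto bond0
iface bond0 inet manual
    slaves eth0 eth1
    bond_mode balance-alb
    bond_miimon 100

auto vmbr0
iface vmbr0 inet static
    address 192.168.0.20
    netmask 255.255.255.0
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0

The bridge sits on top of the bond, so the VMs and the host share the teamed link.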

I also have jumbo frames enabled on the ReadyNAS and the switch.
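One thing worth checking is that jumbo frames actually work end-to-end, since the Proxmox hosts need the larger MTU too, not just the NAS and the switch. A quick sanity test (assuming MTU 9000 and a NAS at 192.168.0.50 - both placeholders):

# raise the MTU on the bond and bridge on the Proxmox host
ip link set dev bond0 mtu 9000
ip link set dev vmbr0 mtu 9000

# 8972 = 9000 minus 20 bytes IP header and 8 bytes ICMP header;
# -M do forbids fragmentation, so this fails if jumbo frames are broken anywhere on the path
ping -M do -s 8972 -c 3 192.168.0.50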

When each device had only 1 NIC enabled I would only get approximately 50MB/s read / 40MB/s write (sequential) and 3.4 / 3.2 MB/s (random).

When the NAS had 2 NICs enabled I started getting 60MB/s read but still 40MB/s write (sequential), and 5.1 / 4.8 MB/s (random).

Now I have 2 NICs enabled on each of my servers and I'm getting roughly 65MB/s read and 50MB/s write on sequential transfers, and on random it's getting 5.5-6MB/s read and write. (I know 65MB/s / 50MB/s isn't very fast, but the random performance seems much more of an improvement.)

Any advice on improving the speed would be appreciated.
 
If you are trying to test network speed then I would take the NAS factor out (and would use Linux with a virtio network adapter if you can). Try to test only one thing at a time; mixing a lot of variables in the same test will not help you figure out the bottlenecks in your set-up.
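For example, raw network throughput between the hosts can be measured on its own with iperf, leaving the disks and VMs out of it entirely (the IP is a placeholder):

# on one end (e.g. the second Proxmox node)
iperf -s

# on the other end, run a 30-second test against it
iperf -c 192.168.0.21 -t 30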

Now, if your link is capable of 50/40MB per second in the worst case, why would you think that the very same link is the bottleneck for random read/write, which is only using about a tenth of that bandwidth? I am missing the logic here.
 
I'm not testing network speed, I'm trying to increase the VM hard disk transfer speeds.

When I have done comparative tests against local PC hard drives, the sequential transfer speeds have been faster but the random speeds have been much lower.

For example, my PowerEdge 1950 with 2 drives gets 70/64 and 1.3/2.1 in Proxmox 2, but gets 124/92 and 5.1/2.1 in Windows.

Most of the time it will be small random reads/writes on my servers rather than continuous transfers.

The question I raised was why a single network card is FASTER for everything than 2 network cards teamed, when in reality there should be more bandwidth. The bottleneck appears to be the network - the question is why, and how can I increase performance without losing the teaming aspect?
 
Because a bond does not increase the bandwidth between 2 points. You want to use MPIO (multipath I/O) for that.
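To sketch what that means in practice (interface names, IPs and the target IQN are placeholders, and the ReadyNAS target has to be reachable over both paths): give each NIC its own IP instead of bonding them, log in to the iSCSI target once per interface, and let dm-multipath spread the I/O over the two sessions.

# bind one open-iscsi interface to each physical NIC
iscsiadm -m iface -I iface-eth0 --op=new
iscsiadm -m iface -I iface-eth0 --op=update -n iface.net_ifacename -v eth0
iscsiadm -m iface -I iface-eth1 --op=new
iscsiadm -m iface -I iface-eth1 --op=update -n iface.net_ifacename -v eth1

# discover the target through both interfaces, then log in -> two sessions to the same LUN
iscsiadm -m discovery -t sendtargets -p 192.168.0.50 -I iface-eth0 -I iface-eth1
iscsiadm -m node -T iqn.1994-11.com.netgear:readynas:iscsi0 --login

# multipath-tools merges the duplicate /dev/sdX paths into a single /dev/mapper device
multipath -ll

The LVM volume group then goes on the multipathed device rather than on a single /dev/sdX path.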
 
