VM network speed slowing down

No problem, amlil.

You're right, speedtest-cli returns very ugly results... an upload average of 300 Mb/s, like you ;)

BUT:

I'm personally not really confident in this tool...

You know, I'm pragmatic, and for me the best bandwidth test is a basic file download/upload.

Here are the results of a simple wget command:

A 3.7 GB file (debian-8.5.0-amd64-DVD-1.iso) hosted on an LXC container on the Proxmox host.

The client is an online.net server (1 Gb/s both ways).

Code:
HTTP request sent, awaiting response... 200 OK
Length: 3992977408 (3.7G) [application/x-iso9660-image]
Saving to: '/dev/null'

100%[=================================================================================================================>] 3,992,977,408  107M/s   in 38s

2016-09-16 14:10:29 (100 MB/s) - '/dev/null' saved [3992977408/3992977408]

It is a concrete test!
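
(For reference, the command behind the output above is presumably along these lines; /dev/null as the output target matches the log, while the host/path is an assumption based on the ISO named earlier.)

Code:
# Assumed invocation - <container-host> is a placeholder for the LXC container serving the ISO
wget -O /dev/null http://<container-host>/debian-8.5.0-amd64-DVD-1.iso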

On the other hand, iperf returns this result (same server/LXC container):



So... what should we think about this?
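
(For context, the iperf numbers quoted in this thread come from the stock client/server pair; a run along these lines is presumably what was used, with the container address as a placeholder.)

Code:
# On the LXC container (server side) - assumed invocation
iperf -s
# On the remote online.net client - assumed invocation, 30-second run
iperf -c <container-ip> -t 30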

Hi @PhilV, I'm sorry for the delay in answering. Yes, I can confirm that speedtest is not reliable in this case. I have done a test like yours, downloading a file from the VM, and I really can download at about 100 MB/s, which means roughly 800 Mbps of upload from the VM - that is great.

So at this time, everything seems to be working as expected. I hope the same for you! ;)

Greetings!
 
The speed tests from this morning clearly show that OVH's hardware is overloaded during daytime traffic.

And they prove that the issue is NOT software related!

Yes, it seems to be exactly the same issue. You should contact OVH support (if you have not already) and send them your tests and iperf results.

Supposedly it is now a known issue for OVH. I hope for the best in your case... good luck! ;)
 
Since this morning, the speed has been slowing down again...

On the dedicated server:

Code:
wget http://proof.ovh.net/files/10Gb.dat
2016-09-23 12:22:51 (111 MB/s)

On the VM:
Code:
wget http://proof.ovh.net/files/10Gb.dat
2016-09-23 12:35:28 (1,91 MB/s)

Meanwhile, the iperf values are OK.

It's becoming really frustrating...
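
(If it helps when escalating to support, here is a minimal sketch of how the host-vs-VM comparison above could be repeated and logged over the day. It assumes SSH access to both machines; "host1", "vm1", the log path, and the grep pattern for wget's English output are all assumptions.)

Code:
#!/bin/bash
# Sketch: log wget throughput from the dedicated server and from a VM every 15 minutes.
# "host1" and "vm1" are placeholder SSH names for the host and the guest.
for run in $(seq 1 48); do
    for target in host1 vm1; do
        rate=$(ssh root@"$target" \
            "wget -O /dev/null http://proof.ovh.net/files/10Gb.dat 2>&1 | grep -o '([0-9.,]* [KMG]B/s)' | tail -n1")
        echo "$(date -Is) $target $rate" >> /var/log/bw-compare.log
    done
    sleep 900
done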
 
Gosh... and here I thought I was going nuts. This is the *EXACT* issue I've seen happening for the past month or two, if not for almost a full year considering how it can come and go.

SoYouStart 32G server (E3-1245v2, with an Intel 82574L NIC), BHS2 datacenter

Here's the best part: it happens on XenServer 6.5 (SP1) as well! I made the full migration to Proxmox only to find that it's happening here too. XenServer had my uploads to the server vary in speed until rebooting the VM. We're talking fluctuating throughput speeds as bad as 1-3 Mbps.

I'll be opening a ticket (again) with OVH and giving them a call about it, though I'm not getting my hopes up at this point after reading through the 4 pages here. I'll be sure to reference the thread. ~_~

I've attached a fun example graph of how the throughput drops off uploading from my 25 Mbps line into the server. Steady going until... *boom*... random timing. Usually takes about 5-15 minutes for it to then drop off in speed like this which then requires the container to be shut down and restarted. I can replicate this flawlessly every time.

I'm at a loss for what's happening.
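
(A rough sketch of how the drop-off timing can be captured without babysitting the transfer, assuming an iperf3 server is already running inside the VM; the address is a placeholder.)

Code:
# Long upload toward the VM with per-second reporting, so the exact moment
# the rate collapses is visible in the log (sketch)
iperf3 -c <vm-ip> -t 1800 -i 1 | tee upload-drop.log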
 

Attachment: Screenshot 2016-09-23 21.39.07.png (graph of the upload throughput dropping off)
Gosh... and here I thought I was going nuts. This is the *EXACT* issue I've seen happening for the past month or two, if not for almost a full year considering how it can come and go.

SoYouStart 32G server (E3-1245v2, with an Intel 82574L NIC), BHS2 datacenter

Here's the best part: it happens on XenServer 6.5 (SP1) as well! I made the full migration to Proxmox only to find that it's happening here too. XenServer had my uploads to the server vary in speed until rebooting the VM. We're talking fluctuating throughput speeds as bad as 1-3 Mbps.

I'll be opening a ticket (again) with OVH and giving them a call about it, though I'm not getting my hopes up at this point after reading through the 4 pages here. I'll be sure to reference the thread. ~_~

I've attached a fun example graph of how the throughput drops off uploading from my 25 Mbps line into the server. Steady going until... *boom*... random timing. Usually takes about 5-15 minutes for it to then drop off in speed like this which then requires the container to be shut down and restarted. I can replicate this flawlessly every time.

I'm at a loss for what's happening.

I got a reply from OVH that they will not do anything about it.
Since this morning, the speed has been slowing down again...

On the dedicated server:

Code:
wget http://proof.ovh.net/files/10Gb.dat
2016-09-23 12:22:51 (111 MB/s)

On the VM:
Code:
wget http://proof.ovh.net/files/10Gb.dat
2016-09-23 12:35:28 (1,91 MB/s)

Meanwhile, the iperf values are OK.

It's becoming really frustrating...


Can confirm it's still an issue:

Downloading the test file from the host IP:

Code:
root@server4:/home/user# wget http://proof.ovh.net/files/10Gb.dat
--2016-09-24 13:48:48--  http://proof.ovh.net/files/10Gb.dat
Resolving proof.ovh.net (proof.ovh.net)... 188.165.12.106, 2001:41d0:2:876a::1
Connecting to proof.ovh.net (proof.ovh.net)|188.165.12.106|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1250000000 (1.2G) [application/octet-stream]
Saving to: ‘10Gb.dat’

10Gb.dat                                  100%[=====================================================================================>]   1.16G   112MB/s   in 11s

2016-09-24 13:48:59 (112 MB/s) - ‘10Gb.dat’ saved [1250000000/1250000000]

Downloading the test file from the failover IP:

Code:
root@vpn:/home/deploy# wget http://proof.ovh.net/files/10Gb.dat
--2016-09-24 11:49:04--  http://proof.ovh.net/files/10Gb.dat
Resolving proof.ovh.net (proof.ovh.net)... 188.165.12.106, 2001:41d0:2:876a::1
Connecting to proof.ovh.net (proof.ovh.net)|188.165.12.106|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1250000000 (1.2G) [application/octet-stream]
Saving to: '10Gb.dat.1'

7% [========>                                                                                                                    ] 93,763,525   588KB/s  eta 28m 44s
 
There is only one bridge, vmbr0; it uses the same network model as OVH's guides suggest, with IP failover + virtual MAC.
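
(For readers comparing setups: a single-bridge layout like the one described here usually looks something like the sketch below in /etc/network/interfaces, with the VMs using their failover IPs plus OVH virtual MACs. Interface names and addresses are placeholders, not this poster's actual config.)

Code:
# /etc/network/interfaces (sketch) - one bridge on the physical NIC
auto vmbr0
iface vmbr0 inet static
        address  <main-server-ip>
        netmask  255.255.255.0
        gateway  <gateway-ip>
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0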

My servers have 2 NICs: one external with the IP you get when you order your server, and one internal NIC on the vRack; that is the NIC where you bind your VMs. Is your setup the same or not?
 
Hi,
Sorry to read that the issue is not resolved for everyone.
For my part, everything is fine now...
I hope it will not come back!

I can only suggest that you keep working with OVH support!

Good luck,
 
My servers have 2 NICs: one external with the IP you get when you order your server, and one internal NIC on the vRack; that is the NIC where you bind your VMs. Is your setup the same or not?

It's not; we are not using vRack, but routing through the main IP of the common NIC.
 
Hi,
Sorry to read that the issue is not resolved for everyone.
For my part, everything is fine now...
I hope it will not come back!

I can only suggest that you keep working with OVH support!

Good luck,

Are you using vRack on a second NIC for your VMs?
 
Hi,
Sorry to read that the issue is not resolved for everyone.
For my part, everything is fine now...
I hope it will not come back!

I can only suggest that you keep working with OVH support!

Good luck,

Well, I guess if OVH doesn't do anything about this, we will just have to change datacenters.
 
Well, I guess if OVH doesn't do anything about this, we will just have to change datacenters.
I'm still working with them (as I type this) on the ticket I opened this morning. I provided a simple step-by-step they could follow to test with another (SYS) server running Proxmox 4.2. I'm not sure whether they plan to go that extra mile yet, though.

Either way, I had the OVH rep/tech run an iperf3 test to my VM, which hit the full 250 Mbps (most likely BHS>BHS). Now he appears to be testing from elsewhere (GRA>BHS perhaps?) and is seeing the 1-5 Mbps limitation (with 5 concurrent connections hitting ~25 Mbps or so in total). So as soon as the traffic is inter-datacenter (France to Canada), the speeds plummet.

Testing from my TWC (to BHS) connection shows only slightly faster speeds.

Code:
iperf3 -c bhs
Connecting to host bhs, port 5201
[  4] local 192.168.0.6 port 36322 connected to port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  1.97 MBytes  16.5 Mbits/sec    1   63.6 KBytes
[  4]   1.00-2.00   sec  1.43 MBytes  12.0 Mbits/sec    3   42.4 KBytes
[  4]   2.00-3.00   sec   764 KBytes  6.25 Mbits/sec    4   33.9 KBytes
[  4]   3.00-4.00   sec  1.30 MBytes  10.9 Mbits/sec    1   48.1 KBytes
[  4]   4.00-5.00   sec  1.12 MBytes  9.38 Mbits/sec    3   43.8 KBytes
[  4]   5.00-6.00   sec  1.12 MBytes  9.38 Mbits/sec    5   42.4 KBytes
[  4]   6.00-7.00   sec   954 KBytes  7.82 Mbits/sec    8   31.1 KBytes
[  4]   7.00-8.00   sec   764 KBytes  6.26 Mbits/sec    3   33.9 KBytes
[  4]   8.00-9.00   sec  1.12 MBytes  9.38 Mbits/sec    1   45.2 KBytes
[  4]   9.00-10.00  sec  2.05 MBytes  17.2 Mbits/sec    0   72.1 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  12.5 MBytes  10.5 Mbits/sec   29             sender
[  4]   0.00-10.00  sec  12.2 MBytes  10.3 Mbits/sec                  receiver

iperf Done.

My OVH box in the GRA datacenter (France) to BHS:

Code:
iperf3 -c bhs
Connecting to host bhs, port 5201
[  4] local GRA port 33466 connected to port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.01   sec  1.07 MBytes  8.94 Mbits/sec   13   89.1 KBytes
[  4]   1.01-2.01   sec   840 KBytes  6.86 Mbits/sec    3   49.5 KBytes
[  4]   2.01-3.00   sec   492 KBytes  4.06 Mbits/sec    2   41.0 KBytes
[  4]   3.00-4.00   sec   506 KBytes  4.14 Mbits/sec    1   33.9 KBytes
[  4]   4.00-5.01   sec   362 KBytes  2.96 Mbits/sec    1   31.1 KBytes
[  4]   5.01-6.01   sec   404 KBytes  3.30 Mbits/sec    0   39.6 KBytes
[  4]   6.01-7.00   sec   455 KBytes  3.76 Mbits/sec    0   48.1 KBytes
[  4]   7.00-8.01   sec   484 KBytes  3.95 Mbits/sec    4   42.4 KBytes
[  4]   8.01-9.01   sec   430 KBytes  3.51 Mbits/sec    3   25.5 KBytes
[  4]   9.01-10.00  sec   308 KBytes  2.54 Mbits/sec    0   32.5 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  5.25 MBytes  4.41 Mbits/sec   27             sender
[  4]   0.00-10.00  sec  4.99 MBytes  4.18 Mbits/sec                  receiver

iperf Done.

Let's see where they go with this. The puzzling part is that one would think this is network congestion (at 11 AM ET?), but rebooting the VM fixes it, which throws that idea right out the window. ~_~
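
(The ~25 Mbps multi-stream figure mentioned above presumably comes from running parallel streams; for reference, that kind of comparison is along these lines, with the target address as a placeholder.)

Code:
# Single stream vs. 5 parallel streams against the same iperf3 server (sketch)
iperf3 -c <bhs-vm-ip> -t 10
iperf3 -c <bhs-vm-ip> -t 10 -P 5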
 
Let's see where they go with this. The puzzling part is that one would think this is network congestion (at 11 AM ET?), but rebooting the VM fixes it, which throws that idea right out the window. ~_~

I think it's caused by the connection being reset on the switch; I've seen high speeds for about 2-3 seconds and then they drop FAST. The effect seems larger when you reboot the VM (you get the same effect if you reset the network on the VM, too). I'm sure it's not Proxmox, because in the early morning, when there's barely any traffic, the speeds stay stable and fast.
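
(For anyone who wants to try the lighter-weight reset mentioned here instead of a full VM reboot, inside the guest it is typically something like the following; the interface name is a placeholder and the commands assume a Debian-style guest.)

Code:
# Inside the VM: bounce the interface instead of rebooting (sketch)
ifdown eth0 && ifup eth0
# or restart the whole networking service on Debian-style guests
systemctl restart networking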
 
We have 4 nodes running in Strasbourg. No issue like this; all are on the vRack, as they recommend.
 
SBG2
I think you should follow their advice and use the vRack link they mention!

Sorry, I see now: 2 x SBG1 and 2 x SBG2.
 
