Slow network transfers on guest OS

Tumbleweed

Hello! I'm new to Proxmox and virtualization in general, and I'm seeing slow network transfer speeds on a guest I have been experimenting with: a home FTPS server running Windows Server 2012 R2. I ran this exact setup on dedicated hardware before and got much better transfer speeds over the network.

I have gone through the steps to install the VirtIO drivers for Windows (the guide I used: https://pve.proxmox.com/wiki/Paravirtualized_Block_Drivers_for_Windows), Windows boots successfully, and the network card is set to "VirtIO (paravirtualized)". Even after all that, I still can't seem to get better than 11 Mbps in either direction. Is this a Proxmox network issue? Other than the changes noted above I have only added a second storage drive; everything else is default. Has anyone here run into the same issue?

Physical hardware:
Ryzen 5 2600 (6-core)
Aorus B450 motherboard
16 GB RAM (non-ECC)
Kingston 120 GB SSD (Proxmox boot)
Samsung 970 EVO 512 GB NVMe (VM boot)
Seagate 1 TB 5400 RPM HDD (VM storage)
 
Hi,

Please post the config for the Windows VM (qm config <VMID>) and the host network config (cat /etc/network/interfaces) as well.
 
Thanks for your response! The info you asked for is below.

-------------qm config 100--------------------
balloon: 512
boot: c
bootdisk: virtio0
cores: 4
cpu: host
ide2: none,media=cdrom
memory: 4000
name: WIN-SERVER
net0: virtio=76:D0:1C:33:0A:0A,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: win8
scsihw: virtio-scsi-pci
smbios1: uuid=01f10b6b-59a1-411e-a8de-5100a43eec6b
sockets: 1
usb0: host=174c:55aa,usb3=1
virtio0: Main_Share:vm-100-disk-0,size=200G
virtio1: Spinning_Rust:vm-100-disk-0,size=557328630K
vmgenid: 042db3df-f2a8-4630-bb4d-abe96c6c568c

-------------cat /etc/network/interfaces --------------------
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
address 192.168.1.70
netmask 255.255.255.0
gateway 192.168.1.1
bridge_ports eno1
bridge_stp off
bridge_fd 0
 
Hi,

If it is still slow, try changing the network model to E1000; for more information see [0].

https://pve.proxmox.com/wiki/Paravirtualized_Network_Drivers_for_Windows#Alternative:_e1000
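
For example, from the host shell (using the VMID and MAC from your config above; the GUI's Hardware tab does the same thing), followed by a full stop and start of the VM:

qm set 100 --net0 e1000=76:D0:1C:33:0A:0A,bridge=vmbr0,firewall=1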

I hope this helps!

Unfortunately, that is where I started, and it was the motivation for installing the VirtIO drivers. I got much better transfer speeds when I was running on dedicated hardware. With the Intel E1000 setting I was getting only about 11 Mbps in either direction, and after installing VirtIO the network transfers are still just as slow. Also, I have verified with CrystalDiskMark that my C: drive is now running at the speed I would expect from the Samsung 970 EVO.

Tested = no change
 
I will keep an eye on this thread, as I have the same issue. I just moved from XenServer, where it all worked a charm, and I am very keen to stick with Proxmox.

Expected disk speeds and the like are fine inside the VM, but any form of LAN access maxes out at about 12 MBps for the Windows 7 VMs, regardless of which network card is passed to the VM and whether or not I use the VirtIO drivers.

In the interest of adding info, mine is below. Yes, the bare-metal host has 4 Ethernet ports; I'm just using one at the moment. Note that speed to the host itself is fine; it's just the Windows 7 VM that isn't seeing full NIC speeds.

----------qm config 103----------
boot: dcn
bootdisk: sata0
cores: 2
cpu: host
ide2: none,media=cdrom
localtime: 1
memory: 4096
name: Leviathan
net0: virtio=A2:94:05:F7:04:6E,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
sata0: VM-SAS-3TB-D1-P1-2TB:103/vm-103-disk-0.qcow2,size=25G
sata1: /dev/disk/by-id/ata-HGSTsize=3907018584K
sata2: /dev/disk/by-id/ata-ST32000542ASsize=2000394706432
sata3: /dev/disk/by-id/ata-ST32000542ASsize=2000397852160
sata4: /dev/disk/by-id/ata-ST3160023ASsize=160040803840
sata5: /dev/sdb2,size=801568660992
smbios1: uuid=1a6c8ea2-23a2-4ad9-9fc7-943541cd841e
sockets: 2
startup: order=2
vmgenid: 87387629-0267-4f14-b939-0e0afb5b4636

---------- cat /etc/network/interfaces ----------
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
address 10.0.0.101
netmask 255.255.255.0
gateway 10.0.0.5
bridge_ports eno1
bridge_stp off
bridge_fd 0

iface eno2 inet manual

iface eno3 inet manual

iface eno4 inet manual
 
What speeds are you getting between Proxmox hosts using iperf?

What speeds are you getting between two Guests on the same node?

What speeds are you getting between two Guests on different nodes?
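
For reference, a minimal way to run each test (assuming the iperf package is installed on both ends; <server-ip> is a placeholder for whichever machine you are measuring against):

iperf -s              # run this on the receiving end
iperf -c <server-ip>  # run this on the sending end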
 
Evening, and thanks for the reply.

I did repeat the tests numerous times. The long and short of it: given recent developments I have had a good bit of time on my hands, so I just ended up reinstalling everything (including rerolling the Windows VM), and that seemed to resolve the issue. From what I can tell nothing changed, as it was a base install with the same settings, and likewise for the Windows VM rebuilt from scratch. Note that I did let the VM update fully on each install (the one with the issue and the one afterwards), all using a valid Windows key bought specifically for this VM.

During the issue:
Node to external wired Linux unit: 982 Mbit/s (avg)
Node VM Linux to Node VM Linux: 467 Mbit/s (avg)
Node VM Win7 to external Linux, SMB 2.1 file transfer: 11 Mbit/s (avg)

Post reinstall:
Node to external wired Linux unit: 974 Mbit/s (avg)
Node VM Linux to Node VM Linux: 466 Mbit/s (avg)
Node VM Win7 to external Linux, SMB 2.1 file transfer: 870 Mbit/s (avg)

Given that this will do for my needs, I'm not going to dig much deeper into it. I did try all the variations of the network driver while the issue was present, with only a 1 or 2 Mbit/s variance. The disk on the remote unit is a PCIe M.2; the disk on Proxmox was a dedicated SATA M.2 in a SATA enclosure, which ran at near enough SATA saturation speed during the tests. I also tried several times to port over the original working Citrix VM, right down to reinstalling it (with testing before and after Windows Update patching, including SP1 for Windows 7).

Whilst I nipped in on the OP's thread and my own issue is sorted, I came back to add some information rather than just letting the thread go stale.
 
I'm having the same issue as all above.

I seem to get 10.9 Mbit/s copy speeds with the E1000 driver and 11.1 Mbit/s with the VirtIO driver on an Ubuntu 18.04.4 LTS VM, both over OpenSSH and Samba SMB, copying from/to two different physical Windows 10 machines.

I tried creating a Samba share directly on the PVE host and it made no difference, which suggests to me that it's an issue with PVE rather than with the VM or how the VM is handled.

I also tried switching to the other port on the motherboard (one is usually used for Supermicro's iLO equivalent) and it made no difference; I'm still getting the same speeds.

Does anybody have any other ideas?

Result of "cat /etc/network/interfaces" on PVE host:

auto lo
iface lo inet loopback

iface enp9s0 inet manual

iface enp10s0 inet manual

auto vmbr0
iface vmbr0 inet static
address 192.168.1.10/24
gateway 192.168.1.1
bridge-ports enp10s0
bridge-stp off
bridge-fd 0
 
I'm seeing the same thing.
I have an NFS server as a guest. It initially started out with the default VirtIO network card, 1 CPU core and 1 GB RAM.
My LAN is 1 Gbps all the way from the router, over the switches, to the guests.
With this setup I see about 130-150 Mbps from my HP Z440 workstation (running Kubuntu 20.04) to the NFS server with FileZilla.

Just today I experimented with replacing the VirtIO NIC with the E1000 and adding more cores (now 4) to the NFS server, and got just under 200 Mbps.
With 8 cores I'm consistently seeing 250-300 Mbps on big transfers.
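
For anyone following along, the core count can also be changed from the host shell (hypothetical VMID; the guest needs to be restarted to pick it up):

qm set <VMID> --cores 8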

My Dell R710 host, however, only has 16 cores in total to share with the other guests, so I'm not sure how good an idea it is to add even more cores to the NFS server...

In any case, 300 Mbps is a far cry from the speeds I saw pre-Proxmox: a consistent 980-1000 Mbps. :-/

Why are the network speeds so low??
 
I did some measuring with iperf.
I know GUI transfers are slower, but the speeds measured in a terminal, without much overhead, aren't that great either.


Guest 4 cores, 1 GB RAM
=================

root@nfs:~ # iperf -c 192.168.100 -d
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 128 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.100, TCP port 5001
TCP window size: 230 KByte (default)
------------------------------------------------------------
[ 5] local 192.168.0.11 port 51966 connected with 192.168.0.100 port 5001
[ 4] local 192.168.0.11 port 5001 connected with 192.168.0.100 port 59610
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-10.0 sec 655 MBytes 549 Mbits/sec
[ 4] 0.0-10.0 sec 587 MBytes 491 Mbits/sec




Guest 8 cores, 1 GB RAM
=================

root@nfs:~ # iperf -c 192.168.100 -d
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 128 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.100, TCP port 5001
TCP window size: 196 KByte (default)
------------------------------------------------------------
[ 5] local 192.168.0.11 port 38762 connected with 192.168.0.100 port 5001
[ 4] local 192.168.0.11 port 5001 connected with 192.168.0.100 port 58814
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-10.0 sec 483 MBytes 404 Mbits/sec
[ 4] 0.0-10.1 sec 724 MBytes 603 Mbits/sec
 
Holy smokes!!


root@nfs:~ # iperf -c 192.168.100 -d
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 128 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.100, TCP port 5001
TCP window size: 348 KByte (default)
------------------------------------------------------------
[ 5] local 192.168.0.11 port 34302 connected with 192.168.0.100 port 5001
[ 4] local 192.168.0.11 port 5001 connected with 192.168.0.100 port 47660
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-10.0 sec 1.09 GBytes 937 Mbits/sec
[ 4] 0.0-10.0 sec 1.09 GBytes 930 Mbits/sec
root@nfs:~ #


I used this guide to temporarily disable hardware checksum offloading:
https://michael.mulqueen.me.uk/2018/08/disable-offloading-netplan-ubuntu/

Will look into disabling it permanently after some testing.
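
For reference, the temporary change is essentially something along the lines of this one-liner inside the guest (ens18 being the VirtIO NIC here; ethtool settings don't survive a reboot):

ethtool -K ens18 tx off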

Thanks a bunch for setting me on the right track, gbetous!
 
I just noticed that uploads to the Proxmox guest are still maxed out at about 100 Mbps.
Running ethtool -k ens18 on the guest, I noticed the output below, which I gather means that checksum offloading isn't disabled on the receive side, right?

So what now?
Is there a way to disable rx-checksumming as well?

# ethtool -k ens18

Features for ens18:
rx-checksumming: on [fixed]
tx-checksumming: off
tx-checksum-ipv4: off [fixed]
tx-checksum-ip-generic: off
tx-checksum-ipv6: off [fixed]
tx-checksum-fcoe-crc: off [fixed]
tx-checksum-sctp: off [fixed]
...
 
What about Win2012R2? Transfers are no more than 50 Mbit/s, and switching to E1000 does not help.
 
I used this guide to temporarily disable hardware checksum offloading:
https://michael.mulqueen.me.uk/2018/08/disable-offloading-netplan-ubuntu/

Did you do that on the guest OS or on Proxmox?
 
