So I have this weird problem where downloading via sftp from my VMs on a 1 GbE network drops to around 48-54 MB/s instead of the usual 100-111 MB/s. All VMs run Debian Trixie and are configured like this:
Memory | 16.00 GiB
Processors | 4 (1 socket, 4 cores) [host]
BIOS | OVMF (UEFI)
Machine | q35
SCSI Controller | VirtIO SCSI single
Hard Disk (scsi0) | pve-data:vm-1000-disk-1,aio=native,discard=on,iothread=1,size=32G
EFI Disk | pve-data:vm-1000-disk-0,pre-enrolled-keys=0,size=1M
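(That's the hardware summary; the same config can be dumped on the pvehost with qm config, using the VM id from the disk names above:)
Code:
qm config 1000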
So far I've ruled out any issues with NFS or Virtiofs, because I have been writing to scsi0, which is directly attached to the VM (as above), and I can still reproduce the issue. I also don't have an issue when connecting to the pvehost directly and writing to /root.
It's also not a storage device issue, because transferring between two VMs on the same bridge (vmbr0) I am able to get about 280 MB/s, which is more than even a 1 GbE connection would provide.
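(A more direct disk check is reading a large file straight off the scsi0-backed filesystem with the page cache bypassed; the path here is just a placeholder:)
Code:
dd if=/path/to/large_file of=/dev/null bs=1M iflag=direct status=progress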
Therefore I don't think it's a resource or hardware issue. Both physical network interfaces are Intel I350 ports bonded together like so:
Code:
auto bond0
iface bond0 inet static
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 50-60

auto vmbr0.50
iface vmbr0.50 inet static
    address 192.168.50.253/24
    gateway 192.168.50.1
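For completeness, the bond and the VLAN filtering on the bridge can be verified on the host through the standard kernel interfaces, e.g.:
Code:
cat /proc/net/bonding/bond0
bridge vlan show dev bond0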
I also tried iperf against both the guest VMs and the pve host and didn't see much difference. For example, on a virtual machine:
Code:
iperf -s -i 2 -t 30
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 128 KByte (default)
------------------------------------------------------------
[ 1] local 192.168.51.40 port 5001 connected with 192.168.31.20 port 58006
[ ID] Interval Transfer Bandwidth
[ 1] 0.0000-2.0000 sec 222 MBytes 932 Mbits/sec
[ 1] 2.0000-4.0000 sec 224 MBytes 940 Mbits/sec
[ 1] 4.0000-6.0000 sec 224 MBytes 940 Mbits/sec
[ 1] 6.0000-8.0000 sec 224 MBytes 940 Mbits/sec
[ 1] 8.0000-10.0000 sec 224 MBytes 940 Mbits/sec
[ 1] 10.0000-12.0000 sec 224 MBytes 940 Mbits/sec
[ 1] 12.0000-14.0000 sec 224 MBytes 939 Mbits/sec
[ 1] 14.0000-16.0000 sec 224 MBytes 940 Mbits/sec
[ 1] 16.0000-18.0000 sec 224 MBytes 940 Mbits/sec
[ 1] 18.0000-20.0000 sec 224 MBytes 939 Mbits/sec
[ 1] 20.0000-22.0000 sec 224 MBytes 939 Mbits/sec
[ 1] 22.0000-24.0000 sec 224 MBytes 940 Mbits/sec
[ 1] 24.0000-26.0000 sec 224 MBytes 940 Mbits/sec
[ 1] 26.0000-28.0000 sec 224 MBytes 940 Mbits/sec
[ 1] 28.0000-30.0000 sec 224 MBytes 940 Mbits/sec
[ 1] 0.0000-30.0006 sec 3.28 GBytes 939 Mbits/sec
and from the pve host:
Code:
root@pve:~# iperf -s -i 2 -t 30
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 128 KByte (default)
------------------------------------------------------------
[ 1] local 192.168.50.253 port 5001 connected with 192.168.31.20 port 52448
[ ID] Interval Transfer Bandwidth
[ 1] 0.0000-2.0000 sec 222 MBytes 933 Mbits/sec
[ 1] 2.0000-4.0000 sec 224 MBytes 940 Mbits/sec
[ 1] 4.0000-6.0000 sec 224 MBytes 940 Mbits/sec
[ 1] 6.0000-8.0000 sec 224 MBytes 940 Mbits/sec
[ 1] 8.0000-10.0000 sec 224 MBytes 940 Mbits/sec
[ 1] 10.0000-12.0000 sec 224 MBytes 940 Mbits/sec
[ 1] 12.0000-14.0000 sec 224 MBytes 940 Mbits/sec
[ 1] 14.0000-16.0000 sec 224 MBytes 940 Mbits/sec
[ 1] 16.0000-18.0000 sec 224 MBytes 940 Mbits/sec
[ 1] 18.0000-20.0000 sec 224 MBytes 940 Mbits/sec
[ 1] 20.0000-22.0000 sec 224 MBytes 940 Mbits/sec
[ 1] 22.0000-24.0000 sec 224 MBytes 940 Mbits/sec
[ 1] 24.0000-26.0000 sec 224 MBytes 940 Mbits/sec
[ 1] 26.0000-28.0000 sec 224 MBytes 940 Mbits/sec
[ 1] 28.0000-30.0000 sec 224 MBytes 940 Mbits/sec
[ 1] 0.0000-30.0009 sec 3.28 GBytes 939 Mbits/sec
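(The client side in both runs was a plain iperf client on the 192.168.31.20 machine, along the lines of:)
Code:
iperf -c <server IP> -i 2 -t 30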
For the upload/download tests I created some random data:
Code:
dd if=/dev/urandom of=random_file_20GB bs=1M count=20480 status=progress
Uploading to the VM seems fine:
Code:
Uploading random_file_20GB to /home/<user>/random_file_20GB
random_file_20GB 12% 2520MB 108.8MB/s 02:45 ETA
random_file_20GB 66% 13GB 110.3MB/s 01:02 ETA
random_file_20GB 89% 18GB 110.2MB/s 00:18 ETA
Downloading from the VM does not:
Code:
Fetching /home/<user>/random_file_20GB to random_file_20GB
random_file_20GB 2% 442MB 51.5MB/s 06:29 ETA
random_file_20GB 18% 3754MB 49.3MB/s 05:39 ETA
random_file_20GB 42% 8619MB 45.2MB/s 04:22 ETA
random_file_20GB 80% 16GB 46.6MB/s 01:24 ETA
random_file_20GB 90% 18GB 47.4MB/s 00:41 ETA
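(Both directions were plain put/get from an interactive sftp session, roughly:)
Code:
sftp <user>@<vm address>
sftp> put random_file_20GB /home/<user>/random_file_20GB
sftp> get /home/<user>/random_file_20GB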
So in summary it seems:
- Not related to the physical hardware (downloading from the pvehost itself is fine)
- Not related to NFS or Virtiofs (happens on scsi0, which sits on two NVMes)
- Not related to VM RAM or CPU limitations (cross-VM network transfer is actually faster)
- Uploading to the VM is fine; only downloading from it is slow
So I'm out of ideas for what to test next. I also didn't see anything concerning in dmesg on the pvehost.