[SOLVED] USB 2.5GbE adapter only 110MB/s on PVE (direct), but 250MB/s on a VM located on that host (pve)

dreamworks

Member
Aug 16, 2023
Hey,

I created a local Samba share on the host, but it seems to be capped at 110MB/s.

The 2.5GbE adapter runs at 250MB/s when accessed from a VM, but when directly accessing the Samba share on that same 2.5GbE adapter I only get 110MB/s.
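Those two figures line up with 1GbE vs 2.5GbE line rate, which is a hint that one path is negotiating at gigabit. A quick sketch of the math (the ~94% efficiency factor is a rough assumption for Ethernet/IP/TCP framing overhead, not a measured value):

```python
def usable_mb_per_s(link_mbit: float, efficiency: float = 0.94) -> float:
    """Rough usable payload rate for a full-duplex Ethernet link.

    efficiency ~0.94 is an assumed allowance for Ethernet/IP/TCP
    framing overhead; real numbers vary with MTU and protocol.
    """
    return link_mbit * efficiency / 8  # Mbit/s -> MB/s

print(f"1 GbE   = {usable_mb_per_s(1000):.1f} MB/s usable")
print(f"2.5 GbE = {usable_mb_per_s(2500):.1f} MB/s usable")
```

So ~110MB/s is exactly what a saturated gigabit link delivers, while ~250MB/s is in the expected range for 2.5GbE.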

Samba config:

Code:
[global]
   socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=131072 SO_SNDBUF=131072
   write cache size = 524288
   read raw = yes
   write raw = yes
   min receivefile size = 16384
   use sendfile = true
   aio read size = 16384
   aio write size = 16384
   aio write behind = true
   async smb echo handler = yes
   max xmit = 65535
   getwd cache = yes
   strict locking = no
   disable netbios = yes
   smb encrypt = off
   server signing = auto
   server max protocol = SMB3
   interfaces = 192.168.0.34/24
   bind interfaces only = yes

[Films]
   path = /mnt/pve/files/Films
   valid users = root
   read only = no
   browsable = yes
   writable = yes
   create mask = 0700
   directory mask = 0700
   force user = root
   force group = root

[TV Shows]
   path = /mnt/pve/files/TV
   valid users = root
   read only = no
   browsable = yes
   writable = yes
   create mask = 0700
   directory mask = 0700
   force user = root
   force group = root

[Proxmox]
   path = /mnt/pve/files
   valid users = root
   read only = no
   browsable = yes
   writable = yes
   create mask = 0700
   directory mask = 0700
   force user = root
   force group = root
 
Local drive speed test, running on an NVMe:


root@pve:/mnt/pve/files# dd if=/dev/zero of=/mnt/pve/files/testfile bs=1G count=10 oflag=direct

10737418240 bytes (11 GB, 10 GiB) copied, 3.05714 s, 3.5 GB/s
 
You should not define an e1000* Ethernet card in your VM; use a virtio card!
Change the global section as shown here, restart smb and try again:
[global]
#socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=131072 SO_SNDBUF=131072
#write cache size = 524288
read raw = yes
write raw = yes
#min receivefile size = 16384
use sendfile = true
#aio read size = 16384
#aio write size = 16384
aio write behind = true
async smb echo handler = yes
#max xmit = 65535
max xmit = 131072
max stat cache size = 8192
getwd cache = yes
strict locking = no
disable netbios = yes
smb encrypt = off
server signing = auto
server min protocol = SMB2
server max protocol = SMB3
interfaces = 192.168.0.34/24
bind interfaces only = yes
...
 
For a correct comparison, you'd need to try a Samba share on the VM. If that is indeed what you tested, ignore this comment.
Works fine in a VM, against that same network interface.
 

Attachments: sambatest.png (52.3 KB)
Your buffer size is defined as 131072; try increasing that for 2.5Gbit.
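Rough reasoning behind that suggestion: a socket buffer should at least cover the link's bandwidth-delay product, or the sender stalls waiting for ACKs. A sketch, assuming a ~1 ms LAN round-trip (the RTT value is a guess, not measured on this setup):

```python
def bdp_bytes(link_mbit: float, rtt_ms: float) -> int:
    """Bandwidth-delay product: bytes that must be in flight to fill the pipe."""
    return int(link_mbit * 1e6 / 8 * rtt_ms / 1e3)

print(bdp_bytes(2500, 1.0))  # 312500 bytes, more than the 131072 buffer above
```

On a real LAN the RTT is often well under 1 ms, so the kernel's auto-tuned buffers are usually sufficient; this is only to show the order of magnitude involved.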
Still maxing out at 100MB/s.

[global]
socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=1048576 SO_SNDBUF=1048576
write cache size = 524288
read raw = yes
write raw = yes
#min receivefile size = 16384
use sendfile = true
#aio read size = 16384
#aio write size = 16384
aio write behind = true
async smb echo handler = yes
#max xmit = 65535
max xmit = 131072
max stat cache size = 8192
getwd cache = yes
strict locking = no
disable netbios = yes
smb encrypt = off
server signing = auto
server min protocol = SMB2
server max protocol = SMB3
interfaces = 192.168.0.34/24
bind interfaces only = yes
 
If you shut down all VMs/LXCs that use that network interface, does the host SMB speed increase?
 
Comment out the socket options; those are 20-year-old tips, and the kernel has auto-tuned them for a long time now. Likewise, don't set write cache size anymore.
Check the bandwidth of your 2.5Gb network without Samba (e.g. with multithreaded iperf2, to reach >2Gb) between 2 physical hosts, as perhaps there's a bridge problem somewhere. Probably your other client has only 1Gb?
PS: Your Samba on the host (for VM and client) would run even faster if you define "log level = 0" (or 1) instead of the default 2.
 
The other client is 2.5GbE as well, see here:

https://forum.proxmox.com/threads/local-samba-share-capped-at-110mb-s.152693/post-693284
 
Check your bandwidth in both directions between the 2 physical hosts of your 2.5Gb network, without Samba.
 
See here, the same 2.5Gb client.
So, just to clarify: that "2.5Gb client" is a physically SEPARATE node/client, NOT contained within the PVE host whose SMB you are checking? Because if it is not, I would believe that your SMB bandwidth will be limited by the actual HV/guest network traffic using that same NIC.
 
Yes, correct.

Also, I just found the problem: the NIC is running at half duplex on the host due to the r8152 driver. Does anyone know how to update it?
The VM is configured with e1000e, and that works at 2.5GbE, lol.
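For reference, the negotiated speed and duplex can be read from sysfs (the same values `ethtool` reports); the interface name below is a placeholder for whatever the USB adapter shows up as on the host:

```python
from pathlib import Path

def link_status(iface: str):
    """Read negotiated speed/duplex from sysfs; returns None when the
    kernel doesn't expose them (missing interface, 'lo', downed link)."""
    base = Path("/sys/class/net") / iface
    try:
        return {
            "speed_mbit": int((base / "speed").read_text().strip()),
            "duplex": (base / "duplex").read_text().strip(),
        }
    except OSError:
        return None

# e.g. link_status("enp1s0") might return {'speed_mbit': 2500, 'duplex': 'full'}
```

A half-duplex reading here would match the symptom, since half duplex on modern Ethernet usually means a failed autonegotiation rather than an intended mode.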
 
If both the VM and the host are actually using the same NIC, then I don't see how you've actually discovered the problem, since the VM, through the vmbr, is in fact still using the same host connection on that NIC.
 
Yes, but the host is using a driver that negotiates half duplex, while the VM is using a driver that negotiates full duplex.
 
