[TUTORIAL] Enabling SMB3 Multichannel in Proxmox

seraph

New Member
Nov 19, 2023
Hi!

It was a bit challenging to find up-to-date information on how to get SMB3 multichannel and, especially, NFSv4 multipath (session and client ID trunking) working. I managed to sort out both on Linux (and Proxmox), so hopefully, this guide can save you some time.

Initially, my plan was to create a single tutorial covering both SMB3 multichannel and NFSv4 multipath. However, as this tutorial became longer than expected, I opted to begin with SMB3 multichannel. If there is sufficient interest, I will create another thread specifically addressing NFSv4 multipath. Despite the similarity in steps, the underlying technology is quite different, and it's probably best to keep them separate.

I'm by no means an expert, and all this information is a summary that I compiled thanks to mailing lists, forum posts, and blog entries. Feel free to suggest improvements or point out any errors you find.


Server:
  • Hostname: truenas
  • Operating System: TrueNAS Core 13.1
  • Samba Version: 4.15.13
  • NFS Version: v4.2
  • Network Interface Cards (NICs): 2 x Intel I210 1Gbps
1st Interface:
  • IP: 10.0.0.4
  • Netmask: 255.255.255.0
  • Gateway: 10.0.0.1
  • DNS: 10.0.0.1
2nd Interface:
  • IP: 10.0.5.4
  • Netmask: 255.255.255.0
  • Gateway: 10.0.0.1
  • DNS: 10.0.0.1
Client:
  • Hostname: pve01
  • Virtualization Platform: Proxmox 8.1
  • Kernel Version: 6.5.11-7-pve
  • CIFS Version: 2.44
  • Network Interface Cards (NICs): 1 x internal Realtek 1Gbit/s NIC (driver: r8169), 1 x external USB Realtek 1Gbit/s NIC (driver: r8152)
1st Interface:
  • IP: 10.0.0.16
  • Netmask: 255.255.255.0
  • Gateway: 10.0.0.1
  • DNS: 10.0.0.1
2nd Interface:
  • IP: 10.0.5.16
  • Netmask: 255.255.255.0
  • Gateway: -
  • DNS: -
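
For reference, this is roughly how the two client interfaces can be configured in /etc/network/interfaces on the Proxmox node. This is only a sketch: the NIC names enp1s0 and enx001122334455 are placeholders for your onboard and USB adapters, and the storage-only interface deliberately gets no gateway:

Code:
# first NIC: bridged via vmbr0, carries the default route (typical Proxmox setup)
auto enp1s0
iface enp1s0 inet manual

auto vmbr0
iface vmbr0 inet static
    address 10.0.0.16/24
    gateway 10.0.0.1
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0

# second (USB) NIC: storage traffic only, no gateway
auto enx001122334455
iface enx001122334455 inet static
    address 10.0.5.16/24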

SMB3 multichannel


1) What is SMB3 multichannel?


SMB Multichannel enables file servers to use multiple network connections simultaneously. It facilitates aggregation of network bandwidth and network fault tolerance when multiple paths are available between the SMB 3.0 client and the SMB 3.0 server. This allows server applications to take full advantage of all available network bandwidth and makes them more resilient to network failures. (source)

If you want to learn more about the benefits and limits of SMB multichannel, I recommend checking out this Microsoft blog post.



2) How to configure


Server side:

The functionality was introduced in Samba 4.4 and is still considered highly experimental. Since I'm using TrueNAS, it was enough to add the following under "Services → SMB → Auxiliary Parameters":

Code:
aio write size = 1
vfs objects = aio_pthread
interfaces = "10.0.0.4;speed=1000000000,capability=RSS" "10.0.5.4;speed=1000000000,capability=RSS"

As you can see, I specified both interfaces the server is using, their speed, and that they are RSS-capable. TrueNAS has documentation for both Scale and Core, plus a great blog post on the topic. The configuration is the same on plain Linux systems.
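
On a regular Linux Samba server, the same parameters go into the [global] section of /etc/samba/smb.conf. A minimal sketch (the IPs and speeds are the ones from this setup; depending on your Samba version you may also have to enable multichannel explicitly with server multi channel support = yes):

Code:
[global]
    # advertise both NICs with their link speed and RSS capability
    interfaces = "10.0.0.4;speed=1000000000,capability=RSS" "10.0.5.4;speed=1000000000,capability=RSS"
    # multichannel is not enabled by default on every Samba version
    server multi channel support = yes
    aio write size = 1
    vfs objects = aio_pthread

Restart smbd afterwards (e.g. systemctl restart smbd on Debian-based systems) so the new settings take effect.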

We are ready to add our smb share to proxmox:

[Screenshot: adding the SMB/CIFS storage via the Proxmox GUI]

Let's check the parameters with which Proxmox mounts the share:

Bash:
root@pve01:~# mount|grep cifs

//10.0.0.4/pvebackup on /mnt/pve/nasbackupSMB type cifs (rw,relatime,vers=3.1.1,cache=strict,username=phil,domain=WORKGROUP,uid=0,noforceuid,gid=0,noforcegid,addr=10.0.0.4,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=4194304,wsize=4194304,bsize=1048576,echo_interval=60,actimeo=1,closetimeo=1)

The corresponding section in the /etc/pve/storage.cfg configuration file that Proxmox creates is as follows:

Code:
cifs: nasbackupSMB
            path /mnt/pve/nasbackupSMB
            server 10.0.0.4
            share pvebackup
            content images
            domain WORKGROUP
            prune-backups keep-all=1
            username phil

Let's conduct a performance test using the excellent fio tool (install it with apt install fio):

Bash:
root@pve01:/mnt/pve/nasbackupSMB# fio --group_reporting=1 --name=fio_test --ioengine=libaio --iodepth=16 --direct=1 --thread --rw=write --size=100M --bs=4M --numjobs=2 --time_based=1 --runtime=5m --directory=.

Bash:
fio_test: (g=0): rw=write, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=libaio, iodepth=16
...
fio-3.33
Starting 2 threads
Jobs: 2 (f=2): [W(2)][1.7%][w=112MiB/s][w=28 IOPS][eta 04m:55s]

As expected, with only one connection/network interface in use, the transfer saturates a single 1 Gbps link but cannot go beyond it.

To activate multichannel support, make sure the share is mounted with SMB 3.1.1 (vers=3.11), which Proxmox already does in this case. Then add the optional mount parameter multichannel and specify the number of channels; the maximum is 4, so include max_channels=4.
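
If you want to test this outside of Proxmox first, the same options can be passed directly to mount.cifs. A minimal sketch with a placeholder mount point (requires the cifs-utils package; it will prompt for the password):

Bash:
# manual CIFS mount with multichannel enabled (mount point must already exist)
mount -t cifs //10.0.0.4/pvebackup /mnt/test -o username=phil,vers=3.11,multichannel,max_channels=4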

As there is currently no way to set these options through the Proxmox GUI, you can either edit /etc/pve/storage.cfg directly with your preferred editor or use the native Proxmox tool, pvesm.

Bash:
pvesm set nasbackupSMB --options vers=3.11,multichannel,max_channels=4

Whether you use pvesm or edit the file manually, the resulting section in /etc/pve/storage.cfg looks as follows:

Code:
cifs: nasbackupSMB
            path /mnt/pve/nasbackupSMB
            server 10.0.0.4
            share pvebackup
            content images
            domain WORKGROUP
            options vers=3.11,multichannel,max_channels=4
            prune-backups keep-all=1
            username phil

To apply the new options, the share needs to be unmounted first. This has to be done for every CIFS share mounted from the same server.

Bash:
root@pve01:~# umount /mnt/pve/nasbackupSMB
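
If more than one share from the same server is mounted, a small loop like this one unmounts them all at once (just a sketch; adjust the server IP to yours):

Bash:
# unmount every CIFS share currently mounted from 10.0.0.4
for m in $(mount -t cifs | awk '/10\.0\.0\.4/ {print $3}'); do
    umount "$m"
done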

Wait a moment, then run pvesm status to confirm that Proxmox has automatically remounted the storage:

Code:
root@pve01:~# pvesm status
Name                Type     Status           Total            Used       Available        %
local                dir     active        51417724         6145476        42902208   11.95%
local-lvm        lvmthin     active      1953267712       124423153      1828844558    6.37%
nasbackupSMB        cifs     active      3922375568       126107292      3796268276    3.22%

Looking at the mount output, the share is now mounted with the options multichannel and max_channels=4:

Bash:
root@pve01:/mnt/pve# mount|grep cifs

//10.0.0.4/pvebackup on /mnt/pve/nasbackupSMB type cifs (rw,relatime,vers=3.1.1,cache=strict,username=phil,domain=WORKGROUP,uid=0,noforceuid,gid=0,noforcegid,addr=10.0.0.4,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=4194304,wsize=4194304,bsize=1048576,echo_interval=60,actimeo=1,closetimeo=1,multichannel,max_channels=4)

Let's run the same fio test again to confirm that we really get close to 2 Gbps of throughput.

Bash:
root@pve01:/mnt/pve/nasbackupSMB# fio --group_reporting=1 --name=fio_test --ioengine=libaio --iodepth=16 --direct=1 --thread --rw=write --size=100M --bs=4M --numjobs=2 --time_based=1 --runtime=5m --directory=.

Bash:
fio_test: (g=0): rw=write, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=libaio, iodepth=16
...
fio-3.33
Starting 2 threads
Jobs: 2 (f=2): [W(2)][2.7%][w=212MiB/s][w=53 IOPS][eta 04m:52s]

Success!


My personal suggestion would be to first get a Windows client working (a VM is perfectly fine); that way you can at least rule out a problem on the server side.

If you want to verify that your Linux client is really using SMB multichannel, check the open connections on the server where Samba is running with netstat or ss.

Server:
Bash:
root@truenas[/]# netstat -a|grep microsoft-ds
tcp4 0 0 truenas2.microsoft-ds 10.0.5.16.39244 ESTABLISHED
tcp4 0 0 truenas2.microsoft-ds 10.0.5.16.39228 ESTABLISHED
tcp4 0 0 truenas2.microsoft-ds 10.0.5.16.39226 ESTABLISHED
tcp4 0 0 truenas.microsoft-ds pve01.48144 ESTABLISHED
tcp4 0 0 truenas.microsoft-ds *.* LISTEN
tcp4 0 0 truenas2.microsoft-ds *.* LISTEN

Check on the client side (Proxmox node):

Bash:
root@pve01:/mnt/pve# ss -ptone '( dport = :445 )' | cat
State Recv-Q Send-Q Local Address:Port Peer Address:PortProcess
ESTAB 0 0 10.0.5.16:39244 10.0.5.4:445 ino:62069631 sk:20d6 cgroup:/ <->
ESTAB 0 0 10.0.5.16:39226 10.0.5.4:445 ino:62072703 sk:20d7 cgroup:/ <->
ESTAB 0 0 10.0.5.16:39228 10.0.5.4:445 ino:62076953 sk:20d8 cgroup:/ <->
ESTAB 0 0 10.0.0.16:48144 10.0.0.4:445 ino:62076395 sk:20d9 cgroup:/ <->

Let's check the SMB debug data:

Bash:
root@pve01:~# cat /proc/fs/cifs/DebugData
Display Internal CIFS Data Structures for Debugging
---------------------------------------------------
CIFS Version 2.44
Features: DFS,FSCACHE,SMB_DIRECT,STATS,DEBUG,ALLOW_INSECURE_LEGACY,CIFS_POSIX,UPCALL(SPNEGO),XATTR,ACL,WITNESS
CIFSMaxBufSize: 16384
Active VFS Requests: 0

Servers:
1) ConnectionId: 0x1 Hostname: 10.0.0.4
ClientGUID: 81034251-42FA-2741-A079-CA15943DA95C
Number of credits: 8190,1,1 Dialect 0x311
TCP status: 1 Instance: 1
Local Users To Server: 4 SecMode: 0x1 Req On Wire: 0 Net namespace: 4026531840
In Send: 0 In MaxReq Wait: 0

    Sessions:
    1) Address: 10.0.0.4 Uses: 1 Capability: 0x30004f    Session Status: 1
    Security type: RawNTLMSSP  SessionId: 0xc2918a05
    User: 0 Cred User: 0

    Extra Channels: 3

        Channel: 2 ConnectionId: 0x2
        Number of credits: 8190,1,1 Dialect 0x311
        TCP status: 1 Instance: 1
        Local Users To Server: 1 SecMode: 0x1 Req On Wire: 0
        In Send: 0 In MaxReq Wait: 0 Net namespace: 4026531840

        Channel: 3 ConnectionId: 0x3
        Number of credits: 8190,1,1 Dialect 0x311
        TCP status: 1 Instance: 1
        Local Users To Server: 1 SecMode: 0x1 Req On Wire: 0
        In Send: 0 In MaxReq Wait: 0 Net namespace: 4026531840

        Channel: 4 ConnectionId: 0x4
        Number of credits: 8190,1,1 Dialect 0x311
        TCP status: 1 Instance: 1
        Local Users To Server: 1 SecMode: 0x1 Req On Wire: 0
        In Send: 0 In MaxReq Wait: 0 Net namespace: 4026531840

    Shares:
    0) IPC: \\10.0.0.4\IPC$ Mounts: 1 DevInfo: 0x0 Attributes: 0x0
    PathComponentMax: 0 Status: 1 type: 0 Serial Number: 0x0
    Share Capabilities: None    Share Flags: 0x0
    tid: 0x8bfc446b    Maximal Access: 0x11f01bf

    1) \\10.0.0.4\pvebackup Mounts: 1 DevInfo: 0x20 Attributes: 0x5002f
    PathComponentMax: 255 Status: 1 type: DISK Serial Number: 0x220cd6d
    Share Capabilities: None Aligned, Partition Aligned,    Share Flags: 0x0
    tid: 0x61f36c15    Optimal sector size: 0x200    Maximal Access: 0x11f01ff


    Server interfaces: 2    Last updated: 294 seconds ago
    1)    Speed: 1Gbps
        Capabilities: rss
        IPv4: 10.0.5.4
        [CONNECTED]

    2)    Speed: 1Gbps
        Capabilities: rss
        IPv4: 10.0.0.4


    MIDs:
--

Witness registrations:

It uses 4 channels, and both server interfaces are listed with the correct IP address and speed. You can use a bandwidth monitoring tool such as 'bmon' to see if traffic really goes through both Ethernet interfaces, both on the client and the server side.
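
For example, something along these lines on the Proxmox node while a backup or fio run is in progress (the interface names are placeholders, substitute your own):

Bash:
# live per-interface throughput for both NICs
bmon -p enp1s0,enx001122334455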

If for some reason it does not work, capture the traffic with tcpdump/Wireshark and verify that the server returns SMB2_CAP_MULTI_CHANNEL (0x00000008) among the capabilities in the SMB2 NEGOTIATE (SMB2_OP_NEGPROT) response.
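
A capture along these lines is usually enough (the file name is just an example); open it in Wireshark, filter the negotiation with smb2.cmd == 0, and inspect the Capabilities field of the response:

Bash:
# capture the SMB traffic to the server while the share is being (re)mounted
tcpdump -i any -w /tmp/smb-negotiate.pcap host 10.0.0.4 and port 445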

For more details on this kind of capture, there is a troubleshooting section on the Samba website.

 
Code:
1st Interface:
IP: 10.0.0.4
Netmask: 255.255.255.0
Gateway: 10.0.0.1
DNS: 10.0.0.1

2nd Interface:
IP: 10.0.5.4
Netmask: 255.255.255.0
Gateway: 10.0.0.1
DNS: 10.0.0.1

But for multichannel you need all interfaces in one network, e.g. 10.0.0.4/255.255.255.0 and 10.0.0.5/255.255.255.0 for the server, and 10.0.0.16/255.255.255.0 and 10.0.0.17/255.255.255.0 for the client.



https://www.reddit.com/r/homelab/co...urce=share&utm_medium=ios_app&utm_name=iossmf
 

This is true for Windows Server. As far as I know, when using Samba, the only validated and recommended way is to use a different subnet. I see no advantage in not following the recommendation.
 
How can I implement SMB multichannel for virtual machines under Proxmox Virtual Environment (PVE)? Even with three virtual network cards added to the VM, the transfer speed is still limited to 1G. Are there any methods to improve this?

Since my switch, NAS, and PC all support LACP, I have bonded six network cards into bond0 at the PVE host level. When creating the virtual machine I added three virtual network cards attached to the bridge on top of bond0, but SMB multichannel does not seem to take effect. Any suggestions on how to get SMB multichannel working properly?

Code:
#nano -w /etc/network/interfaces


auto lo
iface lo inet loopback


auto eno1
iface eno1 inet manual


auto enx00e04c6809e6
iface enx00e04c6809e6 inet manual


auto enp2s0f0
iface enp2s0f0 inet manual


auto enp2s0f1
iface enp2s0f1 inet manual


auto enp2s0f2
iface enp2s0f2 inet manual


auto enp2s0f3
iface enp2s0f3 inet manual


auto enp113s0
iface enp113s0 inet manual


iface wlp4s0 inet manual


auto bond0
iface bond0 inet manual
    bond-slaves eno1 enp113s0 enp2s0f0 enp2s0f1 enp2s0f2 enp2s0f3 enx00e04c6809e6
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3


auto vmbr0
iface vmbr0 inet manual
    address 192.168.168.10/24
    gateway 192.168.168.254
    bridge-ports bond0
    bridge-stp on
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094


source /etc/network/interfaces.d/*
 
Hi,

Not necessarily related to multichannel, but in your bond0 change the hash policy to layer3+4. This will try to balance the traffic based on IP + port, though only for the upload direction; for download traffic, check whether your switch supports a similar hash policy on the port channel.
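
Applied to the /etc/network/interfaces posted above, only one line of the bond stanza changes, roughly like this:

Code:
auto bond0
iface bond0 inet manual
    bond-slaves eno1 enp113s0 enp2s0f0 enp2s0f1 enp2s0f2 enp2s0f3 enx00e04c6809e6
    bond-miimon 100
    bond-mode 802.3ad
    # hash on IP + TCP/UDP port so different SMB channels can land on different slaves
    bond-xmit-hash-policy layer3+4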
 
I transitioned from ESXi 8.0 to Proxmox Virtual Environment (PVE). On the same hardware and with the same configuration, I achieved high-performance SMB multichannel with three virtual network cards under ESXi, but I have not been able to replicate that performance under PVE yet. I'm still trying to figure out where the difference comes from and hope to find a solution.

 
