Hi!
It was a bit challenging to find up-to-date information on how to get SMB3 multichannel and, especially, NFSv4 multipath (session and client ID trunking) working. I managed to sort out both on Linux (and Proxmox), so hopefully, this guide can save you some time.
Initially, my plan was to create a single tutorial covering both SMB3 multichannel and NFSv4 multipath. However, as this tutorial became longer than expected, I opted to begin with SMB3 multichannel. If there is sufficient interest, I will create another thread specifically addressing NFSv4 multipath. Despite the similarity in steps, the underlying technology is quite different, and it's probably best to keep them separate.
I'm by no means an expert, and all this information is a summary that I compiled thanks to mailing lists, forum posts, and blog entries. Feel free to suggest improvements or point out any errors you find.
Server:
- Hostname: truenas
- Operating System: TrueNAS Core 13.1
- SMB Version: 4.15.13
- NFS Version: v4.2
- Network Interface Cards (NICs): 2 x Intel I210 1Gbps

NIC 1:
- IP: 10.0.0.4
- Netmask: 255.255.255.0
- Gateway: 10.0.0.1
- DNS: 10.0.0.1

NIC 2:
- IP: 10.0.5.4
- Netmask: 255.255.255.0
- Gateway: 10.0.0.1
- DNS: 10.0.0.1

Client (Proxmox node):
- Hostname: pve01
- Virtualization Platform: Proxmox 8.1
- Kernel Version: 6.5.11-7-pve
- CIFS Version: 2.44
- Network Interface Cards (NICs): 1 x internal Realtek 1Gbit/s NIC (driver: r8169), 1 x external USB Realtek 1Gbit/s NIC (driver: r8152)

NIC 1:
- IP: 10.0.0.16
- Netmask: 255.255.255.0
- Gateway: 10.0.0.1
- DNS: 10.0.0.1

NIC 2:
- IP: 10.0.5.16
- Netmask: 255.255.255.0
- Gateway: -
- DNS: -
SMB3 multichannel
1) What is SMB3 multichannel?
SMB Multichannel enables file servers to use multiple network connections simultaneously. It facilitates aggregation of network bandwidth and network fault tolerance when multiple paths are available between the SMB 3.0 client and the SMB 3.0 server. This allows server applications to take full advantage of all available network bandwidth and makes them more resilient to network failures. source
If you want to learn more about the benefits and limits of SMB multichannel, I recommend checking out this Microsoft blog post.
2) How to configure
Server side:
The functionality was introduced in Samba version 4.4 and is still considered highly experimental. Since I'm using TrueNAS, it was enough to add the following under "Services→SMB→Auxiliary Parameters":
Code:
aio write size = 1
vfs objects = aio_pthread
interfaces = "10.0.0.4;speed=1000000000,capability=RSS" "10.0.5.4;speed=1000000000,capability=RSS"
As you can see, I specified both interfaces the server is using, their speed, and that they are RSS capable. TrueNAS has documentation for both SCALE and Core, as well as a great blog post on the subject. The config is the same for plain Linux systems.
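For a standalone Linux Samba server, the equivalent would go in the [global] section of smb.conf; a sketch based on the auxiliary parameters above (the IPs and speeds are from my setup and need to be adapted to yours). Note that, depending on your Samba version, you may also need to set server multi channel support explicitly, since it is disabled by default on older releases:
Code:
# /etc/samba/smb.conf -- [global] section (sketch, adapt interfaces/speeds to your NICs)
[global]
    # explicitly enable multichannel; older Samba releases ship with it off by default
    server multi channel support = yes
    aio write size = 1
    vfs objects = aio_pthread
    interfaces = "10.0.0.4;speed=1000000000,capability=RSS" "10.0.5.4;speed=1000000000,capability=RSS"
After editing smb.conf, reload Samba (for example with smbcontrol all reload-config, or by restarting the smbd service) so the new interface definitions are picked up.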
We are ready to add our SMB share to Proxmox:
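I added the share through the GUI (Datacenter → Storage → Add → SMB/CIFS). If you prefer the command line, pvesm can do the same; a sketch, assuming the storage ID, share name and credentials from my setup:
Bash:
# CLI equivalent of the GUI dialog -- adjust storage ID, server, share and credentials to your setup
pvesm add cifs nasbackupSMB --server 10.0.0.4 --share pvebackup \
    --username phil --password 'yourpassword' --content images --domain WORKGROUP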

Let's check the parameters with which Proxmox mounts the share:
Bash:
root@pve01:~# mount|grep cifs
//10.0.0.4/pvebackup on /mnt/pve/nasbackupSMB type cifs (rw,relatime,vers=3.1.1,cache=strict,username=phil,domain=WORKGROUP,uid=0,noforceuid,gid=0,noforcegid,addr=10.0.0.4,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=4194304,wsize=4194304,bsize=1048576,echo_interval=60,actimeo=1,closetimeo=1)
The corresponding section in the /etc/pve/storage.cfg configuration file that Proxmox creates is as follows:
Code:
cifs: nasbackupSMB
        path /mnt/pve/nasbackupSMB
        server 10.0.0.4
        share pvebackup
        content images
        domain WORKGROUP
        prune-backups keep-all=1
        username phil
Let's conduct a performance test using the excellent fio tool (install it with apt install fio):
Bash:
root@pve01:/mnt/pve/nasbackupSMB# fio --group_reporting=1 --name=fio_test --ioengine=libaio --iodepth=16 --direct=1 --thread --rw=write --size=100M --bs=4M --numjobs=2 --time_based=1 --runtime=5m --directory=.
Bash:
fio_test: (g=0): rw=write, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=libaio, iodepth=16
...
fio-3.33
Starting 2 threads
Jobs: 2 (f=2): [W(2)][1.7%][w=112MiB/s][w=28 IOPS][eta 04m:55s]
As expected, using only one connection/network interface, it saturates a single 1Gbps link (1Gbps is roughly 119MiB/s, so ~112MiB/s is essentially line rate).
To activate multichannel support, make sure the share is mounted with SMB version 3.1.1 (the vers=3.11 mount option, which Proxmox already uses in this case). Add the optional parameter multichannel and specify the number of channels; the maximum is 4, so include max_channels=4. As there is currently no way to do this through the Proxmox GUI, you can either edit /etc/pve/storage.cfg directly with your preferred editor or use the native Proxmox tool, pvesm.
Bash:
pvesm set nasbackupSMB --options vers=3.11,multichannel,max_channels=4
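As an aside, on a plain Linux client that is not managed by Proxmox, the same options can be passed straight to the mount command; a minimal sketch, assuming the share from this guide and a hypothetical mount point:
Bash:
# manual CIFS mount with multichannel on a generic Linux client
mkdir -p /mnt/pvebackup
mount -t cifs //10.0.0.4/pvebackup /mnt/pvebackup \
    -o username=phil,vers=3.1.1,multichannel,max_channels=4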
The corresponding section in the /etc/pve/storage.cfg configuration file, whether created by pvesm or added manually, is as follows:
Code:
cifs: nasbackupSMB
        path /mnt/pve/nasbackupSMB
        server 10.0.0.4
        share pvebackup
        content images
        domain WORKGROUP
        options vers=3.11,multichannel,max_channels=4
        prune-backups keep-all=1
        username phil
To apply the new options, we need to unmount the share first. This needs to be done for all mounted CIFS shares from the same server.
Bash:
root@pve01:~# umount /mnt/pve/nasbackupSMB
Wait for a moment, then execute pvesm status to confirm that the storage has been remounted. Let's check if it has been mounted correctly:
Code:
root@pve3050:~# pvesm status
Name                Type     Status           Total            Used       Available        %
local                dir     active        51417724         6145476        42902208   11.95%
local-lvm        lvmthin     active      1953267712       124423153      1828844558    6.37%
nasbackupSMB        cifs     active      3922375568       126107292      3796268276    3.22%
We can see that it correctly mounted the share with the options "multichannel" and "max_channels=4".
Bash:
root@pve01:/mnt/pve# mount|grep cifs
//10.0.0.4/pvebackup on /mnt/pve/nasbackupSMB type cifs (rw,relatime,vers=3.1.1,cache=strict,username=phil,domain=WORKGROUP,uid=0,noforceuid,gid=0,noforcegid,addr=10.0.0.4,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=4194304,wsize=4194304,bsize=1048576,echo_interval=60,actimeo=1,closetimeo=1,multichannel,max_channels=4)
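Before rerunning the benchmark, a quick way to confirm how many channels were actually established is to grep the kernel's CIFS debug file (the full output is shown further down in the troubleshooting part):
Bash:
# the primary connection plus the "Extra Channels" listed here make up the total channel count
grep -E "Extra Channels|Channel:" /proc/fs/cifs/DebugData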
Let's perform another performance test with fio to confirm whether we really get 2Gbps of throughput.
Bash:
root@pve01:/mnt/pve/nasbackupSMB# fio --group_reporting=1 --name=fio_test --ioengine=libaio --iodepth=16 --direct=1 --thread --rw=write --size=100M --bs=4M --numjobs=2 --time_based=1 --runtime=5m --directory=.
Bash:
fio_test: (g=0): rw=write, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=libaio, iodepth=16
...
fio-3.33
Starting 2 threads
Jobs: 2 (f=2): [W(2)][2.7%][w=212MiB/s][w=53 IOPS][eta 04m:52s]
Success! At ~212MiB/s we are close to the theoretical ~238MiB/s of a 2Gbps aggregate.

3) How to verify and troubleshoot
My personal suggestion would be to first get a Windows client working (a VM is perfectly fine), so you can at least rule out a problem on the server side. If you want to verify that your Linux client is really using SMB multichannel, you can check on the server where Samba is running by looking at the open connections with netstat or ss.

Server:
Bash:
root@truenas[/]# netstat -a|grep microsoft-ds
tcp4 0 0 truenas2.microsoft-ds 10.0.5.16.39244 ESTABLISHED
tcp4 0 0 truenas2.microsoft-ds 10.0.5.16.39228 ESTABLISHED
tcp4 0 0 truenas2.microsoft-ds 10.0.5.16.39226 ESTABLISHED
tcp4 0 0 truenas.microsoft-ds pve01.48144 ESTABLISHED
tcp4 0 0 truenas.microsoft-ds *.* LISTEN
tcp4 0 0 truenas2.microsoft-ds *.* LISTEN
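TrueNAS Core is FreeBSD based, hence netstat; on a Linux Samba server you could get the same view with ss, for example:
Bash:
# list established SMB connections (port 445) on a Linux Samba server
ss -tn state established '( sport = :445 )'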
Check on the client side (Proxmox node):
Bash:
root@pve01:/mnt/pve# ss -ptone '( dport = :445 )' | cat
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
ESTAB 0 0 10.0.5.16:39244 10.0.5.4:445 ino:62069631 sk:20d6 cgroup:/ <->
ESTAB 0 0 10.0.5.16:39226 10.0.5.4:445 ino:62072703 sk:20d7 cgroup:/ <->
ESTAB 0 0 10.0.5.16:39228 10.0.5.4:445 ino:62076953 sk:20d8 cgroup:/ <->
ESTAB 0 0 10.0.0.16:48144 10.0.0.4:445 ino:62076395 sk:20d9 cgroup:/ <->
Let's check the SMB debug data:
Bash:
root@pve01:~# cat /proc/fs/cifs/DebugData
Display Internal CIFS Data Structures for Debugging
---------------------------------------------------
CIFS Version 2.44
Features: DFS,FSCACHE,SMB_DIRECT,STATS,DEBUG,ALLOW_INSECURE_LEGACY,CIFS_POSIX,UPCALL(SPNEGO),XATTR,ACL,WITNESS
CIFSMaxBufSize: 16384
Active VFS Requests: 0
Servers:
1) ConnectionId: 0x1 Hostname: 10.0.0.4
ClientGUID: 81034251-42FA-2741-A079-CA15943DA95C
Number of credits: 8190,1,1 Dialect 0x311
TCP status: 1 Instance: 1
Local Users To Server: 4 SecMode: 0x1 Req On Wire: 0 Net namespace: 4026531840
In Send: 0 In MaxReq Wait: 0
    Sessions:
    1) Address: 10.0.0.4 Uses: 1 Capability: 0x30004f    Session Status: 1
    Security type: RawNTLMSSP  SessionId: 0xc2918a05
    User: 0 Cred User: 0
    Extra Channels: 3
        Channel: 2 ConnectionId: 0x2
        Number of credits: 8190,1,1 Dialect 0x311
        TCP status: 1 Instance: 1
        Local Users To Server: 1 SecMode: 0x1 Req On Wire: 0
        In Send: 0 In MaxReq Wait: 0 Net namespace: 4026531840
        Channel: 3 ConnectionId: 0x3
        Number of credits: 8190,1,1 Dialect 0x311
        TCP status: 1 Instance: 1
        Local Users To Server: 1 SecMode: 0x1 Req On Wire: 0
        In Send: 0 In MaxReq Wait: 0 Net namespace: 4026531840
        Channel: 4 ConnectionId: 0x4
        Number of credits: 8190,1,1 Dialect 0x311
        TCP status: 1 Instance: 1
        Local Users To Server: 1 SecMode: 0x1 Req On Wire: 0
        In Send: 0 In MaxReq Wait: 0 Net namespace: 4026531840
    Shares:
    0) IPC: \\10.0.0.4\IPC$ Mounts: 1 DevInfo: 0x0 Attributes: 0x0
    PathComponentMax: 0 Status: 1 type: 0 Serial Number: 0x0
    Share Capabilities: None    Share Flags: 0x0
    tid: 0x8bfc446b    Maximal Access: 0x11f01bf
    1) \\10.0.0.4\pvebackup Mounts: 1 DevInfo: 0x20 Attributes: 0x5002f
    PathComponentMax: 255 Status: 1 type: DISK Serial Number: 0x220cd6d
    Share Capabilities: None Aligned, Partition Aligned,    Share Flags: 0x0
    tid: 0x61f36c15    Optimal sector size: 0x200    Maximal Access: 0x11f01ff
    Server interfaces: 2    Last updated: 294 seconds ago
    1)    Speed: 1Gbps
        Capabilities: rss
        IPv4: 10.0.5.4
        [CONNECTED]
    2)    Speed: 1Gbps
        Capabilities: rss
        IPv4: 10.0.0.4
    MIDs:
--
Witness registrations:
It uses 4 channels, and both server interfaces are listed with the correct IP address and speed. You can use a bandwidth monitoring tool such as bmon to see whether traffic really goes through both Ethernet interfaces, on both the client and the server side.
If for some reason it does not work, capture packets with tcpdump/Wireshark and check that the server advertises SMB2_CAP_MULTI_CHANNEL (0x00000008) in the capabilities field of the SMB2 negotiate (SMB2_OP_NEGPROT) response.
To learn how to do that, you can find a troubleshooting section on the Samba website.
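A minimal capture sketch, assuming the server IP from this guide and that Wireshark is available for the analysis:
Bash:
# capture the SMB negotiation on the client while (re)mounting the share
tcpdump -i any -w smb_neg.pcap 'tcp port 445'
# then open smb_neg.pcap in Wireshark, filter the negotiate exchange with:
#   smb2.cmd == 0
# and inspect the Capabilities field of the Negotiate Protocol Response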
Some very useful resources that helped me in my journey:
https://www.devadmin.it/2013/08/30/smb-3-0-smb-multichannel-e-smb-3-02/
https://old.reddit.com/r/DataHoarde..._look_how_fast_i_am_smb_multichannel_hold_my/
https://codeinsecurity.wordpress.co...y-bsd-linux-and-windows-for-20gbps-transfers/
https://docs.microsoft.com/en-us/wi...troubleshoot/smb-multichannel-troubleshooting
https://www.spinics.net/lists/linux-cifs/msg18953.html
https://blog.chaospixel.com/linux/2016/09/samba-enable-smb-multichannel-support-on-linux.html
https://github.com/Azure-Samples/azure-files-samples/tree/master/SMBDiagnostics