Any reason why CIFS_SMB_DIRECT is disabled in PVE kernel's CIFS module?

Domino

Active Member
May 17, 2020
Hi, I'm homelabbing with Proxmox and thought I'd manually create a CIFS share. That works fine, but as soon as I add the rdma parameter to the mount.cifs command, the mount fails. Checking dmesg, I see 'CIFS VFS: CONFIG_CIFS_SMB_DIRECT is not enabled'.
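For reference, the mount command is along these lines (server, share, and credentials are placeholders; SMB Direct needs SMB 3.0 or newer):
Code:
mount -t cifs //nas/share /mnt/test -o username=admin,vers=3.0,rdma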

So it appears the CIFS module was built with the flag set to disabled.

It would be really great if this were enabled, though I appreciate some things are done on purpose. Could a lead maintainer confirm why SMB Direct is not enabled for CIFS in the PVE kernel?

Also, I'm thinking I could recompile the kernel with the flag set. Is there a short tutorial covering which sources to download, and where to set this flag, so I can build a dev-test kernel?

In an ideal world I'd love to see this flag enabled by default. As far as I can deduce, leaving it enabled does not affect CIFS at all unless someone specifically mounts a share with the rdma parameter, so it shouldn't impact CIFS. Or are there other concerns?
 
Ok, I managed to get a kernel building and working.
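For anyone else trying this, the rough sequence was along these lines (a sketch only; check git.proxmox.com for the current repo layout and its README for the exact build dependencies):
Code:
git clone git://git.proxmox.com/git/pve-kernel.git
cd pve-kernel
# install the build dependencies the repo lists, then:
make
# install the resulting kernel package (exact name varies per release):
dpkg -i pve-kernel-*.deb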

Could some nice gent/lady point me in the direction of the appropriate file where I can insert the kernel config option:
CONFIG_CIFS_SMB_DIRECT=y

Thank you!
 
cd into the kernel source directory and run:
Code:
grep -R "CONFIG_CIFS_SMB_DIRECT" .
It will print the files containing "CONFIG_CIFS_SMB_DIRECT".

Otherwise you can also run "make menuconfig" to adjust the options via the GUI.
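If you'd rather flip the option non-interactively, the kernel's own helper script works too (assuming the usual scripts/ layout inside the source tree the PVE build checks out):
Code:
# run from the kernel source tree
./scripts/config --file .config --enable CONFIG_CIFS_SMB_DIRECT
make olddefconfig   # let kconfig resolve any dependent options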
 
WOW! I cannot believe how much of a performance increase this has brought to my setup.
I've been comparing Ceph, ZFS, physical disks, locally hosted qcow2 and raw images, and so on, and it seems that a bare-metal Windows Hyper-V host serving a CIFS share via SMB Direct with RDMA gives a huge increase in read/write performance for guests, not to mention a huge saving in CPU cycles.

Now why would the Proxmox team even have this disabled???!!!

Anyway, thanks @H4R0 !!!
 
That's true, it takes a lot of testing to get the best out of your hardware with KVM/Proxmox.

There are so many things that can end up causing a huge bottleneck.

Your setup still sounds strange, though; I'm able to get full performance with ZFS / Ceph.
 
So now that I've confirmed the performance advantages of SMB Direct, what I'd like to do is enable RDMA from the Proxmox GUI itself, so I can stay close to the codebase and just rely on patch files. Any idea where the code that creates the CIFS mounts lives in Proxmox?

Update:

Done. It was a Perl script in the storage plugins folder. All working great.
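For the record, on my node the file in question was the CIFS storage plugin (path is from my install; verify it on your version and back the file up before editing):
Code:
# stock location on my node; keep a backup copy
cp /usr/share/perl5/PVE/Storage/CIFSPlugin.pm /usr/share/perl5/PVE/Storage/CIFSPlugin.pm.bak
# then add 'rdma' to the mount options assembled in that file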

I don't know if anyone is even reading this, but if anyone knows how to hook a patch file into PVE updates so the Perl scripts are re-patched automatically, that would be fantastic! (If such a feature exists in the current framework, that is; otherwise I guess I'll knock up a patch script on the side, but it would be nice if there were a standardized facility for Perl script tweaks.)
 

Create a patch file, an apt hook, and a script that applies the patch. I use this approach for various "patch after update" problems.

Code:
DPkg::Post-Invoke { "/path/to/the/script"; };
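To make that concrete, here is roughly how I'd wire it up (all file names below are examples, not PVE conventions):
Code:
# /etc/apt/apt.conf.d/99-cifs-rdma  -- the hook file:
#   DPkg::Post-Invoke { "/usr/local/bin/reapply-cifs-rdma.sh"; };

# /usr/local/bin/reapply-cifs-rdma.sh  -- sketch of the re-apply script:
#!/bin/sh
PLUGIN=/usr/share/perl5/PVE/Storage/CIFSPlugin.pm
# only re-patch if a package update rolled the plugin back
if ! grep -q rdma "$PLUGIN"; then
    patch "$PLUGIN" /root/cifs-rdma.patch
fi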
 
