Hi, I use a container as an SMB server connected via a 10GbE NIC. The underlying storage is a striped zpool consisting of 2x 1TB HDDs, added to the container as a mountpoint/subvol. With async I/O and jumbo frames I get about 1 GB/s from my workstation to the SMB share, but only for a few seconds; after that, the speed drops to the pool's HDD performance.
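For context, this is roughly the shape of the setup described above; device, pool, and share names are placeholders, not taken from my actual config:

```
# two 1 TB HDDs striped into one pool (no redundancy)
zpool create tank /dev/sdX /dev/sdY

# jumbo frames on the 10GbE NIC
ip link set enp1s0 mtu 9000

# smb.conf fragment enabling Samba async I/O (example values)
# [share]
#     path = /tank/share
#     aio read size = 1
#     aio write size = 1
```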
Is there any way to tweak this so I can sustain the full speed of the NIC for longer? Redundancy isn't an issue, and I'm willing to lose unfinished transfers in case of power loss etc. If necessary I could commit about 100 GB of RAM (maybe as a ramdisk, if that would work?). Read performance isn't an issue either. Preferably I'd like to get this done without additional hardware, but if absolutely necessary I could commit a Corsair Force MP510 960 GB NVMe SSD. The typical file size I'd like to push over SMB is about 50 GB.
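By "ramdisk" I mean something along these lines (a minimal sketch of the idea, not something I've set up yet); the path is hypothetical, and anything landing there would obviously be gone after a power loss until it's moved onto the pool:

```
# 100 GB tmpfs that an SMB share could point at as a fast landing zone
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=100G tmpfs /mnt/ramdisk
```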