I must say the iostats from the zpool are abysmal.
Just so you have something to compare: I have 3x SanDisk Ultra 3D SSDs (also consumer drives) in a mirrored zpool, and this is what I get from fio.
Run status group 0 (all jobs):
WRITE: bw=672MiB/s (704MB/s), 83.0MiB/s-197MiB/s...
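In case it helps to reproduce, a similar multi-job random-write run can be done with something like this (directory, job count, and sizing here are just examples, not my exact job file):

fio --name=randwrite --directory=/tank/fiotest --rw=randwrite \
    --bs=4k --size=1G --numjobs=8 --ioengine=posixaio \
    --runtime=60 --time_based --group_reporting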
Ok, I thought that on ZFS on Linux (I'm on FreeBSD) you would see the volblocksize/recordsize of the zpool, but it looks like you don't.
As the others already suggested, check your volblocksize/recordsize; the default of 128k should be fine.
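Checking and setting it is quick (the pool/dataset names here are just examples):

# recordsize for filesystems, volblocksize for zvols
zfs get recordsize tank/data
zfs get volblocksize tank/vm-100-disk-0
# recordsize can be changed anytime (affects new writes only);
# volblocksize is fixed at zvol creation
zfs set recordsize=128k tank/data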
I'm generally rather skeptical of ZFS benchmarks because of the magic involved (caching and compression can skew the numbers)...
For random IO you shouldn't use raidz1, let alone raidz2; use mirrored vdevs. There are already multiple answers to this in the forum - just search for it.
If you want to improve sequential IO throughput you could use raidz1, but that of course depends on your IOPS workload.
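To illustrate the two layouts (disk names are placeholders):

# striped mirrors: best random IOPS, 50% usable capacity
zpool create tank mirror da0 da1 mirror da2 da3
# raidz1: more usable capacity and good sequential throughput,
# but random IOPS of roughly a single disk per vdev
zpool create tank raidz1 da0 da1 da2 da3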
I would start without L2ARC...
So I upgraded to 6.2 (5.4.60-1-pve) yesterday and the outcome is the same.
I also captured an strace and a tcpdump; see the attached tar.
What's interesting is that the initial NFS session goes via the correct interface, and then, beginning with SECINFO_NO_NAME (according to the RFC this handles the security negotiation between client...
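If you want to reproduce the captures, something along these lines works (interface name and pid are placeholders):

tcpdump -i <iface> -s 0 -w nfs.pcap port 2049
strace -f -tt -o nfs.strace -p <nfsd-pid>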
Was the problem always there, or when did it start to appear?
Did you change something before that?
Is this your first rodeo with ZFS?
But couldn't the IO delay also come from the NFS part?
Can you give us an example with actual numbers, please?
I don't have much experience with preseed, but I'm using the following and it works in my case (without any systemctl magic):
# Software Selections
# install only the ssh-server and minimal tasks
tasksel tasksel/first multiselect ssh-server minimal
# extra packages to install on top of the selected tasks
d-i pkgsel/include string lsof strace openssh-server...
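Assuming you serve the file over HTTP, you can point the installer at it from the boot prompt:

auto url=http://<your-server>/preseed.cfg

(or preseed/file=/cdrom/preseed.cfg if the file is baked into the install media).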