So I upgraded to 6.2 (5.4.60-1-pve) yesterday and the outcome is the same.
I also captured an strace and a tcpdump; see the attached tar.
The interesting part is that the initial NFS session goes via the correct interface, and then, beginning with SECINFO_NO_NAME (according to the RFC this handles the sec between client...
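For reference, this is roughly the kind of capture I mean; the output file name is just a placeholder, and the filter assumes NFS on its default port:

# capture NFS traffic on all interfaces to see which source IP/interface gets used
tcpdump -i any -nn -w nfs-debug.pcap port 2049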
Was the problem always there, or when did it start to appear?
Did you change anything before that?
Is it your first rodeo with ZFS?
But couldn't the IO delay also come from the NFS part?
Can you give us an example with actual numbers, please?
I don't have much experience with preseed, but I'm using the following and it works in my case (without any systemctl magic):
# Software Selections
tasksel tasksel/first multiselect ssh-server minimal
d-i pkgsel/include string lsof strace openssh-server...
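Just for context, this is roughly how that file gets pulled in at install time; the URL is only a placeholder for wherever you host the preseed file:

# kernel command line at the installer boot prompt
auto=true priority=critical url=http://192.0.2.10/preseed.cfg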
Yes, we downgraded the kernel due to https://forum.proxmox.com/threads/kernel-oops-with-kworker-getting-tainted.63116/page-2#post-299247
Upgrading to the latest PVE isn't something I want to do at the moment, though.
@Stoiko Ivanov, thanks for your time.
Here you go:
root@hv-vm-01:/root# ip route
default via 10.0.100.254 dev vmbr0 onlink
10.0.11.0/25 dev enp9s0.11 proto kernel scope link src 10.0.11.3
10.0.12.0/28 dev enp1s0f0 proto kernel scope link src 10.0.12.1
10.0.100.0/24 dev vmbr0 proto kernel scope...
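For completeness, ip route get shows which outgoing device and source address the kernel would pick for a given destination; the address below is just an example host in the storage subnet:

root@hv-vm-01:/root# ip route get 10.0.12.5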
Since the upgrade to PVE 6.1 (it was working fine with 6.0) we have the problem that NFS mounts use a random source IP/interface instead of the one in the same VLAN.
Our current config looks like this:
pve-manager/6.1-7/13e58d5e (running kernel: 5.0.21-5-pve)
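To see which source IP the NFS connection actually ended up with, we check the local address column of the established TCP session to port 2049 (filter syntax as in the ss man page):

# established connections towards the NFS port, local address shows the source IP in use
ss -tn '( dport = :2049 )'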
I can only speak for ZFS on FreeBSD, but I guess it's the same on Linux...
That will be difficult to almost impossible, because you have the transaction groups (TXGs) and compression (if enabled) in between... but it's also not necessary from a performance point of view, because there won't be much to gain. zfs...
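If you still want to watch what actually ends up on the disks after the TXGs and compression have done their thing, zpool iostat gives you the physical view; the pool name here is just a placeholder:

# per-vdev read/write ops and bandwidth, refreshed every second
zpool iostat -v tank 1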
We also hit this problem:
Feb 25 01:49:57 hv-vm-01 kernel:[123814.413163] #PF: supervisor read access in kernel mode
Feb 25 01:49:57 hv-vm-01 kernel:[123814.413735] #PF: error_code(0x0000) - not-present page
Feb 25 01:49:57 hv-vm-01 kernel:[123814.414312] PGD 0 P4D 0
Feb 25 01:49:57 hv-vm-01...
Basically, I would always recommend ZFS except when it comes to raw speed... you need to understand what kind of IOPS your workload will produce and then decide whether ZFS will help you or fight against you.
In your case I guess it will be random IOPS with a 50/50 read/write mix, and I also guess...
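To get a feeling for that, a quick fio run with a 50/50 random read/write mix is a reasonable starting point; the path, size and runtime below are only example values, and results on ZFS will include ARC caching effects:

# 4k random IO, 50% read / 50% write, on a file inside the dataset you want to test
fio --name=randrw-test --filename=/tank/fio.test --rw=randrw --rwmixread=50 \
    --bs=4k --size=4G --iodepth=32 --ioengine=libaio \
    --runtime=60 --time_based --group_reporting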