it's an interesting question...
KSM only works on anonymous pages (not the page cache), which isn't the same thing as the ARC, which operates on a file/block basis - see here https://www.kernel.org/doc/html/latest/admin-guide/mm/ksm.html
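if you want to see what each side is actually doing, these are roughly the counters I'd check on the pve host (just a sketch, assuming ZFS and KSM are both active there):
grep . /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing   # anonymous pages ksm has merged / is sharing
awk '$1 == "size"' /proc/spl/kstat/zfs/arcstats                           # current ARC size in bytes (third column)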
to be honest I don't use ZFS with proxmox but I use it with freenas and used it with...
yes your boot_command is a little bit different than mine:
"boot_command": [
"<esc><wait>",
"install <wait>",
"preseed/url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/{{ user `preseed` }} <wait>"
...
but it's strange that it works for some and not for others...
hmm... so just to be clear: packer creates the vm on the pve node and starts it, but at the boot screen you see the boot selection menu and not the debian installer ?
it depends
if you use async for NFS then you might get a filesystem/IO error for the VMs and need to repair it with fsck at the next boot, if it is recoverable at all - depending on when the dirty data gets flushed.
with the sync option you might also get IO errors (every write is flushed straight away), but the filesystem will be fine at the next boot...
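just to illustrate the difference, this is roughly what the server-side version of that setting looks like in /etc/exports on a plain linux NFS server (path and subnet are made-up examples):
/tank/vmstore 192.168.1.0/24(rw,sync,no_subtree_check)    # server commits writes to stable storage before replying
/tank/vmstore 192.168.1.0/24(rw,async,no_subtree_check)   # replies before the data hits disk, so a server crash can lose it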
a small update and the solution:
a few weeks ago we upgraded freenas but it was still mounting with the wrong ip/interface.
today we updated proxmox to 6.4-13 with 5.4.151-1 kernel and that solved the problem without any further adjustment.
so I guess in the end it was a kernel bug but...
on pve 6.4 with ansible 2.7 it's working out of the box... also with the command module
ansible -b -D --ask-vault-pass -vvv -m shell -a "qm resize 102 scsi0 5G" HOST
HOST | CHANGED | rc=0 >>
Size of logical volume elastic/vm-102-disk-0 changed from 4.00 GiB (1024 extents) to 5.00 GiB (1280...
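and the same ad-hoc call with the command module instead of shell works too (HOST and the vm/disk values are just the ones from the example above):
ansible -b -D --ask-vault-pass -m command -a "qm resize 102 scsi0 5G" HOST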
ok, I'm asking because I also had problems with lxc (though not with this pve version) that showed roughly the same symptoms you're describing - there the tcp window filled up after a few seconds, even with a simple apt update, and then traffic stalled.
how does it look on netapp or switch side ?
would it be possible to share the tcpdump from both sides ?
is it happening when you transfer files to the nfs share or also in an idle state ?
can you show the output of mount | grep sto0 ?
did you try any iperf test ?
is it only slow from inside the VM or also from the pve host ?
anything noteworthy in dmesg, syslog, etc... on the pve host and sto01 ?
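roughly what I'd run on the pve host to gather that, as a starting point (interface name, storage IP and capture file are just placeholders):
mount | grep sto0                                                  # shows the actual nfs mount options (vers, proto, rsize/wsize)
iperf3 -c 10.0.0.50 -t 30                                          # raw throughput from pve to the storage (with iperf3 -s running on the other end)
dmesg -T | grep -iE 'nfs|timeout'                                  # nfs client errors/timeouts in the kernel log
tcpdump -ni vmbr0 -w pve-side.pcap host 10.0.0.50 and port 2049    # capture to compare with the netapp/switch side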