I finally solved the problem: I was using writeback as the KVM HDD cache mode. According to this guide, when you use this type of cache:
> This mode causes qemu-kvm to interact with the disk image file or block device with neither O_DSYNC nor O_DIRECT semantics, so **the host page cache is used and writes are reported to the guest as completed when placed in the host page cache**, and the normal page cache management will handle commitment to the storage device. Additionally, the guest's virtual storage adapter is informed of the writeback cache, so the guest would be expected to send down flush commands as needed to manage data integrity. Analogous to a raid controller with RAM cache.
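For context, this is roughly how the two modes map onto QEMU's `-drive` option (a sketch with placeholder file names; Proxmox generates these flags for you from the VM config):

```sh
# writeback (my old setting): guest writes are acknowledged once they land
# in the host page cache, so host RAM fills up during heavy I/O
qemu-system-x86_64 -drive file=vm-disk.qcow2,format=qcow2,cache=writeback

# none ("No cache", my new setting): O_DIRECT bypasses the host page cache,
# leaving that RAM free for the ZFS ARC and the guests themselves
qemu-system-x86_64 -drive file=vm-disk.qcow2,format=qcow2,cache=none
```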
I highlighted the important part in bold. Due to this configuration, I was spending RAM both on the ZFS ARC and on buffer/cache on my node: every time I started a backup or any other I/O-demanding operation, I ended up consuming up to 36 GB of memory (20 GB on buffer/cache plus 16 GB on ARC). Even if I set the ARC to its minimum (64 MB), I wouldn't be able to serve that demand, since Proxmox plus KVM already need 8 GB of RAM. Anyway, I have now set all my KVM HDD caches to No Cache, and three days have passed without a RAM issue. I want to thank you both for your help. Now I can get back to setting up my Nethserver KVMs.
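In case it helps anyone hitting the same thing, these are the two changes on the Proxmox side. The VMID, disk name, and ARC size below are just examples, adjust them to your setup:

```sh
# Switch a VM disk to cache=none (same as picking "No cache" in the GUI);
# here 100 is the VMID and scsi0 the disk being changed
qm set 100 --scsi0 local-zfs:vm-100-disk-0,cache=none

# Optionally cap the ZFS ARC so it can't grow unbounded, e.g. to 4 GiB
# (value is in bytes); takes effect after rebuilding the initramfs and rebooting
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
update-initramfs -u
```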
I'm also experiencing similar issues when copying/creating files over NFS/SMB.