Sure, but we see the same problems when the NFS uplink is saturated. We always use LZO compression. Did you experiment with the throttle settings in your NFS setup?
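In case it helps, here is a minimal sketch of throttling on the Proxmox side instead of the NFS mount. The bandwidth value (~50 MiB/s), the VMID 100, and the storage name nfs-backup are placeholder assumptions, not recommendations:

# /etc/vzdump.conf -- global backup defaults, bwlimit is in KiB/s
bwlimit: 51200

# or per job on the command line
vzdump 100 --bwlimit 51200 --compress lzo --storage nfs-backup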
Hi, has anyone found a final solution for this yet? We experienced the same delay problems, which even led to the affected disks inside the VM being remounted read-only, plus some additional journaling errors that caused file system corruption.
We're on proxmox-ve: 4.4-82 (running kernel: 4.4.40-1-pve)...
vm.dirty_ratio=5 # force synchronous writeback once dirty pages exceed 5% of RAM
vm.dirty_background_ratio=1 # start background flushing at 1%
vm.min_free_kbytes=131072 # for servers under 16GB of RAM
vm.min_free_kbytes=262144 # for servers between 16GB-32GB RAM
vm.min_free_kbytes=393216 # for servers between 32GB-48GB RAM
vm.min_free_kbytes=524288 # for servers above 48GB RAM
vm.swappiness=1 # swap only when memory is nearly exhausted
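If someone wants to apply these persistently without a reboot, a minimal sketch (the file name under /etc/sysctl.d/ is my own choice; pick the min_free_kbytes line that matches your RAM from the list above):

cat >/etc/sysctl.d/90-vm-writeback.conf <<'EOF'
vm.dirty_ratio = 5
vm.dirty_background_ratio = 1
vm.min_free_kbytes = 262144
vm.swappiness = 1
EOF
sysctl --system   # reload all sysctl configuration files immediately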
zfs set primarycache=metadata rpool/swap # cache only metadata in ARC for the swap zvol
zfs set secondarycache=metadata rpool/swap # same for L2ARC
zfs set compression=zle rpool/swap # cheap zero-run compression only
zfs set checksum=off rpool/swap # skip checksumming for swap data
zfs set sync=always rpool/swap # write synchronously so swapped pages are never lost
zfs set logbias=throughput rpool/swap # favour throughput over latency
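And to double-check the swap zvol afterwards (assuming the default rpool/swap dataset created by the Proxmox ZFS installer):

zfs get primarycache,secondarycache,compression,checksum,sync,logbias rpool/swap
swapon --show   # confirm the zvol is actually in use as swap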
Did you test it? Mind sharing it here?
Great feedback, thank you very much! We'll test these settings in our environment!
Well, we finally (almost) solved the issue by increasing the capacity of the backup path, especially by adding RAM on the storage side. The storage can now cache far more data, so Proxmox can send backups at full throttle without waiting on write operations. It still happens under very rare conditions, but given the number of VMs and how infrequently it occurs, we're fine with the current situation.