VM blocked due to hung_task_timeout_secs

marsian

Active Member
Sep 27, 2016
55
5
28
Hi, has anyone found a final solution to this yet? We experienced the same delay problems, which even led to the affected disks inside the VMs being remounted read-only, plus additional journaling errors that caused file system errors...

We're on proxmox-ve: 4.4-82 (running kernel: 4.4.40-1-pve)...
 

gkovacs

Well-Known Member
Dec 22, 2008
509
48
48
Budapest, Hungary
marsian said:
Hi, has anyone found a final solution to this yet? We experienced the same delay problems, which even led to the affected disks inside the VMs being remounted read-only, plus additional journaling errors that caused file system errors...

We're on proxmox-ve: 4.4-82 (running kernel: 4.4.40-1-pve)...

We have found that the following settings, while they do not solve the problem completely, considerably lessen its impact. All of these settings are applied on the Proxmox hosts.

1. Linux virtual memory subsystem tuning

vm.dirty_ratio and vm.dirty_background_ratio
You need to lower these considerably from the default values. The purpose is to lessen the IO blocking that occurs when processes hit the dirty page cache limit in memory and the kernel starts writing the pages out. Add the following lines to /etc/sysctl.conf
Code:
vm.dirty_ratio=5
vm.dirty_background_ratio=1
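To get a feel for what these percentages mean in absolute terms, here is a rough sketch (the 64GB host size is just an example, not from the post):

```shell
# Rough arithmetic: how much dirty data the kernel buffers before
# acting, for a hypothetical 64GB host with the values above.
ram_mb=65536                     # example host RAM in MB -- adjust for your server
bg_mb=$(( ram_mb * 1 / 100 ))    # vm.dirty_background_ratio=1
max_mb=$(( ram_mb * 5 / 100 ))   # vm.dirty_ratio=5
echo "background writeback starts at ${bg_mb} MB of dirty pages"
echo "writers are throttled above ${max_mb} MB of dirty pages"
```

For comparison, with the stock defaults (dirty_ratio=20, dirty_background_ratio=10) the same host could accumulate on the order of 13GB of dirty pages before throttling kicks in, which is why a flush can stall guest IO for so long.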

vm.min_free_kbytes
You need to increase vm.min_free_kbytes from the Debian default value to about 128M for every 16GB of RAM you have in your server. So choose one of the following lines and add it to your /etc/sysctl.conf
Code:
vm.min_free_kbytes=131072     # for servers under 16GB of RAM
vm.min_free_kbytes=262144     # for servers between 16GB-32GB RAM
vm.min_free_kbytes=393216     # for servers between 32GB-48GB RAM
vm.min_free_kbytes=524288     # for servers above 48GB RAM
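The table above follows a simple rule, so here is a small helper (hypothetical, not from the post) that picks the matching value for a given RAM size:

```shell
# Pick vm.min_free_kbytes per the rule above: 131072 KB (128M) for
# each started 16GB block of RAM, capped at 524288 KB (512M).
min_free_for_gb() {
    blocks=$(( ($1 / 16) + 1 ))         # which 16GB bracket we fall into
    [ "$blocks" -gt 4 ] && blocks=4     # cap at the "above 48GB" value
    echo $(( blocks * 131072 ))
}

# Example: derive the setting from this machine's actual RAM (Linux only)
if [ -r /proc/meminfo ]; then
    ram_gb=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1048576 ))
    echo "vm.min_free_kbytes=$(min_free_for_gb "$ram_gb")"
fi
```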

vm.swappiness
Swapping on the host can also cause temporary IO blocking in the guests, so you need to limit it without disabling swapping completely. Add the following line to /etc/sysctl.conf
Code:
vm.swappiness=1

After adding these, don't forget to run sysctl -p (or reboot).

2. ZFS swap tuning
You should absolutely use these settings for system stability if your swap is on a ZFS ZVOL (the default installation places it there):
Code:
zfs set primarycache=metadata rpool/swap
zfs set secondarycache=metadata rpool/swap
zfs set compression=zle rpool/swap
zfs set checksum=off rpool/swap
zfs set sync=always rpool/swap
zfs set logbias=throughput rpool/swap
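Since these six commands only differ in the property being set, a small loop (a sketch of my own, not from the post; the dataset name is an argument so it also works for non-default pool layouts) can generate them for review before applying:

```shell
# Print the six "zfs set" commands for the given swap ZVOL.
# Dry run by default; pipe the output to sh to actually apply them.
swap_zvol_cmds() {
    for prop in primarycache=metadata secondarycache=metadata \
                compression=zle checksum=off sync=always logbias=throughput; do
        echo "zfs set $prop $1"
    done
}

swap_zvol_cmds rpool/swap
```

Reviewing first and then running `swap_zvol_cmds rpool/swap | sh` as root avoids typos against a dataset you cannot afford to misconfigure.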
 

marsian

Active Member
Sep 27, 2016
55
5
28
Well, finally we almost solved the issue by increasing the network capacity for backups, and especially by adding additional RAM on the storage side, so now we can cache far more data and Proxmox can send data to it "at full throttle" without having to wait for write operations. It still happens under very rare conditions, but given the number of VMs and how rarely it occurs, we're fine with the current situation.
 

gkovacs

Well-Known Member
Dec 22, 2008
509
48
48
Budapest, Hungary
marsian said:
Well, finally we almost solved the issue by increasing the network capacity for backups, and especially by adding additional RAM on the storage side, so now we can cache far more data and Proxmox can send data to it "at full throttle" without having to wait for write operations. It still happens under very rare conditions, but given the number of VMs and how rarely it occurs, we're fine with the current situation.

The problem was always less serious with backups, especially if you applied the tweaks I posted above... some VMs were more susceptible (Debian 7), some not at all (Ubuntu 14/15/16.04, Debian 9). But the real problem was always with restores and migrations: try restoring (or migrating) a big VM to local storage while you have active web or application serving VMs running, and you will see a lot of these errors on their consoles (these screengrabs are fresh, taken today).

Debian 6, IDE qcow2 on ZFS (screenshot: debian6-ide.jpg)

Debian 7, Virtio qcow2 on ZFS (screenshot: debian7-virtio.jpg)

Debian 7, Virtio qcow2 on ZFS (screenshot: debian7-virtio-2.jpg)

Of course the same thing happens with the IDE, Virtio and Virtio-SCSI interfaces; only the console errors differ. Network connections are disrupted, tasks are blocked or hung, and sometimes even the kernel freaks out. This is a QEMU / KVM / kernel issue, and no one seems to acknowledge it; even big companies like Red Hat only post mitigation strategies, as if this were an unavoidable side effect of using KVM. The weird thing is that not even the Proxmox developers have acknowledged this as a real problem, despite the fact that many of us have been reporting these issues for years.

Here is my bugreport on the Proxmox bugzilla:
https://bugzilla.proxmox.com/show_bug.cgi?id=1453
 

marsian

Active Member
Sep 27, 2016
55
5
28
That's true; luckily we didn't have to do many restores in recent times, but I'll add these tunings during our next maintenance window...

Hopefully the bug report will get some attention from the Proxmox team!

Has anyone else who is affected experimented with RAID controller cache sizes yet? So far we have only tried up to 4GB, but given that you can spec some controllers much higher (16/32GB and more), this could probably also mitigate the issue, as the cache would buffer writes headed to the hard disks...
 
