For anyone else who comes here looking for a solution: it seems the [lxc monitor] process is to blame for an undead container.
Run 'ps aux | grep [container ID]', then kill the [lxc monitor] process with -9.
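Something like this (101 is just an example container ID here; use your own, and take the PID from the ps output):
ps aux | grep 101    # look for the line containing "[lxc monitor]" followed by your container ID
kill -9 <PID>        # PID of that stuck [lxc monitor] process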
All right, so I modified /etc/lvm/lvm.conf with the following filter:
global_filter = [ "r|/dev/sda*|", "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|", "r|/dev/mapper/.*-(vm|base)--[0-9]+--disk--[0-9]+|" ]
After this, dstat shows far less I/O on the drive (for me it's /dev/sda).
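In case it helps, this is roughly where that line sits in /etc/lvm/lvm.conf (global_filter belongs in the devices { } section; /dev/sda is just the drive I want to let spin down, adjust as needed):
devices {
    # reject the spinning drive, ZFS zvols and the Proxmox LVM mappings so LVM scans stop waking the drive
    global_filter = [ "r|/dev/sda*|", "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|", "r|/dev/mapper/.*-(vm|base)--[0-9]+--disk--[0-9]+|" ]
}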
Then I downloaded...
Thanks for the reminder, but I have good experience with that - I haven't had a drive fail because of this, and I've been putting my drives to sleep for the past 20+ years :-)
I see that the topic gets mulled over again and again, but I haven't found a clear solution.
One of my drives is used for XPenology, and it really only needs to work 2-3 hours a day. Unfortunately I'm using an Atom J1900, which means I have no IOMMU that could be used to pass the controller through to...
My setup:
- ZFS pool on rust (spinning hard drive) named rpool (also the root filesystem)
- ZFS pool on SSD named rpool-ssd
- NFS share on Synology
- I'm using ayufan's patches for differential backups
I have set up my VMs on rpool-ssd; the CTs were initially on rpool, but I migrated them.
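For anyone curious, moving a CT's root disk to another storage can be done roughly like this (a sketch with CT 100 and a storage named rpool-ssd as examples; adjust the ID and storage name to your setup):
pct shutdown 100                       # stop the container first
pct move_volume 100 rootfs rpool-ssd   # move the root disk to the SSD-backed storage
pct start 100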
CTs are working...
I'm not sure if this will still be useful for you, but in my case the problem was solved by updating the ZFS cache file:
zpool set cachefile=/etc/zfs/zpool.cache <tank>
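For example, with a pool named rpool as in the setup above (just an illustration, replace it with your pool name); I'd also refresh the initramfs afterwards so the updated cache file is picked up at boot, though double-check whether that step is needed for your setup:
zpool set cachefile=/etc/zfs/zpool.cache rpool    # replace rpool with your pool name
update-initramfs -u -k all                        # rebuild initramfs so it includes the updated zpool.cache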
I tested all the other methods listed in this forum; none of them worked.