Shutdown logs? (Not the task viewer, but the host logs during shutdown.) You can easily capture those with a configured IPMI serial console.
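For example, something like this (a sketch; substitute your BMC address and credentials, and note the host kernel must also be configured for a serial console for anything useful to show up):

Code:
# attach to the host's serial-over-LAN console and log everything to a file
ipmitool -I lanplus -H <bmc-ip> -U <user> -P <password> sol activate | tee shutdown.log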
Maybe boot without "quiet"
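Roughly like this, assuming a standard GRUB setup (paths may differ on your install):

Code:
# drop "quiet" from GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT=""
# then regenerate the config and reboot
update-grub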
Oh, I've got it now, sorry. So shutdown in the GUI works as expected. Hm, I don't have this issue because I explicitly shut down important VMs before a planned host shutdown.
If you do "service pve-manager stop", does it return immediately? That should take down the VMs and CTs.
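An easy way to check (a trivial sketch):

Code:
# if this returns immediately instead of blocking until the VMs/CTs are down,
# the ordered shutdown is not doing its job
time service pve-manager stop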
Actually the same happens on Debian, too, so it is a distribution/app problem. You can report it to the upstream developers, or keep your own custom templates with this fix in place.
Are you sure it is a hard limit at 30GB? That is, does it happen at 31 but not at 30? Or does it simply get worse and worse as you increase the RAM size?
I think allocating and freeing 320GB of RAM (if qemu/KVM or the guest initializes it) is not cheap in any scenario.
I've seen that KVM indeed pegs all...
No, the dedup table (DDT) has an entry for each block, and ZFS uses variable block sizes up to "recordsize". If all blocks were 128k, that would be fine.
Smaller blocks are worse, because there are more of them, which means more entries in the DDT.
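Rough math, assuming the commonly cited figure of roughly 320 bytes of RAM per DDT entry (treat the exact per-entry size as an assumption; zdb reports the real numbers):

Code:
# 1 TiB of unique data at 128k blocks -> 2^23 = ~8.4M entries * 320 B = ~2.5 GiB RAM
# the same 1 TiB at 8k blocks -> 16x the entries = ~40 GiB RAM
# inspect the actual DDT statistics for a pool:
zdb -DD <poolname>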
Dedup is useless anyway, because storage is cheaper than RAM...
Your disk name has changed. I don't know enough about FreeBSD, but if you've chosen a virtio disk for your VM, its device name should be something like "vtbd..."
Based on what I've read so far, it seems to happen when there is high network traffic (or many open sockets?) and the veth peer is pinned in the container's namespace, so the outside peer can't be removed gracefully.
My opinion, again, is to simply ip link delete all the container's...
No, it didn't work. For now I've added a hack; maybe somebody else can elaborate on it.
In /usr/share/lxc/hooks/lxc-pve-poststop-hook I've put "ip link delete veth${vmid}i0" and it works every time.
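Roughly, the addition looks like this (a sketch: ${vmid} must expand to the CT ID in the hook's environment; if it doesn't in your setup, LXC exports ${LXC_NAME} to hook scripts, which on PVE equals the CT ID, and "i0" is the suffix of the container's first interface):

Code:
# appended to /usr/share/lxc/hooks/lxc-pve-poststop-hook:
# force-remove the container's leftover veth pair after it stops;
# "|| true" keeps the hook from failing when the interface is already gone
ip link delete "veth${vmid}i0" 2>/dev/null || true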
Btw, this issue goes back to 2013, it seems...
No, I can't.
Unfortunately, I don't think a stripped-down clone will help: since this error is so hard to hit, I'm afraid any change in the configuration (like the number of torrents Transmission keeps in cache) will make the problem disappear.
I'm pretty sure that the 2nd attempt (on failed startup) to clear...
Nope, just rebooted, so everything is new (4.0-51), including the kernel. Same problem. Anyway, considering that only a single container does this while all containers come from the same template, a specific service must be at fault.