Sure, next time this occurs. By the way, thanks for all the support even though I'm not marked as a subscriber (I do take care of some companies I bought subscriptions for, but this account is my private one ;) )
System load (I/O, as this is a fileserver ;) ) looks high in this case
But we also encounter this across the whole cluster, on systems with low load and with random containers.
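For reference, a quick way to check whether I/O really is the bottleneck when it happens (iostat comes from the sysstat package; the 5-second interval and 3 samples are just example values):

# apt-get install sysstat
# iostat -x 5 3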
Mostly affected containers are:
- postfix relay
- samba domain controller (plus dhcp server)
- samba file server
-...
# gdb
GNU gdb (Debian 7.12-6) 7.12.0.20161007-git
Copyright (C) 2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by...
Not sure if this AppArmor stuff is involved:
Aug 22 15:42:53 proxmox3 pct[5934]: <root@pam> starting task UPID:proxmox3:00001730:01085DB5:599C34DD:vzstart:109:root@pam:
Aug 22 15:42:53 proxmox3 pct[5936]: starting CT 109: UPID:proxmox3:00001730:01085DB5:599C34DD:vzstart:109:root@pam:
Aug 22...
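If AppArmor is involved, denials should show up in the kernel log; a quick check would be something like this (aa-status comes with the apparmor utilities):

# journalctl -k | grep -i -e apparmor -e denied
# aa-status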
Hm, basically the same: somehow the containers seem to be running, but don't do anything. An increase in cron PIDs is a symptom that makes it easy for us to tell when they are hanging. We have had this since https://forum.proxmox.com/threads/pve-suddunly-stopped-working-all-cts-unrecheable.26458/
It...
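For anyone wanting to watch the same symptom: a rough sketch for counting cron processes per container from the host (not the exact check we run; the 5-second timeout is arbitrary, and pct exec may itself hang on an affected container, which is a hint in its own right):

for ct in $(pct list | awk 'NR>1 {print $1}'); do
    n=$(timeout 5 pct exec "$ct" -- pgrep -c cron 2>/dev/null)
    echo "CT $ct: ${n:-no answer} cron processes"
done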
Picking up on this... Still have this randomly on different hosts.
Latest backtrace:
#0 0x00007fbc42ebb536 in futex_abstimed_wait_cancelable (private=0, abstime=0x0, expected=0, futex_word=0x7ffdc1090c00) at ../sysdeps/unix/sysv/linux/futex-internal.h:205
#1 do_futex_wait...
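For completeness, a backtrace like the one above can be grabbed non-interactively by attaching gdb to the stuck process (<pid> is a placeholder; the file/line details only show up with the libc debug symbols installed):

# gdb -batch -ex "thread apply all bt" -p <pid>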
You're mixing up different things. The ZFS 0.7.0 entry does not refer to this issue:
there were some bugs in some edge cases where installing was not possible, as well as some hardware compatibility fixes.
Ok, and the warning line differs from another machine still running the pre-update version:
Pre-update:
# systemctl status cryptsetup.target
● cryptsetup.target - Encrypted Volumes
Loaded: loaded (/lib/systemd/system/cryptsetup.target; static; vendor preset: enabled)
Active: active since Wed...
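A quick way to compare the boot ordering between the two machines might be something like the following (just a suggestion; zfs-import-cache.service is an assumption, on some setups the pool is imported by zfs-import-scan.service instead):

# systemd-analyze critical-chain cryptsetup.target
# systemd-analyze critical-chain zfs-import-cache.service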
Ah, clearly too late ;) The pool consists of two 4TB drives (devicemapper names 4tb1 and 4tb2)
# journalctl -b -u "*cryptsetup*" -u "zfs*" -u "local-fs.target" -u "pve-manager"
-- Logs begin at Tue 2017-08-08 10:04:14 CEST, end at Tue 2017-08-08 13:17:21 CEST. --
Aug 08 10:04:15 test-pve...
It looks like the devices for the pools become available too late (these are LUKS-encrypted device-mapper disks). The last batch of updates was related to LVM and device-mapper, so probably something changed there and the volumes are not present yet when ZFS tries to import the pools.
Start-Date: 2017-08-02 20:51:41...
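If it really is an ordering issue, one possible workaround sketch (assuming the pool is imported by zfs-import-cache.service, which I haven't verified here) would be a drop-in that delays the import until the crypto devices are up:

# mkdir -p /etc/systemd/system/zfs-import-cache.service.d
# cat > /etc/systemd/system/zfs-import-cache.service.d/wait-for-crypt.conf <<'EOF'
[Unit]
After=cryptsetup.target
Requires=cryptsetup.target
EOF
# systemctl daemon-reload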
Ok, somehow the pools become available too late now (which wasn't the case before), and PVE already tries to start containers while the subvol is not there. In doing so it creates folders for the mountpoints, which then prevents the real subvol from mounting.
If I see this correctly, all containers are being...
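For the record, a rough way to spot datasets whose mountpoint directory has already been populated (and which therefore refuse to mount); 'tank' is just a placeholder for the pool name:

zfs list -rH -o name,mountpoint,mounted tank | while read -r name mp mounted; do
    [ "$mounted" = "no" ] && [ -d "$mp" ] && [ -n "$(ls -A "$mp")" ] && echo "$name: $mp not empty"
done

After moving the stray folders out of the way, zfs mount -a should bring the subvols back.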
Right now, there's a huge gap (and need) for elastic persistent storage. Here's a list of the currently available volume plugins:
https://docs.docker.com/engine/extend/legacy_plugins/#volume-plugins
However, the list is not fully up to date. I don't like the idea of block storage attached to containers, as...
The configuration has not been changed in years. I mentioned certs because that's my experience from the last few years ;)
Problem is I can't debug properly without any output.