Search results

  1. rholighaus

    LXC container with Wekan/snapd freezes completely

    Hi Stoiko, This is a production system on a PVE subscription, so I don't want to risk installing 3.1.6 binaries. I disabled replication to avoid the lxc-freeze command and will wait for the 3.1.6 binary to be released for the subscription repository. Thank you.
  2. rholighaus

    LXC container with Wekan/snapd freezes completely

    Please note that I have another thread (sorry) because I think the issue is caused by lxc-freeze in combination with fuse inside the container: https://forum.proxmox.com/threads/ubuntu-snaps-inside-lxc-container-on-proxmox.36463/post-314596
  3. rholighaus

    LXC container with Wekan/snapd freezes completely

    proxmox-ve: 6.2-1 (running kernel: 5.4.34-1-pve)
    pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
    pve-kernel-5.4: 6.2-1
    pve-kernel-helper: 6.2-1
    pve-kernel-5.3: 6.1-6
    pve-kernel-5.4.34-1-pve: 5.4.34-2
    pve-kernel-4.15: 5.4-12
    pve-kernel-5.3.18-3-pve: 5.3.18-3
    pve-kernel-5.3.18-2-pve: 5.3.18-2...
  4. rholighaus

    Ubuntu Snaps inside LXC container on Proxmox

    Hi Wolfgang, it looks like I'm now running into exactly this problem with snapd in an LXC container that freezes completely about once a day when PVE replication sends an lxc-freeze to the container. Replication is hourly, though, so it doesn't always happen. Is there a way to prevent issuing the...
  5. rholighaus

    LXC container with Wekan/snapd freezes completely

    Unfortunately, after updating to 6.2, the container hung again this morning. The last action was an lxc-freeze issued by PVE in preparation for a filesystem sync. Killing the lxc-freeze doesn't help. Even issuing an lxc-stop <id> --kill hangs. Any idea how to kill the container without...
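
    When a container is stuck like this, one recovery attempt is to thaw it before force-stopping it. A minimal sketch, assuming container ID 121 as used in other posts in this thread; if the freeze itself is hung, this may not help:

        # Try to thaw the frozen container (ID 121 is an example)
        lxc-unfreeze -n 121
        # If that does not return, force-stop it as a last resort
        lxc-stop -n 121 --kill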
  6. rholighaus

    Linux Kernel 5.4 for Proxmox VE

    Did you run an apt-get dist-upgrade? Never run apt-get upgrade in PVE!
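
    For reference, a minimal sketch of the upgrade sequence generally recommended on a Proxmox VE node (run as root); plain apt-get upgrade can hold back or only partially install PVE packages:

        # Refresh package lists, then upgrade including dependency changes
        apt-get update
        apt-get dist-upgrade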
  7. rholighaus

    LXC container with Wekan/snapd freezes completely

    Okay, thank you! I'll keep an eye on the system and will mark this thread as "Solved" once the fix resolves the freeze.
  8. rholighaus

    LXC container with Wekan/snapd freezes completely

    Additional information: Looks like the "permission denied" error messages are related to these processes (found after restarting the container):
        root 91 1 0 07:23 ? 00:00:00 squashfuse /var/lib/snapd/snaps/wekan_807.snap /snap/wekan/807 -o ro,nodev,allow_other,suid
        root...
  9. rholighaus

    LXC container with Wekan/snapd freezes completely

    We are running the wekan snap package in an LXC container and it works well, but about once a day it completely freezes and stops responding to ping requests. The only way to stop it is lxc-stop -n 121 --kill and restart it using pct start 121. The container's /var/log/syslog just stops at the time...
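
    A minimal sketch of the force-stop and restart sequence described above, using container ID 121 from the post:

        # Force-kill the frozen container, then start it again
        lxc-stop -n 121 --kill
        pct start 121
        # Check that it is running again
        pct status 121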
  10. rholighaus

    Proxmox and ZFS - Unstable under load/extraneous events affecting zpool drives?

    This causes me to worry, as we are running various Windows Server VMs on ZVOLs. Any response from the Proxmox team yet?
  11. rholighaus

    ProxMox Implementation of ZFS o_O

    @fabian Just to clarify: Bigger volblocksize uses more space but gives more performance?
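
    A small sketch for checking this on an existing zvol; volblocksize can only be set when a zvol is created, but it can be inspected afterwards. The dataset name below is just an example:

        # Show the block size of an existing VM disk zvol
        zfs get volblocksize rpool/data/vm-100-disk-0
        # Compare allocated vs. logical space to see how block size and
        # compression affect space usage
        zfs get used,logicalused,compressratio rpool/data/vm-100-disk-0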
  12. rholighaus

    Ubuntu Snaps inside LXC container on Proxmox

    I'm successfully running Wekan via Snap on an Ubuntu 19.04 container (which has since been upgraded to 19.10). These are my LXC settings:
        arch: amd64
        cores: 4
        features: keyctl=1,nesting=1,fuse=1
        hookscript: local:snippets/pve-hook
        hostname: projekte
        memory: 2048
        net0...
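
    The same container features can also be enabled from the PVE host with pct instead of editing the config file directly. A minimal sketch, with the container ID as an example:

        # Enable keyctl, nesting and fuse for container 121
        pct set 121 --features keyctl=1,nesting=1,fuse=1
        # Verify the resulting configuration
        pct config 121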
  13. rholighaus

    [SOLVED] How do the users get to the new server after replication?

    Proxmox ZFS replication replicates regularly from one node to one or more others. The interval is configurable - I replicate every 20 minutes, for example. For HA you need 3 servers, one of which can be very small, or very slow, or both. In our setup we use two servers with...
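
    A minimal sketch of setting up such a replication job from the PVE command line, assuming guest ID 100, target node "carrier-2" and a 20-minute schedule as examples:

        # Create a storage replication job (job ID format is <guest>-<number>)
        pvesr create-local-job 100-0 carrier-2 --schedule '*/20'
        # List jobs and check their status
        pvesr list
        pvesr status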
  14. rholighaus

    [SOLVED] How do the users get to the new server after replication?

    That definitely works. But as I said, if you run Proxmox in an HA setup, the VMs and containers are moved automatically when a node fails, and they take their IP addresses with them. That even works without a load balancer.
  15. rholighaus

    [SOLVED] How do the users get to the new server after replication?

    I'm not sure whether I understood the question correctly. If JayneWayne is referring to users in the sense of PVE administrators, the suggestion of HAProxy or something similar could be right. If it's about users of the containers or VMs: they take their IPs with them during a migration. Or...
  16. rholighaus

    Proxmox VE 6.1 released!

    Hi zaxx, Could you please share a bit more about your MTU issue so we can all learn from it? Thanks!
  17. rholighaus

    ZFS Storage Replication - transferred size bigger than actual disc size

    Maybe Bug 1824 and the published (but not yet implemented) patch will do the trick. I'm currently testing the patch (zfs send -Rpvc instead of zfs send -Rpv). I will report back. Either replication has to check whether compression is enabled and then use the -c flag (but compression is enabled by...
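
    A minimal sketch of the difference between the two send invocations; dataset, snapshot and target names are examples only:

        # Without -c the stream carries decompressed blocks
        zfs send -Rpv rpool/data/subvol-115-disk-2@repl | ssh carrier-2 zfs receive -F rpool/data/subvol-115-disk-2
        # With -c (compressed send) the blocks stay compressed on the wire, so the
        # transferred size stays close to the on-disk, compressed size
        zfs send -Rpvc rpool/data/subvol-115-disk-2@repl | ssh carrier-2 zfs receive -F rpool/data/subvol-115-disk-2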
  18. rholighaus

    ZFS Storage Replication - transferred size bigger than actual disc size

    We are using ZFS storage replication. A container running on host carrier-1 is replicated to carrier-2.
        NAME                          USED   AVAIL  REFER  MOUNTPOINT
        rpool/data/subvol-115-disk-1  3.29G  96.7G  3.26G  /rpool/data/subvol-115-disk-1
        rpool/data/subvol-115-disk-2  333G   82.6G...
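
    The listing above can be reproduced with a plain zfs list on the source host, for example:

        # Show name, space usage and mountpoint of the replicated subvolumes
        zfs list -o name,used,avail,refer,mountpoint -r rpool/data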
  19. rholighaus

    Operational issues with ZFS and pvesr

    Hi Don, You do not want to run znapzend and PVE storage replication between the same nodes. They do conflict and will most likely cause PVE storage replication to fail at some point. We are, however, successfully running a combination of local PVE ZFS replication and znapzend for off-site...