Recent content by andwoo8182

  1.

    Recompiling apparmor on new kernel

    I am attempting to test a few newer but unsupported kernels, which boot fine, but AppArmor breaks, diminishing my container security. I am attempting to test a high LXC container count (1350) (see threads below). I could test the containers without AppArmor, but I would prefer to try to do this...
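    Running a container without AppArmor confinement for testing can be done per-container; a minimal sketch, assuming a hypothetical container ID 100 and the standard raw LXC config key (this reduces the container's security, as noted above):

    ```
    # /etc/pve/lxc/100.conf  (container ID is illustrative)
    # Disable AppArmor confinement for this one container only
    lxc.apparmor.profile: unconfined
    ```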
  2.

    Scaling past 1350 Containers seccomp errors & vmap allocation failure

    I tested with privileged containers & got the same result, so I'm guessing that points more to a kernel issue than something LXC-specific around unprivileged containers (like the bpf_jit_limit, which seems to be involved).
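    If bpf_jit_limit is the bottleneck, raising it is a one-line sysctl; the value below is purely illustrative, not taken from the thread (each container's seccomp filter consumes JIT-compiled BPF memory, and the kernel's default limit can be exhausted at very high container counts):

    ```
    # /etc/sysctl.d/99-bpf.conf  -- sketch; value is an assumption
    # Raise the global limit (in bytes) on JIT-compiled BPF memory
    net.core.bpf_jit_limit = 1000000000
    ```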
  3.

    Scaling past 1350 Containers seccomp errors & vmap allocation failure

    From what I've read on various posts, related but mostly unrelated, it sounds like it is either fragmentation, or zfs/cgroups usage of certain kernel memory areas. Today I tried adding root memlock limits, as they weren't specified and I wasn't sure if there was an interaction there, but no...
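    The root memlock limits mentioned above would typically go in limits.conf; a minimal sketch (the `unlimited` values are an assumption, since the preview cuts off before the actual values):

    ```
    # /etc/security/limits.conf  -- sketch of root memlock limits
    root soft memlock unlimited
    root hard memlock unlimited
    ```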
  4.

    Scaling past 1350 Containers seccomp errors & vmap allocation failure

    So after spending much time diagnosing increasing load issues that seemed disproportionate to the increasing container count, I eventually realised that my Router Advertisements were sending far too many multicast messages to configure IPv6, resulting in about 3x the load average I now have...
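    Suppressing Router Advertisement processing on the host side is a few sysctls; a sketch, consistent with the `net.ipv6.conf.default.autoconf = 0` setting quoted in the next post (whether to also disable `accept_ra` is an assumption, as the preview doesn't say which knobs were used):

    ```
    # /etc/sysctl.d/99-ipv6-ra.conf  -- sketch: stop RA/SLAAC processing
    net.ipv6.conf.all.accept_ra = 0
    net.ipv6.conf.default.accept_ra = 0
    net.ipv6.conf.all.autoconf = 0
    net.ipv6.conf.default.autoconf = 0
    ```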
  5.

    Scaling past 1350 Containers seccomp errors & vmap allocation failure

    Also, my sysctl conf: vm.swappiness=100 kernel.keys.maxkeys = 100000000 kernel.keys.maxbytes = 200000000 kernel.dmesg_restrict = 1 vm.max_map_count = 262144 net.ipv6.conf.default.autoconf = 0 fs.inotify.max_queued_events = 167772160 fs.inotify.max_user_instances = 167772160 # def:128...
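    The inline list above reads as a sysctl.conf fragment; only the values visible before the preview cuts off are reproduced here:

    ```
    # /etc/sysctl.conf  -- settings as quoted above
    vm.swappiness = 100
    kernel.keys.maxkeys = 100000000
    kernel.keys.maxbytes = 200000000
    kernel.dmesg_restrict = 1
    vm.max_map_count = 262144
    net.ipv6.conf.default.autoconf = 0
    fs.inotify.max_queued_events = 167772160
    fs.inotify.max_user_instances = 167772160   # def:128 (rest truncated in preview)
    ```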
  6.

    Scaling past 1350 Containers seccomp errors & vmap allocation failure

    Yeah I have set a max for ZFS ARC of 64GB - I tried 32GB, but later found out that when I set a max I need to set a dnode limit %, as I encountered some issues there with pruning. Here is my zfs.conf: options zfs zfs_arc_max=68719476736 options zfs l2arc_noprefetch=0 options zfs...
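    The zfs.conf options quoted above, laid out one per line; the dnode parameter name is an assumption (`zfs_arc_dnode_limit_percent` is the ZFS module parameter for a dnode limit expressed as a percentage of ARC, and its value here is illustrative, since the preview is truncated):

    ```
    # /etc/modprobe.d/zfs.conf  -- sketch from the values quoted above
    options zfs zfs_arc_max=68719476736
    options zfs l2arc_noprefetch=0
    # Assumed parameter; a dnode limit as a % of ARC metadata
    options zfs zfs_arc_dnode_limit_percent=25
    ```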
  7.

    Scaling past 1350 Containers seccomp errors & vmap allocation failure

    Yeah I saw the changelog for the recent update noting improvements for nodes with large numbers of containers, with particular mention of Alpine Linux - that definitely got me interested, but I think I still have a steep learning curve there before I'm able to get a fully operational container under...
  8.

    Scaling past 1350 Containers seccomp errors & vmap allocation failure

    Thanks for the reply - yeah I wasn't sure of the syntax, so I had added vmalloc=32768M, and I believe I've had a full retest with that parameter in, but I'm going to have to retry to confirm if it makes a difference. I was testing again last night with Debian containers instead, and whilst I got...
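    For reference, the `vmalloc=` boot parameter goes on the kernel command line; a sketch assuming a GRUB-based boot (the `quiet` flag is just the Debian default, not from the thread):

    ```
    # /etc/default/grub  -- append the parameter to the kernel cmdline
    GRUB_CMDLINE_LINUX_DEFAULT="quiet vmalloc=32768M"
    # Then: update-grub && reboot
    # Verify after boot with: cat /proc/cmdline
    ```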
  9.

    Scaling past 1350 Containers seccomp errors & vmap allocation failure

    I have also posted on the linuxcontainers site: https://discuss.linuxcontainers.org/t/scaling-past-1350-containers-seccomp-errors/8165 But no feedback yet.
  10.

    Scaling past 1350 Containers seccomp errors & vmap allocation failure

    Hello, I have been slowly but surely trying to scale one of my nodes & attempting to find the limits of the hardware in order to settle on a good level to load the server at long term. I have encountered various constraints along the way and have recently got stuck at 1350 containers of the...
  11.

    [SOLVED] Can't scale LXC CTs past 670, they fail to start.

    Hi, apologies for not getting back to you on this thread. I brought a new host online using new architecture, and it presented multiple new issues that took a long time to work out & resolve, some of which were solved by the new 5.4 kernel. In regards to the issue I was encountering, it has...
  12.

    Linux Kernel 5.4 for Proxmox VE

    Thanks, this version appears to have cleared my dual AMD EPYC host of segfaults & general protection faults, to much relief.
  13.

    [SOLVED] Can't scale LXC CTs past 670, they fail to start.

    Hi, thanks very much for getting back to me. I had originally come across the directory issue, but I thought perhaps the sub-directories would be handled - in any case, when I tried deleting them previously I think I didn't realise all the directories that needed to be deleted. That command...
  14.

    [SOLVED] Can't scale LXC CTs past 670, they fail to start.

    Hello everyone, my first post here, so please go slow :) I have a single Proxmox host/node that is running approximately 670 containers just fine, but it struggles to start any more - and when at this max number of containers running, I slowly notice the odd container losing network...
