cgroups

  1. kvm runs in user.slice cgroup instead of qemu.slice, flooding syslog

    Running a VM via the shell (to pass additional command line parameters) results in the syslog being flooded with qmeventd events like the following: qmeventd[1038]: error parsing vmid for 3371241: no matching qemu.slice cgroup entry qmeventd[1038]: could not get vmid from pid 3371241... (see the qm sketch after this list)
  2. Issues with cgroups (cpuset) and kernel module inside CT

    I am trying to run a HashiCorp Nomad agent inside a CT, which should spawn tasks using its `exec` driver. See: https://www.nomadproject.io/docs/drivers/exec Essentially it isolates the process using chroot/cgroups. This, however, fails in my container. I get 2 errors: 1) The cpuset management... (see the nesting sketch after this list)
  3. Extending LXC metrics (memfree/MEM_FREE)

    We are wondering how we can export MEM_FREE (as reported by free -m under Debian) as an additional metric, so that we can monitor it per LXC container. Background / problem: On an LXC container with a Debian-based installation of NextPVR (*), we had the problem that during recordings, from... (see the memory sketch after this list)
  4. LXC CPU pinning

    Hello, I have a requirement to pin some containers to certain CPU cores permanently. This comes from the specific software we installed in the container and its licensing model: it checks the CPUs and, if it finds any mismatch with the license file, it refuses to use that license file. My machine has... (see the pinning sketch after this list)
  5. cgroups not working inside LXC containers

    I am trying to get a Kubernetes node to run in an LXC container (tried with Ubuntu and Alpine so far), but I can't get it to work due to a problem with the cgroups. I am trying with a privileged LXC container, and I have already configured that container at /etc/pve/lxc/200.conf with... (see the Kubernetes sketch after this list)
  6. [SOLVED] PPP on multiple containers not working

    Hi, I am trying to get PPPoE running on multiple containers. I have it running on my first container without any issues; getting it up and running on the second one gives issues. My initial setup for container 1: loaded the following modules on the host server and added them to /etc/modules... (see the PPP sketch after this list)
  7. Running docker and autodev in a shared env.

    Hello, I would like to know the risk of running an LXC container with the following ruleset in a shared public environment: lxc.apparmor.profile: unconfined lxc.cgroup.devices.allow: a lxc.cap.drop: lxc.cgroup.devices.allow: c 10:200 rwm lxc.hook.autodev: sh -c "modprobe tun; cd ${LXC_ROOTFS_MOUNT}/dev...
  8. how to disable memory cgroups

    I disabled memory cgroups on the kernel command line, because I do not care how much memory my containers use; I want them to be able to use all the available memory. But the containers do not start now that memory cgroups are disabled. I need to remove these 2 lines lxc.cgroup.memory.limit_in_bytes =... (see the memory-cgroup sketch after this list)
  9. LXC cgroups not cleaned up on container shutdown, can't restart

    Hello, one of our hosts is not cleaning up the cgroups of shut-down containers, and this prevents them from starting again. Here is a snippet of the log file I obtained by starting the container with: /usr/bin/lxc-start -F --logfile=/root/135.log --logpriority=DEBUG -n 135 lxc-start 135... (see the cleanup sketch after this list)
  10. LXC / CT limitations, what to keep in mind?

    LXC containers have obvious benefits - I especially like that you can supply the IP, hostname, root fs, and even an SSH key right off the bat, and then get basically bare-metal performance. But I have now run into at least 2 applications which either need modification or something else to get...
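
For thread 1 (qemu.slice flooding): a minimal sketch of the qm-based alternative to launching kvm from a shell, assuming the extra parameters can be carried by the VM's args option. VMID 100 and the -no-reboot flag are placeholders, not taken from the thread.

  # Pass extra KVM parameters through the VM config so 'qm start' still
  # launches the process (and it ends up in qemu.slice, where qmeventd looks for it).
  qm set 100 --args "-no-reboot"
  qm start 100
  # 'qm showcmd 100' prints the full command line PVE would use, for comparison.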
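
For thread 2 (Nomad exec driver in a CT): a sketch of enabling the nesting feature, which gives the container its own cgroup view and is usually the first thing to try for software that creates sub-cgroups. CTID 101 is a placeholder, and this alone may not resolve the cpuset error.

  # Let the container manage its own cgroup subtree (placeholder CTID 101).
  pct set 101 --features nesting=1
  pct stop 101 && pct start 101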
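
For thread 3 (MEM_FREE per container): a rough sketch that reads a container's memory figures straight from its cgroup on the host, assuming a cgroup v2 layout under /sys/fs/cgroup/lxc/<vmid>. CTID 102 is a placeholder, and the result only approximates what free -m reports inside the CT (lxcfs derives its numbers from the same files).

  # Placeholder CTID 102; 'max' in memory.max means no limit is set.
  cg=/sys/fs/cgroup/lxc/102
  limit=$(cat "$cg/memory.max")      # configured memory limit in bytes
  used=$(cat "$cg/memory.current")   # current usage in bytes, cache included
  [ "$limit" != "max" ] && echo "approx. free: $(( (limit - used) / 1024 / 1024 )) MiB"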
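
For thread 4 (CPU pinning): a sketch of pinning a whole CT to fixed host cores by appending a raw LXC key to its Proxmox config, assuming a cgroup v2 host. VMID 103 and the core range are placeholders.

  # Appended to /etc/pve/lxc/103.conf (placeholder VMID); pins the CT to host cores 0-3.
  # On a cgroup v1 host the key is lxc.cgroup.cpuset.cpus instead.
  lxc.cgroup2.cpuset.cpus: 0-3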
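
For thread 5 (Kubernetes in LXC): the thread's own config is truncated above, so the following is only an illustrative set of overrides commonly quoted for privileged CTs, not the poster's actual /etc/pve/lxc/200.conf.

  # Illustrative extra lines in /etc/pve/lxc/200.conf (privileged CT assumed):
  lxc.apparmor.profile: unconfined
  lxc.cgroup2.devices.allow: a
  lxc.cap.drop:
  lxc.mount.auto: proc:rw sys:rw cgroup:rw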
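
For thread 6 (PPPoE in several CTs): a sketch of the per-container part of such a setup, assuming the ppp modules are already loaded on the host via /etc/modules and that /dev/ppp is character device 108:0. The CTID is a placeholder.

  # Extra lines in /etc/pve/lxc/<ctid>.conf for each container that needs PPP:
  lxc.cgroup2.devices.allow: c 108:0 rwm
  lxc.mount.entry: /dev/ppp dev/ppp none bind,create=file 0 0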
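
For thread 8 (disabling memory cgroups): the lxc.cgroup.memory.* lines are generated by pve-container from the CT's memory/swap settings rather than written by hand; the kernel-side step described in the thread looks roughly like the sketch below, assuming a GRUB-based boot.

  # /etc/default/grub - disable the memory cgroup controller at boot:
  GRUB_CMDLINE_LINUX_DEFAULT="quiet cgroup_disable=memory"
  # then apply and reboot:
  update-grub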
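
For thread 9 (stale cgroups): a cautious sketch for removing leftover, empty cgroup directories of a stopped CT by hand. CTID 135 comes from the thread; rmdir only succeeds on empty cgroups, so this cannot touch a running container, but it treats the symptom rather than the cause.

  # Remove leftover cgroup directories for CT 135, deepest first;
  # non-empty directories are skipped because rmdir refuses to delete them.
  find /sys/fs/cgroup -depth -type d \( -path '*/lxc/135' -o -path '*/lxc/135/*' \) \
      -exec rmdir {} + 2>/dev/null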
