Cannot start "cgroups" service in Alpine LXC

turtletowerz
Mar 18, 2024
I'm currently running an empty Alpine LXC and attempting to start cgroups so that something like Docker can be run, but I keep running into the same issue. This is on Proxmox's Alpine 3.19 template, which uses a new version of OpenRC that defaults to cgroup v2.

Bash:
~: service cgroups start
mount: mounting none on /sys/fs/cgroup failed: Resource busy
sh: write error: Resource busy
sh: write error: Resource busy
sh: write error: Resource busy
sh: write error: Resource busy
sh: write error: Resource busy
sh: write error: Resource busy
sh: write error: Resource busy
sh: write error: Resource busy
Other LXC templates like Debian or Arch do not have this issue; it's exclusive to Alpine.

Config for the LXC:
Code:
arch: amd64
cores: 1
features: nesting=1
hostname: alpine319-docker
memory: 512
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=A2:48:F5:6B:92:9B,ip=dhcp,type=veth
ostype: alpine
rootfs: data:503/base-503-disk-0.raw,size=4G
swap: 0
template: 1
unprivileged: 1

How would I go about solving this since it's an Alpine-only issue? I understand Docker inside an LXC is not the preferred way to run it, but it's the easiest setup for my use-case.
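For what it's worth, here's the shape of the check I think is relevant. The mount line below is a hypothetical sample, not copied from my CT; inside the container you'd just run grep cgroup /proc/mounts:

```shell
# Sketch of the diagnosis: check whether /sys/fs/cgroup is already
# mounted before OpenRC's cgroups service tries to mount it. The
# sample line stands in for a real line from /proc/mounts.
mounts_sample='none /sys/fs/cgroup cgroup2 rw,nosuid,nodev,noexec,relatime 0 0'
case "$mounts_sample" in
  *' /sys/fs/cgroup cgroup2 '*) status='cgroup2 already mounted' ;;
  *)                            status='no cgroup2 mount found'  ;;
esac
echo "$status"
```

If that mount already exists, the "Resource busy" errors would just be OpenRC trying to mount over it.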
 
How can it be "easiest" if it doesn't work? Alpine in a VM is very small...and you'd be done by now.
It's "easiest" because Docker runs just fine, and I can have an Alpine container with Docker up in about a minute, compared to Debian, which takes much longer just to update packages. The only issue is that docker stats does not report CPU or memory usage because of this problem.
 
I did not recommend Debian. I would use Alpine for this as well, but in a VM.

I find it hilarious that people acknowledge that they are ignoring best practices and recommendations from the product vendor, and then are amazed when there are problems. Not just you, it seems to be a trend around here. Just listen to the wails about Docker-in-CT problems after every upgrade.

Oh well.

ETA: BTW, nobody says you must only run one docker per VM. Put a whole bunch of them in there! Now your overhead per docker can be as low as you want it to be.
 
I find it hilarious that people acknowledge that they are ignoring best practices and recommendations from the product vendor, and then are amazed when there are problems. Not just you, it seems to be a trend around here. Just listen to the wails about Docker-in-CT problems after every upgrade.

Oh well.

ETA: BTW, nobody says you must only run one docker per VM. Put a whole bunch of them in there! Now your overhead per docker can be as low as you want it to be.
Sure, I don't need to, but it's my preferred way of doing it. I'm not blatantly ignoring best practices; I'm well aware this is the least "recommended" way to do things, but it's what I prefer. I was hoping to get some insight into the technical reason for this (why it works on Debian but not Alpine), not to complain about its existence. If the technical reason is "because of xyz, Alpine cannot support cgroups in containers", then I would move to a VM or use a Debian container, but since I cannot find any information on why this is an issue, I decided to bring it up.
 
Well, you are ignoring recommendations. You think it is a good idea, but you are ignoring them.

Just for fun I set up an Alpine 3.19 VM and CT to play with this, and I get the same error in the CT. It works fine in the VM. I think the reason is that /sys/fs/cgroup is already mounted in the CT. It exists even if I don't run the command to start it. Probably PVE is configuring that as part of the template setup.

Maybe there's a workaround. Or maybe you don't need to do anything with OpenRC and it will "just work". Have you tried that?
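If you want to poke at OpenRC itself, one knob that might matter is in /etc/rc.conf. This is an untested guess on my part: rc_cgroup_mode is a real OpenRC setting, but whether it helps in this CT is an assumption.

```shell
# /etc/rc.conf fragment (hypothetical workaround, untested here):
# "unified" tells OpenRC to use pure cgroup v2, which should match
# the hierarchy the Proxmox host already mounts at /sys/fs/cgroup.
rc_cgroup_mode="unified"
```

The other values are "legacy" (v1) and "hybrid"; a mismatch between OpenRC's mode and what the host pre-mounted seems like a plausible source of the mount errors.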

Rootless Docker is only supported with systemd anyway. There must be a reason for that. I haven't looked into it in detail, but I would bet $5 US there is behind-the-scenes fiddling going on with cgroups and systemd to make nesting work. Reading the systemd mailing list might be enlightening, but I don't feel like doing that today.

And that, my friend, is the reason why running Docker in LXC is not recommended. These things seem to change with every upgrade. They fix one or another bug and that breaks one or another workaround. It happens every single time PVE upgrades to a new major version and sometimes on minor ones. Containers inside of containers continues to be a finicky business.

A container is NOT a VM. It is a chroot on steroids.
 
I've also been looking for a solution to this issue for over a year now, and the dismissive attitude leads to little discussion on it, unfortunately. I just have to point out that saying "it's not best practice, so don't even investigate the why" is not super helpful in this type of situation, especially when most people are just parroting something they heard and have zero explanation for it. Of course, warning people of the possible dangers of doing it this way is not wrong and should always be pointed out. But offering no concrete reasons why, and then shutting down anyone who chooses to investigate why something that should work doesn't, just because "it's not best practice", can't be good for progress in my mind.

FYI, I've been running many Debian 12 LXC containers across a 4-node PVE cluster for over a year now, with most of them running Docker containers inside. I've done full upgrades and opted into early kernels, including the 6.8 kernel this week, and have had no issues. Note: I had plenty of issues doing this on ZFS in the past, at least before version 2.2, and haven't tried again on ZFS since the 2.2 update. You'll know you have issues when you start the Docker service and look at certain output and logs, and it's sometimes silent, meaning it looks like it's going to work until you run certain Docker images. However, on LVM, with Debian 12, I have not had any of those past issues and see no errors in the logs.

Rather than compare an Alpine VM with an Alpine LXC, how about a Debian LXC with an Alpine LXC? Why does Docker work flawlessly (as far as I can tell) on a Debian 12 LXC but not an Alpine LXC? If we can find out what Debian is doing to allow this to work, we could find out how to possibly get it to work on Alpine. I just think it's worth a discussion.

'Just use a VM' is not useful to many people. In my environment I require bind mounts from the host to the guest LXC as well as sharing some PCIe hardware. With a VM I would need NFS mounts, which do not work well for my use case, and I would have to completely pass through the PCIe devices, which also doesn't work in my situation. In addition, although an Alpine VM doesn't use much in resources on its own, the way a VM allocates RAM versus an LXC container greatly reduces how many VMs I could run on a RAM-constrained system compared to LXCs. So it's not like people are just saying they want to run it in LXC to be stubborn and go against the grain.
 
I've also been looking for a solution to this issue for over a year now and the dismissive attitude leads to little discussion on it, unfortunately. I just have to point out that saying it's not best practice so don't even investigate the 'why' is not super helpful in this type of situation (especially when most people are just parroting something they heard and have zero explanation for).
Containers are not remotely virtual machines. They are basically just namespaces. Even today not everything in the Linux kernel is namespace-aware (NFS probably being the most well-known example). And because the kernel is shared with the host there are more restrictions on the system calls that are allowed, which leads to restrictions on what you can do in a container that do not exist for a VM.

You cannot for example load a driver module inside a CT. Sometimes even when you load the driver on the host you can't use it properly in a container because the driver isn't namespace aware (I have spent a lot of time fixing these kinds of issues for the company I work for).
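To make "basically just namespaces" concrete: every process advertises its namespace memberships under /proc/<pid>/ns. This is plain Linux, nothing PVE-specific.

```shell
# Each entry is a symlink naming the namespace type and an inode
# number identifying the namespace instance. Two processes in the
# same container report identical inode numbers here; the host sees
# different numbers for the container's processes.
readlink /proc/self/ns/pid   # e.g. pid:[4026531836]
readlink /proc/self/ns/mnt   # e.g. mnt:[4026531841]
```

That inode number is the whole "boundary": there is no separate kernel or virtual hardware behind it.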

Docker is just another container technology. It too relies on namespaces to separate things. When running Docker under LXC you are nesting namespaces. This will not work right unless the enclosing namespace supports what Docker needs. As you have noticed, this may or may not be true depending on the configuration of the underlying host, the version of Docker, etc.

That's the "why".
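One concrete place the nesting bites: a nested Docker can only use the cgroup controllers the enclosing container delegated to it. This is a sketch with a hypothetical controller list; on a real CT you'd read /sys/fs/cgroup/cgroup.controllers instead.

```shell
# The sample string stands in for the contents of
# /sys/fs/cgroup/cgroup.controllers inside the CT. Note "memory" is
# deliberately absent from this sample, mirroring the empty
# "docker stats" symptom described above.
controllers='cpuset cpu io pids'
for want in cpu memory pids; do
  case " $controllers " in
    *" $want "*) echo "$want: delegated" ;;
    *)           echo "$want: missing, so stats for it come up empty" ;;
  esac
done
```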

Why does docker work flawlessly (as far as I can tell) on a Debian 12 LXC but not an Alpine LXC. If we can find out what Debian is doing to allow this to work we would find out how to possibly get it to work on Alpine. Just think it's worth a discussion.
Debian uses systemd; Alpine does not. The systemd people have put considerable effort into making nested containers work. That is the direction to look.
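A quick probe for that comparison, run inside each CT (an assumption on my part that this is where the difference shows, not a full diagnosis):

```shell
# Report the filesystem type mounted at /sys/fs/cgroup.
# A pure cgroup v2 (unified) setup reports "cgroup2fs"; a v1/hybrid
# layout reports "tmpfs" with per-controller mounts beneath it.
fstype=$(stat -fc %T /sys/fs/cgroup)
echo "$fstype"
```

Comparing that output between a Debian CT and an Alpine CT would at least show whether the two init systems end up with different hierarchies.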

'Just use a VM' is not useful to many people. In my environment I require ...
My situation is special, I need X, Y, and Z. Performance! Small size! Blah, blah, blah. That's all fine, do what you want, but you really shouldn't expect people to help you out when you are going against explicit recommendations.

If you have looked for a year and not found a solution it might be because there isn't one.
 
I appreciate your information on namespaces. I understand there are large differences between VMs and LXC, but finding exactly what differs between a working config and a nonworking one could be fruitful.

My situation is special, I need X, Y, and Z. Performance! Small size! Blah, blah, blah. That's all fine, do what you want, but you really shouldn't expect people to help you out when you are going against explicit recommendations.

If you have looked for a year and not found a solution it might be because there isn't one.

I'm not expecting people to help me out specifically on my projects. However, if you've been around on the internet, or even this forum, you'd see this is a widely sought-after configuration. Many situations need to be tailored to the use case. I just disagree with the overall attitude the community is taking on this. It is anti-inclusive and does not welcome diversity of thought, while punishing those who are inquisitive with a blunt, apathetic 'it's not following best practice, so just give up'. Guess what? Best practices change, in large part because people push limits and innovate to find ways to make things work that previously seemed infeasible. As I said, I was able to get it working fine on Debian 12, so that is my solution for now. However, it would be very cool if we could figure out a way to get it working on Alpine. Hopefully conversations like this can be more constructive in the future.

Keep in mind, I'm not referring to the people who want to do something a particular way because they 'don't want to learn a new technology', etc.; that attitude really is anti-educational.
 
