LXC / CT limitations: what to keep in mind?

locusofself

Member
Mar 29, 2016
LXC containers have obvious benefits - I especially like that you can supply the IP, hostname, root fs, and even an SSH key right off the bat, and then get essentially bare-metal performance.

But I've now run into at least two applications which need modification or something else to get running:

FreeIPA wants something with cgroups

FreeSWITCH (VoIP) wants udev.


Are there workarounds, or are there still some apps which really just won't work in a CT/LXC environment?


Has LXD solved some of these problems in their offering?


To be clear, I'm not asking for help solving my specific issues; I just want to know what "gotchas" there still are with CTs, on Proxmox in particular.

Thanks for any info
 
You normally cannot run applications with near-real-time requirements, because they often need more direct hardware access, which is not possible in the default LXC configuration. VoIP software is one example: it needs very precise timing, and some solutions require special kernel modules for that. Obviously that cannot work securely out of the box with LXC. You need to loosen the security of that particular container to get these applications running, but it often works (technically).

I'd suggest moving all services with tighter hardware requirements into a KVM VM.
 
You normally cannot run applications with near-real-time requirements, because they often need more direct hardware access, which is not possible in the default LXC configuration. VoIP software is one example: it needs very precise timing, and some solutions require special kernel modules for that. Obviously that cannot work securely out of the box with LXC. You need to loosen the security of that particular container to get these applications running, but it often works (technically).

I'd suggest moving all services with tighter hardware requirements into a KVM VM.

It's interesting that LXC aims to provide close to bare-metal performance, yet this (real-time performance) really does seem to be an issue, at least from what you and several others have said (including my research on Docker).

It's too bad, too, because I honestly have a legitimate, important use case for containerizing FreeSWITCH or Asterisk, and I'm not sure if I will succeed at this.

If anyone else has experience with VoIP and containers please do let me know if you have any good solutions.
 
It's interesting that LXC aims to provide close to bare-metal performance, yet this (real-time performance) really does seem to be an issue, at least from what you and several others have said (including my research on Docker).

Real time is very difficult and, in Linux, can only be done at the kernel level. I'm not talking about "GUI" stuff, but signal processing. Linux is not even a real-time operating system, which makes "correct" signal processing even harder. It can only be ensured with non-schedulable, non-interruptible code in kernel space, so as not to mess up timing (at sub-microsecond resolution).
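For context on what "real time" means to the scheduler here: Linux exposes the POSIX scheduling policies directly, and the real-time ones are a separate priority band that ordinary processes are not allowed to enter. A quick sketch (plain Python, nothing Proxmox-specific) listing the policies and their priority ranges:

```python
import os

# SCHED_OTHER is the normal time-sharing policy every process starts
# with; SCHED_FIFO and SCHED_RR are the POSIX real-time policies a
# process must be privileged to enter.
for name in ("SCHED_OTHER", "SCHED_FIFO", "SCHED_RR"):
    policy = getattr(os, name)
    lo = os.sched_get_priority_min(policy)
    hi = os.sched_get_priority_max(policy)
    print(f"{name}: priorities {lo}..{hi}")
```

On a stock Linux kernel this reports 0..0 for SCHED_OTHER and 1..99 for the two real-time policies - any SCHED_FIFO task outranks every normal task, which is exactly why containers are not allowed to grab it freely.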

It's too bad, too, because I honestly have a legitimate, important use case for containerizing FreeSWITCH or Asterisk, and I'm not sure if I will succeed at this.

Of all the different things to virtualize, a VoIP application is one of the hardest. You can succeed, but you have to loosen the security of that specific container, so you'll lose the separation, security, and probably the migratability of that container. Those problems can only be solved with a KVM VM, which emulates everything and therefore abstracts it better.
 
posted as "Freeswitch (VoIP) CT/LXC Failed to set SCHED_FIFO scheduler" already...

Hello,

This is the first time I have installed FreeSWITCH in a Proxmox 4.3 CT. After a reboot, I could start FreeSWITCH (as root), but I got two errors:

1- ERROR: Failed to set SCHED_FIFO scheduler (Operation not permitted);
2- ERROR: Could not set nice level.

Googling a little bit, I found someone having the same issues when running FreeSWITCH on Docker, and pointing to this link for a workaround: https://github.com/docker/docker/pull/23430 (adding two additional parameters: --cpu-rt-period and --cpu-rt-runtime).
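To make the first error concrete: what fails is the sched_setscheduler() call itself. A minimal sketch (hypothetical, not FreeSWITCH code) that reproduces the check in an unprivileged environment:

```python
import os

# Ask the kernel to move this process to SCHED_FIFO at the lowest RT
# priority. Without CAP_SYS_NICE (and, on kernels built with
# CONFIG_RT_GROUP_SCHED, a non-zero cpu.rt_runtime_us budget in our
# cgroup) the kernel refuses with EPERM - the same
# "Operation not permitted" FreeSWITCH logs.
param = os.sched_param(os.sched_get_priority_min(os.SCHED_FIFO))
try:
    os.sched_setscheduler(0, os.SCHED_FIFO, param)
    print("SCHED_FIFO granted")
except PermissionError:
    print("SCHED_FIFO refused: Operation not permitted")
```

Run inside a default CT this prints the refusal; on a privileged host it may succeed, which is exactly the gap the workarounds below try to close.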

Is there a similar solution/workaround for Proxmox 4.3?

Thank you,

Victor
 
We have the same issue, and we'd really like to see FreeSWITCH running on Proxmox LXC. It's supposed to be more reliable in terms of time handling than KVM, right? Mainly under heavy loads...
As Victor said, it shouldn't be so hard to implement. Anyone with LXC expertise on how to apply those parameters (--cpu-rt-period and --cpu-rt-runtime) on Proxmox LXC?
 
We have the same issue, and we'd really like to see FreeSWITCH running on Proxmox LXC. It's supposed to be more reliable in terms of time handling than KVM, right? Mainly under heavy loads...
As Victor said, it shouldn't be so hard to implement. Anyone with LXC expertise on how to apply those parameters (--cpu-rt-period and --cpu-rt-runtime) on Proxmox LXC?

Hello Titux,

Of course Proxmox 4 is a great VE for VoIP. Here is my use case and how I work around the --cpu-rt-period and --cpu-rt-runtime issue.

- The Proxmox kernel doesn't enable CONFIG_RT_GROUP_SCHED by default. They may have some reasons for not enabling it by default.

https://www.kernel.org/doc/Documentation/scheduler/sched-rt-group.txt

1. I recompiled the kernel with CONFIG_RT_GROUP_SCHED enabled.

2. At startup (on the Proxmox host), enable a global rt_runtime_us. The only way I found is to start and stop a tiny container at boot, as follows; the /sys/fs/cgroup/cpu/lxc/ pseudo-filesystem entry appears only after the first CT is started...

vim /etc/rc.local
# 103 is a tiny Debian CT, started once so the lxc cgroup directory appears
lxc-start -n 103
lxc-stop -n 103 -k
# grant the lxc cgroup a real-time budget of 475000 us per period
echo 475000 > /sys/fs/cgroup/cpu/lxc/cpu.rt_runtime_us
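For what it's worth, the 475000 written above is microseconds of real-time budget per scheduling period; against the kernel's default cpu.rt_period_us of 1,000,000 µs, that works out to:

```python
# Values taken from the rc.local snippet above; the period is the
# kernel's default cpu.rt_period_us of one second (1,000,000 us).
rt_runtime_us = 475_000
rt_period_us = 1_000_000

share = rt_runtime_us / rt_period_us
print(f"RT tasks in the lxc cgroup may consume up to {share:.1%} of CPU time")
# -> RT tasks in the lxc cgroup may consume up to 47.5% of CPU time
```

Presumably the value was chosen to fit comfortably inside the host-wide default of 950,000 µs (kernel.sched_rt_runtime_us), since a child group's budget has to fit within its parent's.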

3. Assign the cgroup budget in question to the container's config:
vim /usr/share/lxc/config/centos.common.conf
....
# This derives from the global common config
lxc.include = /usr/share/lxc/config/common.conf
lxc.cgroup.cpu.rt_runtime_us = 475000
...
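Note that editing /usr/share/lxc/config/centos.common.conf applies the budget to every CentOS container. If you only want it for specific containers, the same key can presumably go into the individual container config instead (sketch below; 103 is a placeholder CT ID, and raw lxc.* keys in /etc/pve/lxc/<vmid>.conf are an assumption you should verify on your Proxmox version):

```
# /etc/pve/lxc/103.conf  (103 = placeholder container ID)
....
lxc.cgroup.cpu.rt_runtime_us = 475000
```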

My use case involves only one or two CTs (media servers) per physical server (Proxmox host); the rest will be KVM (Proxmox VMs).
(I think most VoIP deployments would be similar.)

Thanks,

Victor
 
Thanks Victor, it was very detailed!
So implementing FreeSWITCH will not be as straightforward as I thought... I have a production Proxmox host but cannot recompile the kernel there.
I have 2 options:
1- Set up a lab, try to recompile the kernel there, and test.
2- Use Docker (it looks like they have it resolved already):
http://sipxcom.org/dockerizing-freeswitch/
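For option 2, the Docker flags from the pull request linked earlier would be applied roughly like this (image name is a placeholder, and note this does not avoid the kernel requirement - those flags also depend on CONFIG_RT_GROUP_SCHED being enabled):

```
# placeholder image name; host kernel still needs CONFIG_RT_GROUP_SCHED
docker run -d \
  --cpu-rt-period=1000000 \
  --cpu-rt-runtime=950000 \
  --ulimit rtprio=99 \
  --cap-add=sys_nice \
  some/freeswitch-image
```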
 
Thanks Victor, it was very detailed!
So implementing FreeSWITCH will not be as straightforward as I thought... I have a production Proxmox host but cannot recompile the kernel there.
I have 2 options:
1- Set up a lab, try to recompile the kernel there, and test.
2- Use Docker (it looks like they have it resolved already):
http://sipxcom.org/dockerizing-freeswitch/

For production, I would strongly suggest going with option 1.
 
