LXC container causing very high load and lots of running processes

rofrofrof1

New Member
Jan 6, 2025
We are currently migrating old OpenVZ containers to Proxmox LXC. Today, a few hours after migrating a container, I noticed that the server load had increased significantly and that an extremely large number of processes in state Rs were running in the container. The problem did not resolve itself. Only when I increased the number of cores assigned to the container to 32 did the server load and the number of running processes suddenly drop. I then reduced the number of cores back to the previous value and the load remained within the normal range, so it didn't go up again.
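For reference, this is roughly how I was watching the load and changing the core count (generic example commands, not verbatim what I ran; 101 is the CT ID from the config further down):

Code:
# inside the container: check load and count runnable (R...) processes
uptime
ps -eo stat,comm | awk '$1 ~ /^R/' | wc -l

# on the PVE host: change the core count of the running CT
pct set 101 --cores 32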

It seems to me that once the number of running processes exceeds the number of allocated cores, a chain reaction starts that does not resolve itself.

The container had 16 cores allocated both on the OpenVZ HV and now in Proxmox. I am therefore wondering why the problem did not occur before the migration if the number of cores allocated is apparently insufficient. Do you have any ideas?
 
Do you have any ideas?
Don't think even the magicians could help if you don't provide some minimal info:

1. The process you used to convert/export from OpenVZ to Proxmox LXC.
2. The old HW HV that ran OpenVZ vs. the new HW PVE node.
3. The LXC config, namely the output of: pct config <CTID>
4. Some basic understanding/background of the LXC workload.
 

Please excuse me. I wasn't expecting an answer from the magicians either. I have provided the information that I thought was relevant.

1. The data was copied from one container to another.
2. Same hardware on both HVs.
3.

Code:
pct config 101
arch: amd64
cores: 16
features: nesting=1
hostname: ct1
memory: 131072
net0: name=eth0,bridge=vmbr0,firewall=1,gw=xxx,gw6=xxx,hwaddr=xxx,ip=xxx,ip6=xxx,type=veth
ostype: centos
rootfs: local-zfs:subvol-101-disk-0,size=2000G
swap: 0
unprivileged: 1

4. Basic LAMP
 
This is a known issue... enable at least a small swap for your LXC, 512 MB or so, and the problem will be solved instantly... no need to restart the container.
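On the PVE host that can be done live, e.g. (CT ID 101 taken from the config above; the value is in MB):

Code:
pct set 101 --swap 512
# verify inside the container
free -m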
 
I have no experience with a CentOS LXC, but how did you initially create the LXC before copying over the data? Same CentOS version template?

Have you restarted that LXC since its installation & "settling down" & still observed the same behavior?

Have you played/tested with the amount of RAM/swap?

Same hardware on both HVs.
LXCs use the host kernel, so you would have to compare the running kernels on both servers.
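For example (run on both the old OpenVZ host and the new PVE node and compare):

Code:
uname -r
cat /proc/version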
 
This is a known issue... enable at least a small swap for your LXC, 512 MB or so, and the problem will be solved instantly... no need to restart the container.

Thank you very much. I will try this asap!

I have no experience with a CentOS LXC, but how did you initially create the LXC before copying over the data? Same CentOS version template?

Thank you as well. I created the LXC container in Proxmox and copied the files from the VZ container directly into it, so there was no conversion process or anything like that.
Have you restarted that LXC since its installation & "settling down" & still observed the same behavior?

I haven't done that yet.
Have you played/tested with the amount of RAM/swap?

There was always enough memory. But swap is set to 0, which I will now increase.

LXCs use the host kernel, so you would have to compare the running kernels on both servers.

OpenVZ 7 uses the old 3.10.0 kernel. Of course, the kernel under Debian/Proxmox is much more up to date. What exactly should I compare?
 
I created the LXC container in Proxmox
I understand, but what template did you use? Just copying over the content is probably not enough. You probably need the same distro template (CentOS ver.) & then replace the content with your own.
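A rough sketch of that workflow (the template file name, storage names and new CT ID are just examples, check pveam list for what you actually have):

Code:
# list the CT templates available on the 'local' storage
pveam list local

# create a fresh CT from a matching template, then copy your data into it
pct create 102 local:vztmpl/centos-8-default_20201210_amd64.tar.xz \
  --cores 16 --memory 131072 --rootfs local-zfs:2000 --unprivileged 1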

OpenVZ 7 uses the old 3.10.0 kernel. Of course, the kernel under Debian/Proxmox is much more up to date.
My point exactly. Different kernels will have different outcomes.
 
Of course I used identical templates (AlmaLinux 8 VZ CT to AlmaLinux 8 LXC CT). But I am not sure whether that is relevant to my problem.

For me, it's more a question of how LXC reacts when the container wants to run more processes than it has cores. If you say that you cannot reproduce this problem or are not aware of it, that already helps.
 
For me, it's more a question of how LXC reacts when the container wants to run more processes than it has cores.
It schedules them like it does any other process?
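Easy to see for yourself in a test CT, e.g. with more busy loops than cores (a 16-core CT assumed; stop them again afterwards):

Code:
# spawn 32 busy loops in a 16-core CT
for i in $(seq 1 32); do yes > /dev/null & done
# load climbs towards 32, but everything still gets time-sliced by the scheduler
uptime
# clean up
kill $(jobs -p)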