Issue moving containers from LXC to Proxmox

guywhotypeslow

New Member
Jun 23, 2022
I am new to Proxmox and this is my first time posting, but I will do my best to provide as much information as I can. I have been running a plain Debian server for a number of years, but decided to give Proxmox a try for its additional features and ease of management, especially clustering. I have 16 LXC containers on my original server, each running a different service.

I have 'successfully' installed Proxmox on a new server and moved all 16 containers over to it. The services were running on the first few containers I moved, so I proceeded with the rest. However, after the services on one of them failed, I went back and found that all the containers appear to have the same or a similar issue; it just wasn't severe enough on most of them to stop the services. Given there are so many containers, I will focus on just one and hope that fixing it carries over to the rest.

The containers start just fine. On many of them the services even appear to be operational (except for a few really important ones), but all the containers are degraded: every container reports at least three failed units.

Here is the output of "systemctl --failed" on the container I will be using as my example:

UNIT                            LOAD   ACTIVE SUB    DESCRIPTION
● sys-kernel-config.mount       loaded failed failed Kernel Configuration File System
● sys-kernel-debug.mount        loaded failed failed Kernel Debug File System
● clamav-daemon.service         loaded failed failed Clam AntiVirus userspace daemon
● systemd-journald-audit.socket loaded failed failed Journal Audit Socket

Apart from ClamAV, those are the units that fail on every container.
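To compare failures across all 16 containers, it helps to reduce each container's "systemctl --failed" output to bare unit names so the lists diff cleanly. A throwaway sketch over a saved copy of the output above (the file path is just an example):

```shell
# Paste of `systemctl --failed` saved from one container (example path)
cat > /tmp/failed-100.txt <<'EOF'
● sys-kernel-config.mount loaded failed failed Kernel Configuration File System
● sys-kernel-debug.mount loaded failed failed Kernel Debug File System
● clamav-daemon.service loaded failed failed Clam AntiVirus userspace daemon
● systemd-journald-audit.socket loaded failed failed Journal Audit Socket
EOF

# The second field is the unit name; sorted lists are easy to diff per container
awk '{print $2}' /tmp/failed-100.txt | sort
```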


Method used to move the containers:

1. On the old host, archive the container's rootfs:
   cd /var/lib/lxc/{container}/rootfs
   tar --exclude=dev --exclude=sys --exclude=proc -czvf /home/root/{container}_template.tar.gz ./
   (I included the --exclude options to avoid errors about creating those directories in unprivileged containers, per the instructions I followed. I also tried it without them, creating a privileged container instead, but got the same results.)
2. Copy the archive to the Proxmox host's template cache:
   scp /home/root/{container}_template.tar.gz root@{proxmox_IP}:/var/lib/vz/template/cache
3. Create the container from the template via the GUI.
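For what it's worth, the --exclude behaviour can be sanity-checked on a throwaway directory before archiving a real rootfs. A minimal sketch (all paths are just examples):

```shell
# Build a tiny fake rootfs containing the directories the real tar command excludes
mkdir -p /tmp/ctdemo/rootfs/etc /tmp/ctdemo/rootfs/dev /tmp/ctdemo/rootfs/sys /tmp/ctdemo/rootfs/proc
echo mail > /tmp/ctdemo/rootfs/etc/hostname

# Same invocation shape as the real command above
cd /tmp/ctdemo/rootfs
tar --exclude=dev --exclude=sys --exclude=proc -czf /tmp/ctdemo/demo_template.tar.gz ./

# etc/ should be listed; dev/, sys/ and proc/ should not appear at all
tar -tzf /tmp/ctdemo/demo_template.tar.gz
```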

The container I am using as my example is an email server. I'm including the configs for the old and the new containers at the end, and attaching the output of "lxc-start -n 100 -lDEBUG -o /tmp/lxc-100.log" as a file.

The output of "pveversion -v" is:

proxmox-ve: 7.2-1 (running kernel: 5.15.35-1-pve)
pve-manager: 7.2-4 (running version: 7.2-4/ca9d43cc)
pve-kernel-5.15: 7.2-3
pve-kernel-helper: 7.2-3
pve-kernel-5.15.35-1-pve: 5.15.35-3
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph-fuse: 16.2.7
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-2
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-2
libpve-storage-perl: 7.2-4
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.12-1
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.1-1
proxmox-backup-file-restore: 2.2.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-1
pve-container: 4.2-1
pve-docs: 7.2-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.4-2
pve-ha-manager: 3.3-4
pve-i18n: 2.7-2
pve-qemu-kvm: 6.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.4-pve1


My old system was:
Debian Buster
LXC 3.0.3

Pretty much all of my containers are running Debian Buster.

The configuration for the new container:

arch: amd64
cores: 1
features: nesting=1
hostname: mail
memory: 512
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=A6:2D:A4:AE:F1:BC,ip=dhcp,type=veth
ostype: debian
rootfs: CT-Data:subvol-100-disk-0,size=50G
swap: 512
unprivileged: 1


The configuration for the old container:

# Distribution configuration
lxc.include = /usr/share/lxc/config/common.conf
lxc.arch = linux64

# Container specific configuration
lxc.apparmor.profile = generated
lxc.apparmor.allow_nesting = 1
lxc.rootfs.path = dir:/var/lib/lxc/mail/rootfs
lxc.uts.name = mail

# Network configuration
lxc.net.0.type = veth
lxc.net.0.link = br0
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:16:3e:ac:b0:01

# Auto Start with Host
lxc.start.auto = 1
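
Side note: lxc.start.auto = 1 in the old config has no counterpart in the new Proxmox config above, so as far as I can tell the migrated container won't come back after a host reboot. My understanding is that the Proxmox equivalent is a single line in the CT config (assuming the default boot order is fine):

```
onboot: 1
```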
 

Attachments

  • lxc-100.log
    19.2 KB

guywhotypeslow

Given I have 16 containers, this is effectively make-or-break for whether I use Proxmox. If I have to rebuild all of them because they can't be moved properly, I will just keep doing what has worked for me for years; the extra features will not be worth the effort of a total rebuild.

I am holding out hope, though. Surely I must be missing something: people talk about moving containers from LXC to Proxmox without issue, or I wouldn't have found instructions on how to do it. However, if I don't have an answer within a week or so, I will simply move on, chalk this up to a learning experience, and never look back.
 
