I am new to Proxmox and this is my first time posting, but I will do my best to provide as much information as I can. I have been running a plain Debian server for a number of years and decided to give Proxmox a try for its additional features and ease of management, especially clustering. I have 16 LXC containers on my original server, each running a different service, and I have 'successfully' installed Proxmox on a new server and moved all 16 of them over to it. The services were running on the first few containers I moved, so I proceeded with the rest. However, after the services on one of them failed, I went back and found that all of the containers appear to have the same or similar issue; on most of them it just wasn't severe enough to stop the services. Since there are so many containers, I will focus on just one and hope that whatever fixes it carries over to the rest.
The containers start just fine. On many of them the services even appear to be operational (except for a few really important ones), but every container is in a degraded state, with at least three failed units each.
Here is the output of "systemctl --failed" on the container I will be using as my example:
  UNIT                          LOAD   ACTIVE SUB    DESCRIPTION
● sys-kernel-config.mount       loaded failed failed Kernel Configuration File System
● sys-kernel-debug.mount        loaded failed failed Kernel Debug File System
● clamav-daemon.service         loaded failed failed Clam AntiVirus userspace daemon
● systemd-journald-audit.socket loaded failed failed Journal Audit Socket
Aside from ClamAV, those are the units that fail on every container.
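For what it is worth, the per-unit details can be pulled inside the container with the standard systemd tools, for example:
# list the failed units (same command as above)
systemctl --failed
# status and recent journal entries for one of the failing units
systemctl status sys-kernel-config.mount
journalctl -u clamav-daemon.service --no-pager -n 50
I am happy to post any of that output if it helps.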
Method used to move the containers:
cd /var/lib/lxc/{container}/rootfs
tar --exclude=dev --exclude=sys --exclude=proc -czvf /home/root/{container}_template.tar.gz ./
(I included the --exclude options to avoid errors about creating those directories in unprivileged containers, per the instructions I followed. I also tried the archive without the excludes, and tried creating a privileged container instead, but got the same results either way.)
scp /home/root/{container}_template.tar.gz root@{proxmox_IP}:/var/lib/vz/template/cache
Then I used the template to create the container via the GUI (a consolidated sketch of the whole procedure follows).
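Put together for this example container (ID 100, storage CT-Data, values matching the Proxmox config further down), the procedure looks roughly like this; I did the last step through the GUI, so the pct create line is just my best sketch of the equivalent CLI call:
# On the old Debian/LXC host: archive the rootfs without the virtual filesystems
cd /var/lib/lxc/mail/rootfs
tar --exclude=dev --exclude=sys --exclude=proc -czvf /home/root/mail_template.tar.gz ./
# Copy the archive into the Proxmox template cache (the default "local" storage)
scp /home/root/mail_template.tar.gz root@{proxmox_IP}:/var/lib/vz/template/cache
# On the Proxmox host: create the container from that template
pct create 100 local:vztmpl/mail_template.tar.gz \
  --ostype debian --hostname mail --cores 1 --memory 512 --swap 512 \
  --rootfs CT-Data:50 --unprivileged 1 --features nesting=1 \
  --net0 name=eth0,bridge=vmbr0,firewall=1,ip=dhcp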
The container I am using as my example is an email server. I'm including the old and new configs at the end, as well as attaching the output of "lxc-start -n 100 -lDEBUG -o /tmp/lxc-100.log" as a file.
The output of "pveversion -v" is:
proxmox-ve: 7.2-1 (running kernel: 5.15.35-1-pve)
pve-manager: 7.2-4 (running version: 7.2-4/ca9d43cc)
pve-kernel-5.15: 7.2-3
pve-kernel-helper: 7.2-3
pve-kernel-5.15.35-1-pve: 5.15.35-3
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph-fuse: 16.2.7
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-2
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-2
libpve-storage-perl: 7.2-4
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.12-1
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.1-1
proxmox-backup-file-restore: 2.2.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-1
pve-container: 4.2-1
pve-docs: 7.2-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.4-2
pve-ha-manager: 3.3-4
pve-i18n: 2.7-2
pve-qemu-kvm: 6.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.4-pve1
My old system was:
Debian Buster
LXC 3.0.3
Pretty much all of my containers are running Debian Buster.
The configuration for the new container:
arch: amd64
cores: 1
features: nesting=1
hostname: mail
memory: 512
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=A6:2D:A4:AE:F1:BC,ip=dhcp,type=veth
ostype: debian
rootfs: CT-Data:subvol-100-disk-0,size=50G
swap: 512
unprivileged: 1
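For anyone reproducing this, that is the configuration Proxmox stores in /etc/pve/lxc/100.conf; it can be dumped on the host with either of:
cat /etc/pve/lxc/100.conf
pct config 100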
The configuration for the old container:
# Distribution configuration
lxc.include = /usr/share/lxc/config/common.conf
lxc.arch = linux64
# Container specific configuration
lxc.apparmor.profile = generated
lxc.apparmor.allow_nesting = 1
lxc.rootfs.path = dir:/var/lib/lxc/mail/rootfs
lxc.uts.name = mail
# Network configuration
lxc.net.0.type = veth
lxc.net.0.link = br0
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:16:3e:ac:b0:01
# Auto Start with Host
lxc.start.auto = 1