[SOLVED] LXC Containers not Migrating Properly in Proxmox VE 7.0

Dec 17, 2021
I am having an unusual problem where, when I migrate a container from node 1 to node 2, it breaks. I migrate the container and it shows up in node 2's VM list just fine, but when I try to start it, it doesn't boot. It comes up with this error:

"TASK ERROR: unable to open file '/var/lib/lxc/117/rules.seccomp.tmp.202704' - No such file or directory"

I did a little digging in node 2's shell and found out that /var/lib/lxc/117 doesn't exist, but it does exist on node 1. So the migration moved the container but never moved the container's actual data.
The containers are stored on a NAS which all nodes have access to.

So from what I am seeing, the container migrates but the data for the container doesn't. I don't know how to fix this. I was able to move the container's data over with a simple scp command, but I don't want to do that each time I migrate a container; plus I have HA enabled, and HA can't do that automatically unless I write a script for it or something. Is this a bug?

(To clarify: virtual machines migrate just fine with little to no problem; it is exclusively containers having this issue.)
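For reference, the manual workaround I used can be sketched roughly like this (the container ID is the one from the error; the target node name is an example, and the command is only printed here so it can be reviewed before running):

```shell
# Stopgap only: PVE should normally recreate /var/lib/lxc/<CTID> itself
# when the container starts; copying it by hand is just a workaround.
CTID=117            # container ID from the error message
TARGET=proxmox-2    # example target node name, adjust for your cluster
# Print the copy command for review instead of executing it directly:
echo scp -r "/var/lib/lxc/${CTID}" "root@${TARGET}:/var/lib/lxc/"
```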
 
hi,

I did a little digging in node 2's shell and found out that /var/lib/lxc/117 doesn't exist. But the /var/lib/lxc/117 does exist in node 1.
does /var/lib/lxc exist on node 2?

could you post the output from pveversion -v from both nodes?

does this issue happen only with that container? or is it reproducible with other containers as well?
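a quick way to answer the first question on each node might be something like this (the path is the one from the error message above):

```shell
# Check whether the LXC runtime directory exists on this node.
DIR=/var/lib/lxc
if [ -d "$DIR" ]; then
    echo "$DIR exists"
else
    echo "$DIR is missing"
fi
```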
 
does /var/lib/lxc exist on node 2?

could you post the output from pveversion -v from both nodes?

does this issue happen only with that container? or is it reproducible with other containers as well?
It's reproducible with all containers in the cluster; no VMs have this issue. Some containers are on local storage, but we have a few on shared storage and they have this issue too (unsure whether local or shared storage would matter in this).
And /var/lib/lxc does exist on all 3 nodes. (I should mention I have three nodes and have tested the problem on all 3; it does the same thing on each.)

Here is each node's info. I can record what's happening, upload it to YouTube or somewhere, and leave the link so you or someone can actually watch what happens, if you'd like. It might be easier for explaining purposes.

Code:
root@proxmox-3:~# pveversion -v
proxmox-ve: 7.0-2 (running kernel: 5.11.22-1-pve)
pve-manager: 7.0-8 (running version: 7.0-8/b1dbf562)
pve-kernel-5.11: 7.0-3
pve-kernel-helper: 7.0-3
pve-kernel-5.11.22-1-pve: 5.11.22-2
ceph-fuse: 15.2.13-pve1
corosync: 3.1.2-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.0.0-1+pve5
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.21-pve1
libproxmox-acme-perl: 1.1.1
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-4
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-7
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-2
lxcfs: 4.0.8-pve1
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.1-1
proxmox-backup-file-restore: 2.0.1-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.2-4
pve-cluster: 7.0-3
pve-container: 4.0-5
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-2
pve-firmware: 3.2-4
pve-ha-manager: 3.3-1
pve-i18n: 2.4-1
pve-qemu-kvm: 6.0.0-2
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-7
smartmontools: 7.2-1
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.4-pve1

Code:
root@proxmox-2:~# pveversion -v
proxmox-ve: 7.0-2 (running kernel: 5.11.22-1-pve)
pve-manager: 7.0-8 (running version: 7.0-8/b1dbf562)
pve-kernel-5.11: 7.0-3
pve-kernel-helper: 7.0-3
pve-kernel-5.11.22-1-pve: 5.11.22-2
ceph-fuse: 15.2.13-pve1
corosync: 3.1.2-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.0.0-1+pve5
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.21-pve1
libproxmox-acme-perl: 1.1.1
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-4
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-7
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-2
lxcfs: 4.0.8-pve1
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.1-1
proxmox-backup-file-restore: 2.0.1-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.2-4
pve-cluster: 7.0-3
pve-container: 4.0-5
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-2
pve-firmware: 3.2-4
pve-ha-manager: 3.3-1
pve-i18n: 2.4-1
pve-qemu-kvm: 6.0.0-2
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-7
smartmontools: 7.2-1
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.4-pve1

Code:
root@proxmox-1:~# pveversion -v
proxmox-ve: 7.0-2 (running kernel: 5.11.22-1-pve)
pve-manager: 7.0-8 (running version: 7.0-8/b1dbf562)
pve-kernel-5.11: 7.0-3
pve-kernel-helper: 7.0-3
pve-kernel-5.11.22-1-pve: 5.11.22-2
ceph-fuse: 15.2.13-pve1
corosync: 3.1.2-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.0.0-1+pve5
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.21-pve1
libproxmox-acme-perl: 1.1.1
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-4
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-7
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-2
lxcfs: 4.0.8-pve1
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.1-1
proxmox-backup-file-restore: 2.0.1-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.2-4
pve-cluster: 7.0-3
pve-container: 4.0-5
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-2
pve-firmware: 3.2-4
pve-ha-manager: 3.3-1
pve-i18n: 2.4-1
pve-qemu-kvm: 6.0.0-2
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-7
smartmontools: 7.2-1
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.4-pve1
 
thanks for the outputs :)

the full error logs from the migration should suffice


could you also try to upgrade your packages to the latest available versions and see if the problem persists?
I updated all packages and the problem still persists. Are there any specific logs you want to see? (Screenshot attached: Migration failure.PNG)
 

Attachments

  • Unable to open file var lib lxc 101.PNG
I fixed the problem :)
After updating the Proxmox packages, I noticed I was still having issues, so I entirely upgraded my system from Proxmox VE 7.0 to 7.2 from the No-Subscription repo, and that fixed the problem! Since I don't use this cluster in production, I'm not too worried about having the enterprise repos yet.
Thank you for your help though, I appreciate it! I should have updated a lot sooner!
 
great!

yeah, you should always upgrade all the available packages unless you have a good reason not to.

for the future, always do the following (with the correct repositories configured [0]):
Code:
apt update
apt dist-upgrade
reboot # if there was a kernel upgrade

you can mark the thread as [SOLVED] ;)

[0]: https://pve.proxmox.com/wiki/Package_Repositories#sysadmin_no_subscription_repo
 
Hello,
My container does not even start to migrate:
ERROR: migration aborted (duration 00:00:00): storage 'ssdpool2' is not available on node 'proxmox1'
And I have the latest packages:
proxmox-ve: 7.2-1 (running kernel: 5.15.53-1-pve)
pve-manager: 7.2-7 (running version: 7.2-7/d0dd0e85)
pve-kernel-5.15: 7.2-10

Why does it not migrate to another storage? Why does it keep the old storage config?
Why does it not copy over the offline container?
Why does it not even ask which storage to copy to?
How do I enable migration on the cluster?
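That error usually means the storage definition in /etc/pve/storage.cfg restricts 'ssdpool2' to certain nodes, so 'proxmox1' cannot see it. A rough sketch of how one might check and widen it (the node list is an assumption for your cluster, and the commands are only echoed here for review rather than executed):

```shell
STORAGE=ssdpool2
# Show storages and their status as seen from this node:
echo pvesm status
# If the pool genuinely exists on every node, allow it cluster-wide.
# (Replace the node list with your actual node names; echoed for review.)
echo pvesm set "$STORAGE" --nodes proxmox1,proxmox2,proxmox3
```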
 
@promok please keep your questions to a single thread.
 