Containers won't start after manual migration

MrAlfabet · Member · Jul 17, 2020
After a recent PVE update, two of my hosts (Host0 and Host1, both root-on-ZFS, non-UEFI) failed to boot: https://forum.proxmox.com/threads/z...ystem-entering-rescue-mode.75122/#post-334973

I couldn't get them up and running, so I reinstalled Host1 (with UEFI enabled this time; the installed version is newer than the old one, since I hadn't updated in a few weeks), plugged in the disks from Host0, copied the containers' ZFS volumes, and copied the .conf files to /etc/pve/nodes/Host1/lxc/. All containers started up beautifully.

After a reboot, though, none of the containers will start. The web GUI returns 'Task OK', but the container doesn't actually start.

If I restore a container from backup, it boots right after the restore. If I then reboot the host, it will not start again.

I found a hiccup, but I'm not sure how to fix it. If I try 'pct mount 103' (where 103 is one of the container IDs), it returns:

Code:
mounting container failed
cannot open directory //rpool/data/subvol-103-disk-1: No such file or directory

I think the problem is the leading //, since the volume actually exists:

Code:
NAME                           USED  AVAIL     REFER  MOUNTPOINT
backuptank                    1.44T  1.19T     27.3M  /backup
backuptank/proxmox             478G  1.19T      340G  /backup/proxmox
backuptank/vm-100-disk-0      3.29G  1.19T     3.29G  -
backuptank/vm-101-disk-0      90.9G  1.19T     90.9G  -
backuptank/vm-104-disk-0      7.69G  1.19T     7.69G  -
backuptank/vm-104-disk-1      7.69G  1.19T     7.69G  -
backuptank/vm-105-disk-0       961M  1.19T      961M  -
datatank                      33.4T  20.3T     33.3T  /data
rpool                          307G   142G      168K  /rpool
rpool/ROOT                     132G   142G       96K  /rpool/ROOT
rpool/ROOT/pve-1               132G   142G      132G  /
rpool/data                     175G   142G      232K  /rpool/data
rpool/data/subvol-102-disk-1   738M  1.28G      738M  /rpool/data/subvol-102-disk-1
rpool/data/subvol-103-disk-1   490M  1.52G      490M  /rpool/data/subvol-103-disk-1
rpool/data/subvol-105-disk-0  1.07G  1019M     1.01G  /rpool/data/subvol-105-disk-0
rpool/data/subvol-106-disk-0  2.68G   142G     2.65G  /rpool/data/subvol-106-disk-0
rpool/data/subvol-109-disk-0  1.12G   142G     1.05G  /rpool/data/subvol-109-disk-0
rpool/data/subvol-110-disk-0  1.67G  8.33G     1.67G  /rpool/data/subvol-110-disk-0
rpool/data/subvol-111-disk-0   531M   142G      527M  /rpool/data/subvol-111-disk-0
rpool/data/subvol-112-disk-1   900M   142G      891M  /rpool/data/subvol-112-disk-1
rpool/data/subvol-113-disk-0   712M  4.30G      712M  /rpool/data/subvol-113-disk-0
rpool/data/subvol-114-disk-0  2.75G  3.25G     2.75G  /rpool/data/subvol-114-disk-0
rpool/data/subvol-115-disk-1  1.06G  1.94G     1.06G  /rpool/data/subvol-115-disk-1
rpool/data/subvol-116-disk-1   787M  1.23G      787M  /rpool/data/subvol-116-disk-1
rpool/data/subvol-117-disk-1  20.8G  9.17G     20.8G  /rpool/data/subvol-117-disk-1
rpool/data/subvol-118-disk-1  2.56G  15.4G     2.56G  /rpool/data/subvol-118-disk-1
rpool/data/subvol-119-disk-1   644M  1.37G      644M  /rpool/data/subvol-119-disk-1
rpool/data/subvol-120-disk-1   724M  1.29G      724M  /rpool/data/subvol-120-disk-1
rpool/data/subvol-123-disk-0  2.38G  7.62G     2.38G  /rpool/data/subvol-123-disk-0
rpool/data/subvol-124-disk-0   601M  3.41G      601M  /rpool/data/subvol-124-disk-0
rpool/data/subvol-125-disk-0   589M   142G      589M  /rpool/data/subvol-125-disk-0
rpool/data/subvol-126-disk-0   482M   142G      482M  /rpool/data/subvol-126-disk-0
rpool/data/subvol-127-disk-0   809M  3.21G      809M  /rpool/data/subvol-127-disk-0
rpool/data/subvol-129-disk-0  1.48G  2.52G     1.48G  /rpool/data/subvol-129-disk-0
rpool/data/subvol-130-disk-0   587M  3.43G      587M  /rpool/data/subvol-130-disk-0
rpool/data/subvol-131-disk-1  1.45G  6.55G     1.45G  /rpool/data/subvol-131-disk-1
rpool/data/subvol-133-disk-0   514M   142G      514M  /rpool/data/subvol-133-disk-0
rpool/data/subvol-134-disk-0   920M   142G      920M  /rpool/data/subvol-134-disk-0
rpool/data/subvol-137-disk-1  1.55G  2.45G     1.55G  /rpool/data/subvol-137-disk-1
rpool/data/subvol-138-disk-0  1.06G   142G     1.06G  /rpool/data/subvol-138-disk-0
rpool/data/subvol-139-disk-0   973M  3.05G      973M  /rpool/data/subvol-139-disk-0
rpool/data/vm-100-disk-0      2.62G   142G     2.62G  -
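Given that listing, one way to tell whether the dataset is really mounted at that path, or whether only a stale stub directory exists there, is to ask ZFS and the kernel directly (a quick sketch using CT 103 as the example; the checks are guarded so the script is harmless on a non-ZFS machine):

```shell
#!/bin/sh
# Distinguish "dataset is mounted" from "an empty directory merely exists
# at the mountpoint" for the container's rootfs dataset.
CTID=103
DATASET="rpool/data/subvol-${CTID}-disk-1"
MNT="/${DATASET}"
echo "dataset:    ${DATASET}"
echo "mountpoint: ${MNT}"

# These need a ZFS host; skip them elsewhere.
if command -v zfs >/dev/null 2>&1; then
    zfs get -H -o value mounted "${DATASET}"   # prints "yes" only if mounted
    findmnt "${MNT}" || echo "nothing is mounted at ${MNT}"
fi
```

If `mounted` says `no` while the directory exists and is non-empty, ZFS will refuse to mount over it, which would match the symptoms above.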

When I try to start the container manually, this happens:

Code:
/usr/bin/lxc-start -F -n 103
lxc-start: 103: conf.c: run_buffer: 323 Script exited with status 2
lxc-start: 103: start.c: lxc_init: 797 Failed to run lxc.hook.pre-start for container "103"
lxc-start: 103: start.c: __lxc_start: 1896 Failed to initialize container "103"
lxc-start: 103: conf.c: run_buffer: 323 Script exited with status 1
lxc-start: 103: start.c: lxc_end: 964 Failed to run lxc.hook.post-stop for container "103"
lxc-start: 103: tools/lxc_start.c: main: 308 The container failed to start
lxc-start: 103: tools/lxc_start.c: main: 314 Additional information can be obtained by setting the --logfile and --logpriority options
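The run_buffer/pre-start errors above are generic and hide the real cause; as the last line suggests, lxc-start can write a debug log (presumably how the attached debug-103.log was produced; the log path here is just an example):

```shell
#!/bin/sh
# Re-run the container in the foreground with full debug logging so the
# pre-start hook's actual failure shows up in the log file.
CTID=103
LOGFILE="/tmp/lxc-${CTID}.log"
echo "would log to: ${LOGFILE}"

# Only meaningful on the Proxmox host itself.
if command -v lxc-start >/dev/null 2>&1; then
    lxc-start -F -n "${CTID}" -l DEBUG -o "${LOGFILE}"
fi
```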

103.conf:
Code:
arch: amd64
cores: 2
hostname: openvpn
memory: 256
net0: name=eth0,bridge=vmbr0,hwaddr=DA:13:3A:0E:04:6B,ip=dhcp,ip6=auto,type=veth
onboot: 1
ostype: debian
rootfs: local-zfs:subvol-103-disk-1,size=2G
startup: order=3
swap: 0

storage.cfg:

Code:
dir: local
        path /var/lib/vz
        content vztmpl,iso,images,backup,snippets
        maxfiles 3
        shared 0

zfspool: local-zfs
        pool rpool/data
        content rootdir,images
        sparse 1

dir: datatank-storage
        path /data/proxmox-storage
        content vztmpl,backup,iso,images
        maxfiles 3
        nodes sjef,bakbeest
        shared 1

dir: backup
        path /backup/proxmox
        content backup
        maxfiles 3
        nodes sjonnie,nuc0
        shared 1

pbs: Historian
        datastore backuptank
        server 192.168.2......
        content backup
        fingerprint ....
        maxfiles 0
        username root@pam

The lxc-start output with the debug option enabled is attached.

Attachments

  • debug-103.log (18.7 KB)
Okay, so apparently Proxmox starts/configures/boots the containers before ZFS is fully mounted. The directory /rpool/data/subvol-103-disk-1 just contained three empty folders (data, dev, proc).

After deleting the directory and using zfs mount to mount the volume, the container runs.
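The workaround just described can be sketched roughly like this (my own reconstruction, not a tested recipe: rmdir only ever removes empty directories, so real data in the stub would stop it; verify before running on a live host):

```shell
#!/bin/sh
# Clear the empty stub directory that blocks ZFS from mounting the
# container's rootfs dataset, then mount it and start the container.
CTID=103
DATASET="rpool/data/subvol-${CTID}-disk-1"
MNT="/${DATASET}"

# Only attempt this on an actual Proxmox/ZFS host.
if command -v pct >/dev/null 2>&1 && command -v zfs >/dev/null 2>&1; then
    pct stop "${CTID}" 2>/dev/null || true
    # Remove the empty data/dev/proc stubs first, then the mountpoint
    # itself; rmdir fails harmlessly on anything non-empty.
    rmdir "${MNT}"/* "${MNT}" 2>/dev/null || true
    zfs mount "${DATASET}"
    pct start "${CTID}"
fi
```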

How do I make Proxmox wait for all ZFS volumes to be mounted before starting containers?
 
Or better yet: which process do I need to kill so these folders aren't auto-created when I try to delete them while Proxmox is running?
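For the first question, a place one could start looking (my assumption, not something confirmed in this thread) is whether the ZFS import/mount units actually ran at boot, since containers are started after those units on a healthy system:

```shell
#!/bin/sh
# Inspect the systemd units that import pools and mount datasets at boot.
UNITS="zfs-import-cache.service zfs-mount.service"
echo "units to check: ${UNITS}"

if command -v systemctl >/dev/null 2>&1; then
    systemctl status ${UNITS} --no-pager || true
fi

# If rpool were missing from the zpool cachefile, regenerating it is a
# commonly suggested fix (hypothetical here; verify before running):
#   zpool set cachefile=/etc/zfs/zpool.cache rpool
```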
 
