[SOLVED] Container cannot be started, Disk cannot be mounted

jahknem

New Member
Dec 2, 2019
Hi! I'm having an issue with my Proxmox 6.0-7 install.
Initially I tried to mount an external NTFS hard drive (mount /dev/sdg1 /mnt). I then tried to unmount it, as I had forgotten to install ntfs-3g beforehand. The unmount hung my console, so I restarted the server. Since the restart I am no longer able to start my containers.
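For reference, a sketch of how the NTFS mount could be done cleanly next time (same device as above; the /mnt/usb subdirectory is just an example, and mounting to a dedicated subdirectory instead of /mnt itself avoids interfering with anything else under /mnt):

```shell
# Install the userspace NTFS driver first (Debian/Proxmox):
apt update && apt install -y ntfs-3g

# Mount to a dedicated directory rather than /mnt itself:
mkdir -p /mnt/usb
mount -t ntfs-3g /dev/sdg1 /mnt/usb

# ...and detach cleanly when done:
umount /mnt/usb
```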

When trying to start a container...
Code:
root@pve:~# systemctl start pve-container@115.service
Job for pve-container@115.service failed because the control process exited with error code.
See "systemctl status pve-container@115.service" and "journalctl -xe" for details.
... I get this:
Code:
root@pve:~# systemctl status pve-container@115.service
● pve-container@115.service - PVE LXC Container: 115
   Loaded: loaded (/lib/systemd/system/pve-container@.service; static; vendor preset: enabled)
   Active: failed (Result: exit-code) since Mon 2019-12-02 10:31:39 CET; 53s ago
     Docs: man:lxc-start
           man:lxc
           man:pct
  Process: 8289 ExecStart=/usr/bin/lxc-start -n 115 (code=exited, status=1/FAILURE)

Dec 02 10:31:39 pve systemd[1]: Starting PVE LXC Container: 115...
Dec 02 10:31:39 pve lxc-start[8289]: lxc-start: 115: lxccontainer.c: wait_on_daemonized_start: 856 No such file or directory - Failed to receive
Dec 02 10:31:39 pve lxc-start[8289]: lxc-start: 115: tools/lxc_start.c: main: 330 The container failed to start
Dec 02 10:31:39 pve lxc-start[8289]: lxc-start: 115: tools/lxc_start.c: main: 333 To get more details, run the container in foreground mode
Dec 02 10:31:39 pve lxc-start[8289]: lxc-start: 115: tools/lxc_start.c: main: 336 Additional information can be obtained by setting the --logfile
Dec 02 10:31:39 pve systemd[1]: pve-container@115.service: Control process exited, code=exited, status=1/FAILURE
Dec 02 10:31:39 pve systemd[1]: pve-container@115.service: Failed with result 'exit-code'.
Dec 02 10:31:39 pve systemd[1]: Failed to start PVE LXC Container: 115.
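As the log output itself suggests, running the container in the foreground with a debug logfile usually reveals the underlying error. A sketch (the logfile path is just an example; any writable location works):

```shell
# Start container 115 in the foreground with verbose logging:
lxc-start -n 115 -F --logfile /tmp/lxc-115.log --logpriority DEBUG

# Afterwards, look for the actual failure in the log:
grep -i 'error' /tmp/lxc-115.log
```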
I am also not able to mount the containers:
Code:
root@pve:~# pct mount 115
mounting container failed
cannot open directory //RAIDZ_1/subvol-115-disk-0: No such file or directory
However, when looking at my ZFS pool, I can still see the datasets:
Code:
root@pve:~# zfs list
NAME                        USED  AVAIL     REFER  MOUNTPOINT
RAIDZ_1                     544G  20.0T      163K  /RAIDZ_1
RAIDZ_1/subvol-101-disk-0   494G  3.42T      494G  /RAIDZ_1/subvol-101-disk-0
RAIDZ_1/subvol-102-disk-0   667M  7.35G      667M  /RAIDZ_1/subvol-102-disk-0
RAIDZ_1/subvol-103-disk-0   538M  49.5G      538M  /RAIDZ_1/subvol-103-disk-0
RAIDZ_1/subvol-104-disk-0  4.56G  3.44G     4.56G  /RAIDZ_1/subvol-104-disk-0
RAIDZ_1/subvol-105-disk-0   502M  7.51G      502M  /RAIDZ_1/subvol-105-disk-0
RAIDZ_1/subvol-106-disk-0   757M  99.3G      757M  /RAIDZ_1/subvol-106-disk-0
RAIDZ_1/subvol-107-disk-0  9.87G  54.1G     9.87G  /RAIDZ_1/subvol-107-disk-0
RAIDZ_1/subvol-115-disk-0   652M  3.91T      652M  /RAIDZ_1/subvol-115-disk-0
RAIDZ_1/vm-100-disk-0      33.0G  20.0T     6.95G  -
I don't know how mounting my external hard drive at /mnt could have caused this, but it seems to have.
This issue seems similar to the one here: https://forum.proxmox.com/threads/lxc-container-creation-fails-task-error-cannot-open-directory-rpool-no-such-file-or-directory.42775/
However, I'm unsure how to remount the ZFS datasets at the correct location, should they actually be at the wrong one, and I don't want to end up losing my data. Can anyone help?
Thanks, Jan
 

Fabian_E

Proxmox Staff Member
Aug 1, 2019
Hi,
could you share your '/etc/pve/storage.cfg', the configuration of the container, and the output of 'pveversion -v'?
 

jahknem

New Member
The /etc/pve/storage.cfg:
Code:
dir: local
path /var/lib/vz
content backup,vztmpl,iso

lvmthin: local-lvm
thinpool data
vgname pve
content images,rootdir

dir: backup
path /mnt/pve/backup
content vztmpl,rootdir,backup,iso,snippets,images
is_mountpoint 1
nodes pve

zfspool: RAIDZ_1
pool RAIDZ_1
content rootdir,images
nodes pve
And pveversion -v:
Code:
root@pve:~# pveversion -v
proxmox-ve: 6.0-2 (running kernel: 5.0.21-2-pve)
pve-manager: 6.0-7 (running version: 6.0-7/28984024)
pve-kernel-5.0: 6.0-8
pve-kernel-helper: 6.0-8
pve-kernel-5.0.21-2-pve: 5.0.21-6
pve-kernel-5.0.21-1-pve: 5.0.21-2
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.12-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-5
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-9
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-65
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-7
pve-cluster: 6.0-7
pve-container: 3.0-7
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-7
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-3
pve-qemu-kvm: 4.0.0-5
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-9
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve2
And if by the configuration of the container you mean lxc-config -l?
Code:
root@pve:~# lxc-config -l
lxc.default_config
lxc.lxcpath
lxc.bdev.lvm.vg
lxc.bdev.lvm.thin_pool
lxc.bdev.zfs.root
lxc.cgroup.use
lxc.cgroup.pattern
Thanks :)
 

Fabian_E

Proxmox Staff Member
And if by the configuration of the container you mean lxc-config -l?
I meant the configuration for the specific container. The file should be '/etc/pve/lxc/115.conf'.
Could you share that and also run
Code:
pvesm path RAIDZ_1:subvol-115-disk-0
pvesm list RAIDZ_1
Can you access the subvol with
Code:
cd /RAIDZ_1/subvol-115-disk-0
?
 

jahknem

New Member
Output of: cat /etc/pve/lxc/115.conf
Code:
root@pve:~# cat /etc/pve/lxc/115.conf
arch: amd64
cores: 1
hostname: nginx
memory: 512
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.2.1,hwaddr=6E:47:B6:1D:FB:B4,ip=192.168.2.25/24,type=veth
onboot: 1
ostype: debian
rootfs: RAIDZ_1:subvol-115-disk-0,size=4000G
swap: 512
Output of: pvesm path RAIDZ_1:subvol-115-disk-0
Code:
root@pve:~# pvesm path RAIDZ_1:subvol-115-disk-0
/RAIDZ_1/subvol-115-disk-0
Output of: pvesm list RAIDZ_1
Code:
root@pve:~# pvesm list RAIDZ_1
RAIDZ_1:subvol-101-disk-0 subvol 4299090464605 101
RAIDZ_1:subvol-102-disk-0 subvol 8589934592 102
RAIDZ_1:subvol-103-disk-0 subvol 53687091200 103
RAIDZ_1:subvol-104-disk-0 subvol 8589934592 104
RAIDZ_1:subvol-105-disk-0 subvol 8589934592 105
RAIDZ_1:subvol-106-disk-0 subvol 107374182400 106
RAIDZ_1:subvol-107-disk-0 subvol 68719476736 107
RAIDZ_1:subvol-115-disk-0 subvol 4299090464605 115
RAIDZ_1:vm-100-disk-0       raw 34359738368 100
And no, it seems I can't access the subvolume. Is the data lost?
Code:
root@pve:~# cd /RAIDZ_1/subvol-115-disk-0
-bash: cd: /RAIDZ_1/subvol-115-disk-0: No such file or directory
 

Fabian_E

Proxmox Staff Member
Is the subvolume mounted? You can check with
Code:
zfs get mounted,mountpoint RAIDZ_1/subvol-115-disk-0
or to see it for all datasets
Code:
zfs list -o name,mounted,mountpoint
And if it's not mounted, try
Code:
zfs mount RAIDZ_1/subvol-115-disk-0
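If several subvolumes are affected, something along these lines should mount everything under the pool that is currently unmounted (RAIDZ_1 is the pool name from this thread):

```shell
# List datasets under the pool and mount those reported as unmounted:
zfs list -H -o name,mounted -r RAIDZ_1 | awk '$2 == "no" {print $1}' |
while read -r ds; do
    zfs mount "$ds"
done

# Or, more simply, mount every mountable dataset on the host at once:
zfs mount -a
```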
 

jahknem

New Member
It wasn't mounted. I've now mounted all the container subvolumes and the containers start again :) Will this survive a reboot?
Thanks!!
 

Fabian_E

Proxmox Staff Member
Normally, when booting, ZFS will try to mount all datasets with a 'mountpoint' property and the property 'canmount' set to 'on'. See 'man zfs' for more information.
You can check whether the properties are set with
Code:
zfs list -o name,canmount,mountpoint
and for the systemd service:
Code:
systemctl status zfs-mount.service
It should be enabled by default.
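To double-check, something like this should confirm the datasets will come back after a reboot (zfs-mount.service is the standard ZFS systemd unit on Debian/Proxmox; the dataset name is the one from this thread):

```shell
# The mount service should be enabled (it is by default):
systemctl is-enabled zfs-mount.service

# canmount must be 'on' for the container subvolume to auto-mount:
val=$(zfs get -H -o value canmount RAIDZ_1/subvol-115-disk-0)
[ "$val" = "on" ] && echo "will auto-mount at boot"

# If a dataset had canmount=off, turning it back on re-enables auto-mounting:
# zfs set canmount=on RAIDZ_1/subvol-115-disk-0
```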
 
