LXC container creation fails - TASK ERROR: cannot open directory //rpool: No such file or directory

Mar 19, 2018
I've had some issues with my Proxmox 5.1-41 install.

I had issues with ZFS where things weren't mounting, and the WebGUI wasn't working - which turned out to be a badly configured /etc/hosts file.

Once this was fixed, things worked, however now I can't get containers working.

During my fiddling to fix the issue, I think I must have stuffed up something with ZFS and how/where it mounts.

Now when I try to create a new container, the last 'Confirm' screen shows the following error:

Code:
mounting container failed
TASK ERROR: cannot open directory //rpool: No such file or directory

Here is the output of zfs list:
Code:
NAME                           USED  AVAIL  REFER  MOUNTPOINT
rpool                         83.3G  2.66T    96K  /
rpool/ROOT                    4.33G  2.66T    96K  /ROOT
rpool/ROOT/pve-1              4.33G  2.66T  4.33G  /
rpool/data                    70.5G  2.66T    96K  /data
rpool/data/subvol-102-disk-1  1.70G  48.3G  1.70G  /data/subvol-102-disk-1
rpool/data/vm-100-disk-1      1.14G  2.66T  1.14G  -
rpool/data/vm-101-disk-1      2.21G  2.66T  2.21G  -
rpool/data/vm-101-disk-2      65.4G  2.66T  65.4G  -
rpool/swap                    8.50G  2.67T    56K  -

pvesm status gives:
Code:
Name             Type     Status           Total            Used       Available        %
local             dir     active      2857919744         4542720      2853377024    0.16%
local-zfs     zfspool     active      2927294988        73917920      2853377068    2.53%

If I manually edit my container's /etc/pve/lxc/102.conf file and change the rootfs to match the MOUNTPOINT, it works fine, i.e.:

/etc/pve/lxc/102.conf below now works:

Code:
arch: amd64
cores: 2
cpulimit: 2
hostname: plex
memory: 8192
nameserver: 8.8.8.8 1.1.1.1
net0: name=eth0,bridge=vmbr0,hwaddr=36:AC:BC:B6:0B:44,ip=dhcp,type=veth
onboot: 1
ostype: archlinux
rootfs: /data/subvol-102-disk-1,size=50G
searchdomain: seb
startup: order=3
swap: 4096
lxc.hook.autodev: /var/lib/lxc/102/tuntap
lxc.cgroup.devices.allow: c 10:200 rwm

But prior to this issue, rootfs was listed as:
Code:
rootfs: local-zfs:subvol-102-disk-1,size=50G
which was working. But now I have to enter the exact mountpoint.

I have a feeling this issue is similar to what is outlined by Greg here: https://forum.proxmox.com/threads/cannot-restart-container.35869/

Is there some way I can re-configure proxmox to create containers with the current mountpoints?

Or otherwise get things back to how they should be?

Otherwise from here I can't create new containers from the Web GUI, and would need to do so by hand and explicitly state the rootfs.

VMs are working just fine, including the creation of new VMs. This issue is just with containers.

Any assistance greatly appreciated.


Seb
 
What does your /etc/pve/storage.cfg look like?
 
Hi dcsapak.

/etc/pve/storage.cfg
Code:
dir: local
    path /var/lib/vz
    content iso,vztmpl,backup

zfspool: local-zfs
    pool rpool/data
    sparse
    content images,rootdir
 
What does 'pvesm path local-zfs:subvol-102-disk-1' show?
 
Could you try to create a container from the command line, e.g. with

pct create <ID> <Template> --rootfs local-zfs:8

?
 
This is what happens:

Code:
root@proxmox:/# pct create 105 /var/lib/vz/template/cache/archlinux-base_20170704-1_amd64.tar.gz --rootfs local-zfs:8
mounting container failed
cannot open directory //rpool: No such file or directory
 
You need to set the mountpoint to the default one, e.g. for 'rpool/data' it needs to be '/rpool/data' (the same goes for all children, but those should be inherited unless you manually messed with them).
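
To see which datasets deviate from that and where the setting comes from, a rough check like this should do (using the dataset names from this thread; the 'source' column shows whether each mountpoint is the default, inherited, or set locally):

Code:
zfs get -r -t filesystem -o name,value,source mountpoint rpool/data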
 
OK. I can't recall messing with storage.cfg, but will change it to /rpool/data.

I assume a reboot would be in order?
 
OK. I can't recall messing with storage.cfg, but will change it to /rpool/data.

I assume a reboot would be in order?

You misunderstood me - your mountpoints in ZFS are set wrong:
Code:
rpool/data                    70.5G  2.66T    96K  /data
rpool/data/subvol-102-disk-1  1.70G  48.3G  1.70G  /data/subvol-102-disk-1

You need to do "zfs set mountpoint=/rpool/data rpool/data".
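
For example, the fix and a quick re-test could look like this (a minimal sketch using the storage and dataset names from this thread, not an official procedure; changing the property remounts the affected datasets):

Code:
zfs set mountpoint=/rpool/data rpool/data
# children such as rpool/data/subvol-102-disk-1 should now inherit /rpool/data/subvol-102-disk-1
zfs list -o name,mountpoint -r rpool/data
pct create 105 /var/lib/vz/template/cache/archlinux-base_20170704-1_amd64.tar.gz --rootfs local-zfs:8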
 
Ahh OK, yes, that appears to have fixed the creation of new containers.

Curiously, reverting the rootfs in my 102.conf back to what it was is now not working - the container fails to start. I'll dig into the logs and see what is going on. lol.

Thanks so much for the assistance, and I much appreciate the patience with a Proxmox newb like me.
 
you misunderstood me - your mountpoints in ZFS are set wrong:
Code:
rpool/data                    70.5G  2.66T    96K  /data
rpool/data/subvol-102-disk-1  1.70G  48.3G  1.70G  /data/subvol-102-disk-1

you need to do "zfs set mountpoint=/rpool/data rpool/data"

Hmm... @fabian, stating that the "ZFS mountpoints" are wrong is itself a wrong statement.

When you create a ZFS dataset, the mountpoint can be anything, and I ran into a similar issue here. Saying that it's not the "expected" mountpoint would be correct, but rpool is the "root pool" of a ZFS-on-root install, so having things mounted from / instead of /rpool is the "correct" way.

I'd rather advise Proxmox to consider using/checking the actual mountpoint (perhaps even forcing it to a more Proxmox-specific directory?), and not ASSUME that it is based on the ZFS pool name.


Code:
root@hvtest:~# zfs list
NAME                                   USED  AVAIL  REFER  MOUNTPOINT
rpool                                 5.62G   109G    96K  /
rpool/Containers                        96K   109G    96K  /rpool/Containers
rpool/ROOT                            3.58G   109G    96K  none
rpool/ROOT/debian                     3.58G   109G  3.30G  /
rpool/home                            1.20G   109G  4.45M  /home
rpool/home/root                        144K   109G   144K  /root
rpool/home/vm-100-disk-0               207M   109G   159M  -
rpool/home/vm-100-disk-1              46.3M   109G  26.0M  -
rpool/home/vm-100-state-afterRestore   394M   109G   394M  -
rpool/home/vm-100-state-niceSetup      514M   109G   514M  -
rpool/home/vm-101-disk-0              61.3M   109G  61.3M  -
rpool/var                              858M   109G    96K  /var
rpool/var/cache                        847M   109G   847M  /var/cache
rpool/var/log                         9.66M   109G  9.66M  /var/log
rpool/var/spool                        676K   109G   676K  /var/spool
rpool/var/tmp                          168K   109G   168K  /var/tmp
root@hvtest:~#

Code:
root@hvtest:~# zfs get -H -o value mountpoint rpool/home
/home
 
I never said that you cannot set the mountpoint to other values - but PVE only supports the default one (which allows us to statically determine the mount path instead of having to ask ZFS for each volume, which is a nice performance shortcut).
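
To illustrate the shortcut (a rough sketch, not PVE's actual code): with the default layout the path follows directly from the dataset name, whereas any other layout means a property lookup per volume:

Code:
# default layout: the path is simply "/<pool>/<dataset>"
DATASET=rpool/data/subvol-102-disk-1
STATIC_PATH="/${DATASET}"
# non-default layout: each volume needs its own lookup
ACTUAL_PATH=$(zfs get -H -o value mountpoint "${DATASET}")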
 
Yeah, nice "performance shortcut", but then you need to consider root-on-ZFS installations and perhaps document that somewhere?
 
Having just run into this issue yet again: the reason for setting "mountpoint=none" on the zpool is so that it doesn't accidentally get "polluted" with files, and so that the directories that are actually needed (like the backups, templates and ISO images) can be mounted specifically, while still keeping a hierarchy of ZFS datasets/volumes where, for example, one hierarchy is compressed and others aren't.
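
For example, a layout along these lines (hypothetical pool and dataset names, just to illustrate the idea) keeps the pool itself unmounted while only the datasets that are actually needed get mounted, each with its own settings:

Code:
zfs set mountpoint=none tank
zfs create -o mountpoint=/srv/backups   -o compression=lz4 tank/backups
zfs create -o mountpoint=/srv/templates -o compression=off tank/templates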
 
Running into this too, my setup was:

storage.cfg:

Code:
dir: local
    path /var/lib/vz
    content vztmpl,snippets,rootdir,backup,iso,images
    maxfiles 1
    shared 0

zfspool: local-vmdata
    pool rpool/pve-data/vmdata
    content rootdir,images
    sparse 1

dir: local-images
    path /pve-data/images
    content vztmpl,iso
    is_mountpoint yes
    mkdir 0

dir: local-backup
    path /pve-data/backups
    content backup
    is_mountpoint yes
    maxfiles 3
    mkdir 0
    shared 0

zfs list:

Code:
rpool/pve-data          2.91G   852G      112K  /pve-data
rpool/pve-data/backups    96K   852G       96K  /pve-data/backups
rpool/pve-data/images   2.91G   852G     2.91G  /pve-data/images
rpool/pve-data/vmdata     96K   852G       96K  /pve-data/vmdata

Rather than having to query ZFS each time, what about allowing storage.cfg to include an option to say where it's mounted? How much of a performance shortcut is this really anyways? This only changes when storage config is reloaded.


Edit: it also doesn't seem like this mountpoint requirement is documented anywhere (e.g. https://pve.proxmox.com/pve-docs/chapter-pvesm.html or https://pve.proxmox.com/wiki/Storage:_ZFS)


Edit 2: digging in a bit more, I am trying to see how this works for a default ZFS install, but it doesn't seem consistent with the code I've found.

proxmox installer sets up storage.cfg here for a zfspool: https://git.proxmox.com/?p=pve-inst...5622a2a609f6b25c415b670267fb3d6ef;hb=HEAD#l53
and I don't see anywhere where it sets the mountpoint to "/$zfspoolname/data" as suggested is needed by this thread.

@fabian can you point me in the right direction?

Edit 3: my rpool install may be a bit non-standard, I don't see anywhere where the installer sets $zfspoolname mountpoint to /, that must be a "me" thing.
 
Rather than having to query ZFS each time, what about allowing storage.cfg to include an option to say where it's mounted? How much of a performance shortcut is this really anyways? This only changes when storage config is reloaded.

See https://bugzilla.proxmox.com/show_bug.cgi?id=2085 ;)

Edit: it also doesn't seem like this mountpoint requirement is documented anywhere (e.g. https://pve.proxmox.com/pve-docs/chapter-pvesm.html or https://pve.proxmox.com/wiki/Storage:_ZFS)

Fair enough.

Edit 2: digging in a bit more, I am trying to see how this works for a default ZFS install, but it doesn't seem consistent with the code I've found.

proxmox installer sets up storage.cfg here for a zfspool: https://git.proxmox.com/?p=pve-inst...5622a2a609f6b25c415b670267fb3d6ef;hb=HEAD#l53
and I don't see anywhere where it sets the mountpoint to "/$zfspoolname/data" as suggested is needed by this thread.

@fabian can you point me in the right direction?

Edit 3: my rpool install may be a bit non-standard, I don't see anywhere where the installer sets $zfspoolname mountpoint to /, that must be a "me" thing.

You don't need to set anything; the default mountpoint for a dataset is /$pool/$dataset. You only need to set the mountpoint on your storage dataset if you have set the mountpoint property on any of the parent datasets to something non-standard.
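
Concretely, for the setup posted above (a sketch; whether to adjust the storage dataset itself or one of its parents depends on how the rest of the hierarchy is used):

Code:
# what PVE expects for storage 'local-vmdata' (pool rpool/pve-data/vmdata): /rpool/pve-data/vmdata
# what ZFS reports in this setup:
zfs get -H -o value mountpoint rpool/pve-data/vmdata    # -> /pve-data/vmdata
# one way to reconcile the two:
zfs set mountpoint=/rpool/pve-data/vmdata rpool/pve-data/vmdata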
 

Thanks for the link. I'll be sure to follow that.

you don't need to set anything, the default mountpoint for a dataset is /$pool/$dataset. you only need to set mountpoint on your storage dataset if you set the mountpoint property on any of the parent datasets to something non-standard.

My zfs on root install is non-standard as far as Proxmox is concerned. However, it is somewhat standard in the general ZoL world, as I used the guides from Debian/Ubuntu heavily as a starting point.

That explains why I had to set the mountpoint manually for my rpool/pve-data dataset.

Thanks :)
 
