Cannot create LXC on ZFS

totalimpact

Renowned Member
Dec 12, 2010
I know this is a common issue, but I can't get it going with the typical settings. I can create on LVM, but not on a zvol nor on a ZFS file system. I have added a ZFS storage type on the storage page, as well as a directory storage pointing at the dataset's mountpoint. When I try to move the drive from my LVM to the ZFS dataset I get:

Code:
Warning, had trouble writing out superblocks.
TASK ERROR: command 'mkfs.ext4 -O mmp -E 'root_owner=0:0' /mnt/cts/images/361/vm-361-disk-1.raw' failed: exit code 144

proxmox-ve: 5.2-2 (running kernel: 4.15.18-1-pve)
pve-manager: 5.2-6 (running version: 5.2-6/bcd5f008)
lxc-pve: 3.0.0-3
lxcfs: 3.0.0-1
zfsutils-linux: 0.7.9-pve1~bpo9

My storage config:
Code:
zfspool: VMdata1
        pool VMdata1
        content images,rootdir
        sparse 1
dir: VMdata1-dir
        path /mnt/VMdata1
        content rootdir,iso,vztmpl,backup
        maxfiles 2
        shared 0
dir: CTs
        path /mnt/cts
        content rootdir,vztmpl
        shared 0

ZFS settings:
Code:
zfs get all VMdata1/cts
NAME         PROPERTY              VALUE                  SOURCE
VMdata1/cts  type                  filesystem             -
VMdata1/cts  creation              Tue Aug 14  8:35 2018  -
VMdata1/cts  used                  25K                    -
VMdata1/cts  available             885G                   -
VMdata1/cts  referenced            25K                    -
VMdata1/cts  compressratio         1.00x                  -
VMdata1/cts  mounted               yes                    -
VMdata1/cts  quota                 none                   default
VMdata1/cts  reservation           none                   default
VMdata1/cts  recordsize            128K                   default
VMdata1/cts  mountpoint            /mnt/cts               local
VMdata1/cts  sharenfs              off                    default
VMdata1/cts  checksum              on                     default
VMdata1/cts  compression           off                    default
VMdata1/cts  atime                 on                     local
VMdata1/cts  devices               on                     default
VMdata1/cts  exec                  on                     default
VMdata1/cts  setuid                on                     default
VMdata1/cts  readonly              off                    default
VMdata1/cts  zoned                 off                    default
VMdata1/cts  snapdir               hidden                 default
VMdata1/cts  aclinherit            restricted             default
VMdata1/cts  createtxg             72150                  -
VMdata1/cts  canmount              on                     default
VMdata1/cts  xattr                 on                     default
VMdata1/cts  copies                1                      default
VMdata1/cts  version               5                      -
VMdata1/cts  utf8only              off                    -
VMdata1/cts  normalization         none                   -
VMdata1/cts  casesensitivity       sensitive              -
VMdata1/cts  vscan                 off                    default
VMdata1/cts  nbmand                off                    default
VMdata1/cts  sharesmb              off                    default
VMdata1/cts  refquota              none                   default
VMdata1/cts  refreservation        none                   default
VMdata1/cts  guid                  268919584392915058     -
VMdata1/cts  primarycache          all                    default
VMdata1/cts  secondarycache        all                    default
VMdata1/cts  usedbysnapshots       0B                     -
VMdata1/cts  usedbydataset         25K                    -
VMdata1/cts  usedbychildren        0B                     -
VMdata1/cts  usedbyrefreservation  0B                     -
VMdata1/cts  logbias               latency                default
VMdata1/cts  dedup                 off                    default
VMdata1/cts  mlslabel              none                   default
VMdata1/cts  sync                  standard               default
VMdata1/cts  dnodesize             legacy                 default
VMdata1/cts  refcompressratio      1.00x                  -
VMdata1/cts  written               25K                    -
VMdata1/cts  logicalused           12.5K                  -
VMdata1/cts  logicalreferenced     12.5K                  -
VMdata1/cts  volmode               default                default
VMdata1/cts  filesystem_limit      none                   default
VMdata1/cts  snapshot_limit        none                   default
VMdata1/cts  filesystem_count      none                   default
VMdata1/cts  snapshot_count        none                   default
VMdata1/cts  snapdev               hidden                 default
VMdata1/cts  acltype               posixacl               local
VMdata1/cts  context               none                   default
VMdata1/cts  fscontext             none                   default
VMdata1/cts  defcontext            none                   default
VMdata1/cts  rootcontext           none                   default
VMdata1/cts  relatime              on                     local
VMdata1/cts  redundant_metadata    all                    default
VMdata1/cts  overlay               off                    default
 
"not work on zvol" - so it doesnt work on anything zfs, pool, nor dir. Moving to a zpool (storage type "ZFS", I get this error:

Code:
Task viewer: CT 361 - Move Volume
TASK ERROR: cannot open directory //VMdata1: No such file or directory

- VMdata1 is the destination (moving from LVM) - why does it have the double //? (You can see it's a pool in my storage config above.)

Creating a new CT gives a similar error:

Code:
Task viewer: CT 111 - Create
mounting container failed
TASK ERROR: cannot open directory //VMdata1: No such file or directory

Is one of these settings stopping me, like the ACLs or casesensitivity? I read about these in other threads, but changing them has not helped (a quick check of just those properties is shown after the output below):
Code:
root@pve1:~# zfs get all VMdata1
NAME     PROPERTY              VALUE                  SOURCE
VMdata1  type                  filesystem             -
VMdata1  creation              Fri Aug 10  8:35 2018  -
VMdata1  used                  495G                   -
VMdata1  available             885G                   -
VMdata1  referenced            25K                    -
VMdata1  compressratio         1.00x                  -
VMdata1  mounted               yes                    -
VMdata1  quota                 none                   default
VMdata1  reservation           none                   default
VMdata1  recordsize            128K                   default
VMdata1  mountpoint            /mnt/VMdata1           local
VMdata1  sharenfs              off                    default
VMdata1  checksum              on                     default
VMdata1  compression           on                     local
VMdata1  atime                 on                     default
VMdata1  devices               on                     default
VMdata1  exec                  on                     default
VMdata1  setuid                on                     default
VMdata1  readonly              off                    default
VMdata1  zoned                 off                    default
VMdata1  snapdir               hidden                 default
VMdata1  aclinherit            restricted             default
VMdata1  createtxg             1                      -
VMdata1  canmount              on                     default
VMdata1  xattr                 on                     default
VMdata1  copies                1                      default
VMdata1  version               5                      -
VMdata1  utf8only              off                    -
VMdata1  normalization         none                   -
VMdata1  casesensitivity       sensitive              -
VMdata1  vscan                 off                    default
VMdata1  nbmand                off                    default
VMdata1  sharesmb              off                    default
VMdata1  refquota              none                   default
VMdata1  refreservation        none                   default
VMdata1  guid                  18256766063709111678   -
VMdata1  primarycache          all                    default
VMdata1  secondarycache        all                    default
VMdata1  usedbysnapshots       0B                     -
VMdata1  usedbydataset         25K                    -
VMdata1  usedbychildren        495G                   -
VMdata1  usedbyrefreservation  0B                     -
VMdata1  logbias               latency                default
VMdata1  dedup                 off                    default
VMdata1  mlslabel              none                   default
VMdata1  sync                  standard               default
VMdata1  dnodesize             legacy                 default
VMdata1  refcompressratio      1.00x                  -
VMdata1  written               25K                    -
VMdata1  logicalused           493G                   -
VMdata1  logicalreferenced     12.5K                  -
VMdata1  volmode               default                default
VMdata1  filesystem_limit      none                   default
VMdata1  snapshot_limit        none                   default
VMdata1  filesystem_count      none                   default
VMdata1  snapshot_count        none                   default
VMdata1  snapdev               hidden                 default
VMdata1  acltype               off                    default
VMdata1  context               none                   default
VMdata1  fscontext             none                   default
VMdata1  defcontext            none                   default
VMdata1  rootcontext           none                   default
VMdata1  relatime              off                    default
VMdata1  redundant_metadata    all                    default
VMdata1  overlay               off                    default
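A quick way to compare just the container-relevant properties between the pool and the dataset (assuming VMdata1 and VMdata1/cts):
Code:
# compare the properties that usually matter for containers on the pool vs. the dataset
zfs get acltype,xattr,aclinherit,casesensitivity VMdata1 VMdata1/cts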
 
I was reading this point (which I interpret to mean he is using directory storage), so for giggles I set:
Code:
zfs set mountpoint=/mnt/VMdata1 VMdata1

Same error with double //
 
Hi total,
I have a small server with ZFS and lots of containers that run on ZFS.
I imported the storage into PVE as ZFS storage (the whole pool, on its mount point) and all my CTs run without problems.
If your storage is local you can do the same, otherwise you have to import it via ZFS over iSCSI.
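For reference, doing the same from the CLI looks roughly like this - just a sketch; the storage ID, content types and sparse flag are only examples for a pool named VMdata1:
Code:
# add the whole local pool as a "ZFS" (zfspool) storage entry
pvesm add zfspool VMdata1 --pool VMdata1 --content rootdir,images --sparse 1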
 
What version are you on? I think I have done the same as you describe, but it does not work on my version.
 
Try not setting a mountpoint - PVE relies on the pool having the default mountpoint.
 
I tried that:
Code:
root@pve1:~# zfs set mountpoint=none VMdata1
root@pve1:~# zfs mount VMdata1
cannot mount 'VMdata1': no mountpoint set

So the pool no longer has a mount point and is not listed as a mounted fs, but I still get the same error, with the double // in front of the pool name, when trying to send a CT to the pool.
 
Sorry - I didn't phrase it exactly: the default mountpoint of a ZFS dataset (if you just create the pool and try to use it) is '/<poolname>'.

Please set the mountpoint to '/VMdata1' (and possibly get rid of the dir: storage entries).
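Something like this (a sketch, assuming your pool is VMdata1):
Code:
# set the pool back to the default mountpoint /<poolname> and verify it
zfs set mountpoint=/VMdata1 VMdata1
zfs get -H -o value mountpoint VMdata1   # should print /VMdata1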
 
That works - YEA!!! But that seems contradictory: I am not allowed to use ZFS *directory* storage for CTs, yet the mount *directory* matters??

Maybe consider this a low-priority adjustment in future code... something like a 'zfs get mountpoint $poolname' check in the PVE code prior to sending a CT to it (see the sketch below). It's just easier to stay organized when everything is under /mnt.
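Something along these lines is all I mean - a rough sketch with a hard-coded pool name, not actual PVE code:
Code:
# hypothetical pre-flight check before sending a CT to a zfspool storage
pool=VMdata1
mp=$(zfs get -H -o value mountpoint "$pool")
if [ "$mp" != "/$pool" ]; then
    echo "warning: pool $pool is mounted at '$mp', but PVE expects /$pool" >&2
fi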
 
Glad it worked!

The reason why directory storage on a ZFS dataset does not work for containers is that ZFS does not support O_DIRECT, which mke2fs needs while creating the filesystem on the raw image file.

However, when using the ZFSPool storage plugin, PVE does not create a raw file but a subvolume (a ZFS dataset) for storing the container's data.
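For illustration (the CT ID and disk name below are just examples): after creating a CT on the zfspool storage you will see a new dataset rather than a raw image file:
Code:
# a container rootfs on a zfspool storage shows up as a subvol dataset, not a .raw file
zfs list -r -t filesystem VMdata1
# e.g. VMdata1/subvol-361-disk-0  mounted at /VMdata1/subvol-361-disk-0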
 
