LXC containers on ZFS storage won't start

kroem

Well-Known Member
Jul 12, 2016
I'm having issues running LXC containers on plain ZFS storage. Just a fresh LXC, no modifications.

I tried first on my main ZFS storage mount, with both images and containers enabled, but the container won't start.
Then I added a separate "container" mount on the same zpool, but the LXC still won't start.

Is there a known issue with LXC + ZFS?



root@cat:~# lxc-start -n 106 --logfile /root/zfstest.log
lxc-start: tools/lxc_start.c: main: 344 The container failed to start.
lxc-start: tools/lxc_start.c: main: 346 To get more details, run the container in foreground mode.
lxc-start: tools/lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.
root@cat:~# cat /root/zfstest.log
lxc-start 20161230105004.029 ERROR lxc_conf - conf.c:run_buffer:347 - Script exited with status 1
lxc-start 20161230105004.029 ERROR lxc_start - start.c:lxc_init:465 - failed to run pre-start hooks for container '106'.
lxc-start 20161230105004.029 ERROR lxc_start - start.c:__lxc_start:1313 - failed to initialize the container
lxc-start 20161230105009.034 ERROR lxc_start_ui - tools/lxc_start.c:main:344 - The container failed to start.
lxc-start 20161230105009.034 ERROR lxc_start_ui - tools/lxc_start.c:main:346 - To get more details, run the container in foreground mode.
lxc-start 20161230105009.034 ERROR lxc_start_ui - tools/lxc_start.c:main:348 - Additional information can be obtained by setting the --logfile and --logpriority options.
lxc-start 20161230105446.041 ERROR lxc_conf - conf.c:run_buffer:347 - Script exited with status 1
lxc-start 20161230105446.041 ERROR lxc_start - start.c:lxc_init:465 - failed to run pre-start hooks for container '106'.
lxc-start 20161230105446.041 ERROR lxc_start - start.c:__lxc_start:1313 - failed to initialize the container
lxc-start 20161230105446.041 ERROR lxc_start_ui - tools/lxc_start.c:main:344 - The container failed to start.
lxc-start 20161230105446.041 ERROR lxc_start_ui - tools/lxc_start.c:main:348 - Additional information can be obtained by setting the --logfile and --logpriority options.
lxc-start 20161230105648.554 ERROR lxc_conf - conf.c:run_buffer:347 - Script exited with status 1
lxc-start 20161230105648.554 ERROR lxc_start - start.c:lxc_init:465 - failed to run pre-start hooks for container '106'.
lxc-start 20161230105648.554 ERROR lxc_start - start.c:__lxc_start:1313 - failed to initialize the container
lxc-start 20161230105653.559 ERROR lxc_start_ui - tools/lxc_start.c:main:344 - The container failed to start.
lxc-start 20161230105653.559 ERROR lxc_start_ui - tools/lxc_start.c:main:346 - To get more details, run the container in foreground mode.
lxc-start 20161230105653.559 ERROR lxc_start_ui - tools/lxc_start.c:main:348 - Additional information can be obtained by setting the --logfile and --logpriority options.
lxc-start 20161230114502.639 ERROR lxc_conf - conf.c:run_buffer:347 - Script exited with status 1
lxc-start 20161230114502.639 ERROR lxc_start - start.c:lxc_init:465 - failed to run pre-start hooks for container '106'.
lxc-start 20161230114502.639 ERROR lxc_start - start.c:__lxc_start:1313 - failed to initialize the container
lxc-start 20161230114507.639 ERROR lxc_start_ui - tools/lxc_start.c:main:344 - The container failed to start.
lxc-start 20161230114507.639 ERROR lxc_start_ui - tools/lxc_start.c:main:346 - To get more details, run the container in foreground mode.
lxc-start 20161230114507.639 ERROR lxc_start_ui - tools/lxc_start.c:main:348 - Additional information can be obtained by setting the --logfile and --logpriority options.




root@cat:~# pvesm status
local              dir     1       32670208      1095040     31575168 3.85%
local-zfs          zfspool 1       31575288           96     31575192 0.50%
nfs_vol1_VM        nfs     1    10961676288     23905280  10937771008 0.72%
nfs_vol1_backups   nfs     1    12058414080   1120643072  10937771008 9.79%
zfs_lxc            zfspool 1    12692242596   1754471673  10937770922 14.32%
zfs_vol1           zfspool 1    12692242596   1754471673  10937770922 14.32%
 
What output do you get when you start in foreground:

# lxc-start -n 106 -F

Please can you also post the container configuration (/etc/pve/lxc/106.conf)?
 
What output do you get when you start in foreground:

# lxc-start -n 106 -F

Please can you also post the container configuration (/etc/pve/lxc/106.conf)?
root@cat:~# lxc-start -n 106 -F
unable to open file '/fastboot.tmp.21495' - No such file or directory
error in setup task PVE::LXC::Setup::pre_start_hook
lxc-start: conf.c: run_buffer: 347 Script exited with status 1
lxc-start: start.c: lxc_init: 465 failed to run pre-start hooks for container '106'.
lxc-start: start.c: __lxc_start: 1313 failed to initialize the container
lxc-start: tools/lxc_start.c: main: 344 The container failed to start.
lxc-start: tools/lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.
root@cat:~# cat /etc/pve/lxc/106.conf
arch: amd64
cpulimit: 1
cpuunits: 1024
hostname: lxc-LEECH
memory: 512
net0: name=eth0,bridge=vmbr0,hwaddr=CA:69:6C:5F:0C:67,ip=dhcp,ip6=dhcp,type=veth
ostype: debian
rootfs: zfs_lxc:subvol-106-disk-1,size=15G
swap: 512

Thanks for your prompt response!
 
And the storage zfs_lxc is a zfspool type storage? The error message suggests that the subvol might be read-only - normally this should be mounted with read/write permissions.
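
If in doubt, you can check the relevant ZFS properties directly, roughly like this (the dataset path below is only a placeholder and has to match your actual pool layout):
Code:
# zfs get readonly,mounted,mountpoint yourpool/yoursubvolume/subvol-106-disk-1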
 
And the storage zfs_lxc is a zfspool type storage? The error message suggests that the subvol might be read-only - normally this should be mounted with read/write permissions.
Yes, it's a zpool with ZFS storage added via the PVE GUI. I don't really know how to choose read-only or read-write, but I have other (QEMU) VMs on the same ZFS storage and they work.


(Actually, I tested on my "main" storage zfs_vol1 first (where I have my other VMs), but the config is the same on zfs_lxc.)
 
Actually, I tested on my "main" storage zfs_vol1 first (where I have my other VMs), but the config is the same on zfs_lxc

And it points to a ZFS sub-volume (not the ZFS root volume which is mounted on a non-standard mount point)?
 
And it points to a ZFS sub-volume (not the ZFS root volume which is mounted on a non-standard mount point)?
Yes, the configuration and setup looked exactly like they do for zfs_lxc above.

Still haven't found any ro/rw settings...
 
And you use default zfs mount points, i.e. the subvol is mounted at

/yourpool/yoursubvolume/subvol-106-disk-1/
Yes, as far as I can see. I created a new LXC now, 200, just to test and to make sure there was no leftover information from the old ID.

root@cat:~# ls -lv /vol1/VM/subvol-200-disk-1/
total 112
drwxr-xr-x  2 root root 108 Apr 20  2016 bin
drwxr-xr-x  2 root root   2 Mar 14  2016 boot
drwxr-xr-x  2 root root   2 Dec 10  2008 dev
drwxr-xr-x 67 root root 149 Dec 30 16:07 etc
drwxr-xr-x  2 root root   2 Mar 14  2016 home
drwxr-xr-x 12 root root  23 Apr 20  2016 lib
drwxr-xr-x  2 root root   3 Apr 20  2016 lib64
drwxr-xr-x  2 root root   2 Apr 20  2016 media
drwxr-xr-x  2 root root   2 Apr 20  2016 mnt
drwxr-xr-x  2 root root   2 Apr 20  2016 opt
drwxr-xr-x  2 root root   2 Mar 14  2016 proc
drwx------  2 root root   4 Apr 20  2016 root
drwxr-xr-x  4 root root   6 Apr 20  2016 run
drwxr-xr-x  2 root root 109 Apr 20  2016 sbin
drwxr-xr-x  2 root root   2 Apr 20  2016 srv
drwxr-xr-x  2 root root   2 Mar 14  2016 sys
drwxrwxrwt  2 root root   2 Apr 20  2016 tmp
drwxr-xr-x 10 root root  10 Apr 20  2016 usr
drwxr-xr-x 11 root root  13 Apr 20  2016 var
root@cat:~# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
EVO                        22.5G   213G  11.9G  /EVO
EVO/EVO                      19K   202G    19K  /EVO/EVO
rpool                      5.97G  30.1G    96K  /rpool
rpool/ROOT                 1.04G  30.1G    96K  /rpool/ROOT
rpool/ROOT/pve-1           1.04G  30.1G  1.04G  /
rpool/data                   96K  30.1G    96K  /rpool/data
rpool/swap                 4.92G  31.4G  3.58G  -
vol1                       9.77T  12.2T   478K  /vol1
vol1/VM                    1.68T  10.2T  22.8G  /vol1/VM
vol1/VM/subvol-106-disk-1   453M  14.6G   452M  /vol1/VM/subvol-106-disk-1
vol1/VM/subvol-200-disk-1   452M  19.6G   452M  /vol1/VM/subvol-200-disk-1
vol1/VM/vm-100-disk-1      35.1G  10.2T  12.2G  -
vol1/VM/vm-101-disk-1       189G  10.3T   111G  -
vol1/VM/vm-101-disk-2       672G  10.5T   360G  -
vol1/VM/vm-102-disk-1       104G  10.3T  42.4G  -
vol1/VM/vm-103-disk-1       242G  10.3T   155G  -
vol1/VM/vm-104-disk-1       131G  10.2T  72.3G  -
vol1/VM/vm-105-disk-1       172G  10.3T   110G  -
vol1/VM/vm-107-disk-1      38.6G  10.2T  4.04G  -
vol1/VM/vm-111-disk-1      52.4G  10.2T  25.3G  -
vol1/VM/vm-112-disk-1      58.5G  10.2T  15.8G  -
 
And what is the output of

# zpool list
root@cat:~# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
EVO     232G  11.9G   220G         -    23%     5%  1.00x  ONLINE  -
rpool  37.2G  4.97G  32.3G         -    23%    13%  1.00x  ONLINE  -
vol1     29T  9.98T  19.0T     16.0E    11%    34%  1.00x  ONLINE  -
 
The root fs is not marked as read-only. (I think someone posted and removed the post :) )
 
I missed the config output from one of the posts above and didn't expect you to reply so quickly ;-)
Also, the error does not actually indicate a read-only mount: it says "no such file or directory", which is pretty odd when trying to create a file in the root directory of a container.
The error code can mean a few things though, and probably more with ZFS.
Could you perhaps also run the first command of your first post with `-lDEBUG` added and watch the journal during the run? (`# journalctl -f`)
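For example, in one shell (reusing the log file from your first post):
Code:
# lxc-start -n 106 --logfile /root/zfstest.log -lDEBUG
and in a second shell, while it starts:
Code:
# journalctl -f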
Can you describe the installation a bit more? Did you use the PVE installer? Did you install any additional programs / services? Does this happen for all containers? Do they work on non-ZFS storage? Did you change anything in `/etc/lxc`?

(And could you please use [code][/code] tags around program output pastes? It makes them far easier to read, especially if they contain aligned columns or symbols which the forum converts to emojis ;-) .)
 
I missed the config output from one of the posts above and didn't expect you to reply so quickly ;-)
Hehe, I want it working, so I'm monitoring the notifications ;)

(also, pretty slow day at work...)

Also, the error does not actually indicate a read-only mount: it says "no such file or directory", which is pretty odd when trying to create a file in the root directory of a container.
The error code can mean a few things though, and probably more with ZFS.
Could you perhaps also run the first command of your first post with `-lDEBUG` added and watch the journal during the run? (`# journalctl -f`)

Hmm, that did not work. Might that be it?
Code:
root@cat:~# lxc-start -n 106 --logfile /root/zfstest2.log --logpriority=DEBUG
lxc-start: tools/lxc_start.c: main: 295 Executing '/sbin/init' with no configuration file may crash the host

Can you describe the installation a bit more? Did you use the PVE installer? Did you install any additional programs / services? Does this happen for all containers? Do they work on non-ZFS storage?

(And could you please use [code][/code] tags around program output pastes? It makes them far easier to read, especially if they contain aligned columns in the output.)
It's installed from the PVE ISO. I do have some additional services, but they shouldn't really affect ZFS. I do have znapzend, but this was already not working before I started using znapzend.
I never got a container working on ZFS, so I store my container(s) on an NFS share from the same ZFS pool, which works.

The zpool was created and exported/imported from an OmniOS machine, but I have tested and verified that it's also not working on a newly created zpool on the PVE machine (called "EVO" in the zpool list above).
 
Have you touched the `/var/lib/lxc/106/config` file at some point?
Can you paste it? Then do a `pct start 106` to have it restored and see if it changed. (If it did then try running the above `lxc-start` command again...)
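I.e. roughly this sequence:
Code:
# cat /var/lib/lxc/106/config
# pct start 106
# cat /var/lib/lxc/106/config    # did it change?
# lxc-start -n 106 -F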
 
Have you touched the `/var/lib/lxc/106/config` file at some point?
Can you paste it? Then do a `pct start 106` to have it restored and see if it changed. (If it did then try running the above `lxc-start` command again...)
OK, so I must've been tired yesterday - I just realized that during troubleshooting I decided to start testing with VM ID 200 instead (to try a totally fresh LXC with a new number, in case the old one could mess things up), so the log output from yesterday is incorrect and misleading - sorry.

Here's the correct config:
Code:
root@cat:~# cat /etc/pve/lxc/200.conf
arch: amd64
cpulimit: 1
cpuunits: 1024
hostname: LEECH
memory: 512
net0: name=eth0,bridge=vmbr0,hwaddr=1A:6D:9E:66:AE:62,type=veth
ostype: debian
rootfs: zfs_vol1:subvol-200-disk-1,size=20G
swap: 512


root@cat:~# cat /var/lib/lxc/200/config
lxc.arch = amd64
lxc.include = /usr/share/lxc/config/debian.common.conf
lxc.monitor.unshare = 1
lxc.tty = 2
lxc.environment = TERM=linux
lxc.utsname = LEECH
lxc.cgroup.memory.limit_in_bytes = 536870912
lxc.cgroup.memory.memsw.limit_in_bytes = 1073741824
lxc.cgroup.cpu.cfs_period_us = 100000
lxc.cgroup.cpu.cfs_quota_us = 100000
lxc.cgroup.cpu.shares = 1024
lxc.rootfs = /var/lib/lxc/200/rootfs
lxc.network.type = veth
lxc.network.veth.pair = veth200i0
lxc.network.hwaddr = 1A:6D:9E:66:AE:62
lxc.network.name = eth0


root@cat:~# zfs list | grep 200
vol1/VM/subvol-200-disk-1   453M  19.6G   452M  /vol1/VM/subvol-200-disk-1

root@cat:~# ls -lv /vol1/VM/subvol-200-disk-1/
total 112
drwxr-xr-x  2 root root 108 Apr 20  2016 bin
drwxr-xr-x  2 root root   2 Mar 14  2016 boot
drwxr-xr-x  2 root root   2 Dec 10  2008 dev
drwxr-xr-x 67 root root 149 Dec 30 16:07 etc
drwxr-xr-x  2 root root   2 Mar 14  2016 home
drwxr-xr-x 12 root root  23 Apr 20  2016 lib
drwxr-xr-x  2 root root   3 Apr 20  2016 lib64
drwxr-xr-x  2 root root   2 Apr 20  2016 media
drwxr-xr-x  2 root root   2 Apr 20  2016 mnt
drwxr-xr-x  2 root root   2 Apr 20  2016 opt
drwxr-xr-x  2 root root   2 Mar 14  2016 proc
drwx------  2 root root   4 Apr 20  2016 root
drwxr-xr-x  4 root root   6 Apr 20  2016 run
drwxr-xr-x  2 root root 109 Apr 20  2016 sbin
drwxr-xr-x  2 root root   2 Apr 20  2016 srv
drwxr-xr-x  2 root root   2 Mar 14  2016 sys
drwxrwxrwt  2 root root   2 Apr 20  2016 tmp
drwxr-xr-x 10 root root  10 Apr 20  2016 usr
drwxr-xr-x 11 root root  13 Apr 20  2016 var

and logs:
Code:
root@cat:~# lxc-start -n 200 -F --logfile /root/zfstest200.log --logpriority=DEBUG
unable to open file '/fastboot.tmp.16157' - No such file or directory
error in setup task PVE::LXC::Setup::pre_start_hook
lxc-start: conf.c: run_buffer: 347 Script exited with status 1
lxc-start: start.c: lxc_init: 465 failed to run pre-start hooks for container '200'.
lxc-start: start.c: __lxc_start: 1313 failed to initialize the container
lxc-start: tools/lxc_start.c: main: 344 The container failed to start.
lxc-start: tools/lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.
root@cat:~# cat /root/zfstest200.log
      lxc-start 20170103080944.437 INFO     lxc_start_ui - tools/lxc_start.c:main:264 - using rcfile /var/lib/lxc/200/config
      lxc-start 20170103080944.438 WARN     lxc_confile - confile.c:config_pivotdir:1879 - lxc.pivotdir is ignored.  It will soon become an error.
      lxc-start 20170103080944.438 INFO     lxc_lsm - lsm/lsm.c:lsm_init:48 - LSM security driver AppArmor
      lxc-start 20170103080944.439 INFO     lxc_seccomp - seccomp.c:parse_config_v2:342 - processing: .reject_force_umount  # comment this to allow umount -f;  not recommended.
      lxc-start 20170103080944.439 INFO     lxc_seccomp - seccomp.c:parse_config_v2:446 - Adding native rule for reject_force_umount action 0
      lxc-start 20170103080944.439 INFO     lxc_seccomp - seccomp.c:do_resolve_add_rule:216 - Setting seccomp rule to reject force umounts

      lxc-start 20170103080944.439 INFO     lxc_seccomp - seccomp.c:parse_config_v2:449 - Adding compat rule for reject_force_umount action 0
      lxc-start 20170103080944.439 INFO     lxc_seccomp - seccomp.c:do_resolve_add_rule:216 - Setting seccomp rule to reject force umounts

      lxc-start 20170103080944.439 INFO     lxc_seccomp - seccomp.c:parse_config_v2:342 - processing: .[all].
      lxc-start 20170103080944.439 INFO     lxc_seccomp - seccomp.c:parse_config_v2:342 - processing: .kexec_load errno 1.
      lxc-start 20170103080944.439 INFO     lxc_seccomp - seccomp.c:parse_config_v2:446 - Adding native rule for kexec_load action 327681
      lxc-start 20170103080944.439 INFO     lxc_seccomp - seccomp.c:parse_config_v2:449 - Adding compat rule for kexec_load action 327681
      lxc-start 20170103080944.439 INFO     lxc_seccomp - seccomp.c:parse_config_v2:342 - processing: .open_by_handle_at errno 1.
      lxc-start 20170103080944.439 INFO     lxc_seccomp - seccomp.c:parse_config_v2:446 - Adding native rule for open_by_handle_at action 327681
      lxc-start 20170103080944.439 INFO     lxc_seccomp - seccomp.c:parse_config_v2:449 - Adding compat rule for open_by_handle_at action 327681
      lxc-start 20170103080944.439 INFO     lxc_seccomp - seccomp.c:parse_config_v2:342 - processing: .init_module errno 1.
      lxc-start 20170103080944.439 INFO     lxc_seccomp - seccomp.c:parse_config_v2:446 - Adding native rule for init_module action 327681
      lxc-start 20170103080944.439 INFO     lxc_seccomp - seccomp.c:parse_config_v2:449 - Adding compat rule for init_module action 327681
      lxc-start 20170103080944.439 INFO     lxc_seccomp - seccomp.c:parse_config_v2:342 - processing: .finit_module errno 1.
      lxc-start 20170103080944.439 INFO     lxc_seccomp - seccomp.c:parse_config_v2:446 - Adding native rule for finit_module action 327681
      lxc-start 20170103080944.439 WARN     lxc_seccomp - seccomp.c:do_resolve_add_rule:233 - Seccomp: got negative # for syscall: finit_module
      lxc-start 20170103080944.439 WARN     lxc_seccomp - seccomp.c:do_resolve_add_rule:234 - This syscall will NOT be blacklisted
      lxc-start 20170103080944.439 INFO     lxc_seccomp - seccomp.c:parse_config_v2:449 - Adding compat rule for finit_module action 327681
      lxc-start 20170103080944.439 WARN     lxc_seccomp - seccomp.c:do_resolve_add_rule:233 - Seccomp: got negative # for syscall: finit_module
      lxc-start 20170103080944.439 WARN     lxc_seccomp - seccomp.c:do_resolve_add_rule:234 - This syscall will NOT be blacklisted
      lxc-start 20170103080944.439 INFO     lxc_seccomp - seccomp.c:parse_config_v2:342 - processing: .delete_module errno 1.
      lxc-start 20170103080944.439 INFO     lxc_seccomp - seccomp.c:parse_config_v2:446 - Adding native rule for delete_module action 327681
      lxc-start 20170103080944.439 INFO     lxc_seccomp - seccomp.c:parse_config_v2:449 - Adding compat rule for delete_module action 327681
      lxc-start 20170103080944.439 INFO     lxc_seccomp - seccomp.c:parse_config_v2:456 - Merging in the compat seccomp ctx into the main one
      lxc-start 20170103080944.439 INFO     lxc_conf - conf.c:run_script_argv:367 - Executing script '/usr/share/lxc/hooks/lxc-pve-prestart-hook' for container '200', config section 'lxc'
      lxc-start 20170103080944.911 ERROR    lxc_conf - conf.c:run_buffer:347 - Script exited with status 1
      lxc-start 20170103080944.912 ERROR    lxc_start - start.c:lxc_init:465 - failed to run pre-start hooks for container '200'.
      lxc-start 20170103080944.912 ERROR    lxc_start - start.c:__lxc_start:1313 - failed to initialize the container
      lxc-start 20170103080944.912 ERROR    lxc_start_ui - tools/lxc_start.c:main:344 - The container failed to start.
      lxc-start 20170103080944.912 ERROR    lxc_start_ui - tools/lxc_start.c:main:348 - Additional information can be obtained by setting the --logfile and --logpriority options.



root@cat:~# pct start 200
lxc-start: tools/lxc_start.c: main: 344 The container failed to start.
lxc-start: tools/lxc_start.c: main: 346 To get more details, run the container in foreground mode.
lxc-start: tools/lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.
command 'lxc-start -n 200' failed: exit code 1
root@cat:~# lxc-start -n 200 -F --logfile /root/zfstest200.log --logpriority=DEBUG
unable to open file '/fastboot.tmp.17172' - No such file or directory
error in setup task PVE::LXC::Setup::pre_start_hook
lxc-start: conf.c: run_buffer: 347 Script exited with status 1
lxc-start: start.c: lxc_init: 465 failed to run pre-start hooks for container '200'.
lxc-start: start.c: __lxc_start: 1313 failed to initialize the container
lxc-start: tools/lxc_start.c: main: 344 The container failed to start.
lxc-start: tools/lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.
 
what does "zfs get all vol1/VM/subvol-200-disk-1" output?
 
what does "zfs get all vol1/VM/subvol-200-disk-1" output?
Code:
root@cat:~# zfs get all vol1/VM/subvol-200-disk-1
NAME                       PROPERTY                    VALUE                                                        SOURCE
vol1/VM/subvol-200-disk-1  type                        filesystem                                                   -
vol1/VM/subvol-200-disk-1  creation                    Fri Dec 30 16:07 2016                                        -
vol1/VM/subvol-200-disk-1  used                        453M                                                         -
vol1/VM/subvol-200-disk-1  available                   19.6G                                                        -
vol1/VM/subvol-200-disk-1  referenced                  452M                                                         -
vol1/VM/subvol-200-disk-1  compressratio               1.77x                                                        -
vol1/VM/subvol-200-disk-1  mounted                     yes                                                          -
vol1/VM/subvol-200-disk-1  quota                       none                                                         default
vol1/VM/subvol-200-disk-1  reservation                 none                                                         default
vol1/VM/subvol-200-disk-1  recordsize                  128K                                                         default
vol1/VM/subvol-200-disk-1  mountpoint                  /vol1/VM/subvol-200-disk-1                                   inherited from vol1
vol1/VM/subvol-200-disk-1  sharenfs                    on                                                           inherited from vol1/VM
vol1/VM/subvol-200-disk-1  checksum                    on                                                           default
vol1/VM/subvol-200-disk-1  compression                 lz4                                                          inherited from vol1
vol1/VM/subvol-200-disk-1  atime                       off                                                          inherited from vol1/VM
vol1/VM/subvol-200-disk-1  devices                     on                                                           default
vol1/VM/subvol-200-disk-1  exec                        on                                                           default
vol1/VM/subvol-200-disk-1  setuid                      on                                                           default
vol1/VM/subvol-200-disk-1  readonly                    off                                                          default
vol1/VM/subvol-200-disk-1  zoned                       off                                                          default
vol1/VM/subvol-200-disk-1  snapdir                     hidden                                                       inherited from vol1/VM
vol1/VM/subvol-200-disk-1  aclinherit                  passthrough                                                  inherited from vol1/VM
vol1/VM/subvol-200-disk-1  canmount                    on                                                           default
vol1/VM/subvol-200-disk-1  xattr                       sa                                                           local
vol1/VM/subvol-200-disk-1  copies                      1                                                            default
vol1/VM/subvol-200-disk-1  version                     5                                                            -
vol1/VM/subvol-200-disk-1  utf8only                    on                                                           -
vol1/VM/subvol-200-disk-1  normalization               formD                                                        -
vol1/VM/subvol-200-disk-1  casesensitivity             insensitive                                                  -
vol1/VM/subvol-200-disk-1  vscan                       off                                                          default
vol1/VM/subvol-200-disk-1  nbmand                      on                                                           inherited from vol1/VM
vol1/VM/subvol-200-disk-1  sharesmb                    on                                                           inherited from vol1/VM
vol1/VM/subvol-200-disk-1  refquota                    20G                                                          local
vol1/VM/subvol-200-disk-1  refreservation              none                                                         default
vol1/VM/subvol-200-disk-1  primarycache                all                                                          default
vol1/VM/subvol-200-disk-1  secondarycache              all                                                          default
vol1/VM/subvol-200-disk-1  usedbysnapshots             444K                                                         -
vol1/VM/subvol-200-disk-1  usedbydataset               452M                                                         -
vol1/VM/subvol-200-disk-1  usedbychildren              0                                                            -
vol1/VM/subvol-200-disk-1  usedbyrefreservation        0                                                            -
vol1/VM/subvol-200-disk-1  logbias                     latency                                                      default
vol1/VM/subvol-200-disk-1  dedup                       off                                                          default
vol1/VM/subvol-200-disk-1  mlslabel                    none                                                         default
vol1/VM/subvol-200-disk-1  sync                        standard                                                     inherited from vol1/VM
vol1/VM/subvol-200-disk-1  refcompressratio            1.77x                                                        -
vol1/VM/subvol-200-disk-1  written                     0                                                            -
vol1/VM/subvol-200-disk-1  logicalused                 542M                                                         -
vol1/VM/subvol-200-disk-1  logicalreferenced           542M                                                         -
vol1/VM/subvol-200-disk-1  filesystem_limit            none                                                         default
vol1/VM/subvol-200-disk-1  snapshot_limit              none                                                         default
vol1/VM/subvol-200-disk-1  filesystem_count            none                                                         default
vol1/VM/subvol-200-disk-1  snapshot_count              none                                                         default
vol1/VM/subvol-200-disk-1  snapdev                     hidden                                                       default
vol1/VM/subvol-200-disk-1  acltype                     posixacl                                                     local
vol1/VM/subvol-200-disk-1  context                     none                                                         default
vol1/VM/subvol-200-disk-1  fscontext                   none                                                         default
vol1/VM/subvol-200-disk-1  defcontext                  none                                                         default
vol1/VM/subvol-200-disk-1  rootcontext                 none                                                         default
vol1/VM/subvol-200-disk-1  relatime                    off                                                          default
vol1/VM/subvol-200-disk-1  redundant_metadata          all                                                          default
vol1/VM/subvol-200-disk-1  overlay                     off                                                          default
vol1/VM/subvol-200-disk-1  org.znapzend:pre_znap_cmd   off                                                          inherited from vol1/VM
vol1/VM/subvol-200-disk-1  org.znapzend:tsformat       %Y-%m-%d-%H%M%S                                              inherited from vol1/VM
vol1/VM/subvol-200-disk-1  org.znapzend:recursive      on                                                           inherited from vol1/VM
vol1/VM/subvol-200-disk-1  org.znapzend:mbuffer_size   1G                                                           inherited from vol1/VM
vol1/VM/subvol-200-disk-1  org.znapzend:src_plan       24hours=>1hours,6days=>1days,4weeks=>1weeks,2years=>1months  inherited from vol1/VM
vol1/VM/subvol-200-disk-1  org.znapzend:mbuffer        off                                                          inherited from vol1/VM
vol1/VM/subvol-200-disk-1  org.znapzend:enabled        on                                                           inherited from vol1/VM
vol1/VM/subvol-200-disk-1  org.znapzend:post_znap_cmd  off                                                          inherited from vol1/VM
 
This is not a default configuration. The first thing that caught my eye is that you're using a case-insensitive subvolume there. This has been shown to be problematic in the past (in particular in combination with AppArmor [1]). So maybe try:
Code:
# zfs set casesensitivity=sensitive vol1/VM/subvol-200-disk-1
Then try to start the container.

Now, there are a couple of other options which seem weird, particularly for a container rootfs: sharesmb, normalization, sharenfs, nbmand.
Ideally you'd disable all of those on `vol1/VM` and only enable them on specific subvolumes where you need them.
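
If you don't need the NFS/SMB shares and mandatory locking on the parent dataset, turning them off could look roughly like this (just a sketch - double-check it against your setup first; normalization is left out since, like casesensitivity, it's normally fixed when the dataset is created):
Code:
# zfs set sharenfs=off vol1/VM
# zfs set sharesmb=off vol1/VM
# zfs set nbmand=off vol1/VM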

[1] https://github.com/zfsonlinux/zfs/issues/4763
 
This is not a default configuration. The first thing that caught my eye is that you're using a case-insensitive subvolume there. This has been shown to be problematic in the past (in particular in combination with AppArmor [1]). So maybe try:
Code:
# zfs set casesensitivity=sensitive vol1/VM/subvol-200-disk-1
Then try to start the container.

Now, there are a couple of other options which seem weird, particularly for a container rootfs: sharesmb, normalization, sharenfs, nbmand.
Ideally you'd disable all of those on `vol1/VM` and only enable them on specific subvolumes where you need them.

[1] https://github.com/zfsonlinux/zfs/issues/4763
Ah, thank you for that! I'll test shortly!

Yes, normally I'd adjust the sharing options; this is just a test LXC.
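
I'll probably just re-check the properties and retry the start, roughly along these lines:
Code:
# zfs get casesensitivity,sharenfs,sharesmb,nbmand vol1/VM/subvol-200-disk-1
# pct start 200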