Can't create LXC CT on ZFS

neek

I'm running Proxmox 5.2.10. I'm a Proxmox n00b, so it's possible I've done something wrong.

When I attempt to create a container using either a Directory storage (atop a ZFS volume) or a ZFS pool storage, creation of the container fails.

For the ZFS pool, I created it as in the first screenshot and got the error shown in the second.
[Screenshots attached: the ZFS pool storage setup, and the resulting error message.]

On a ZFS-backed directory, I get an error that the system can't create a raw file, though I'd have thought it should be of type subvol. Is this a bug in the GUI? Am I doing something wrong?

I am able to create containers on my lvmthin storage, but I don't have enough space there.

Suggestions very welcome!
 
Under Datacenter > Storage, you can define which content types may be stored on each storage.
Did you enable containers on your ZFS volume?
cu peje
 
Yes, I did. If you don't enable containers to be stored on a storage, I believe Proxmox won't show it as an available option for where to write the container.
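
For reference, here is roughly what the relevant part of /etc/pve/storage.cfg looks like for a setup like mine (a sketch; the names and paths match what I've described in this thread, and the content lines are what enable containers and templates):
Code:
dir: pvestore
        path /mnt/vol1/pvestore
        content images,rootdir,vztmpl

zfspool: pvez
        pool vol1/pvez
        content images,rootdir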

thanks
 
I'm trying to debug via the command line and I get roughly the same behavior:
Code:
root@pve1:~# pct create 115 pvestore:vztmpl/ubuntu-18.04-standard_18.04-1_amd64.tar.gz --description "File server" --hostname filer --memory 1024 --storage pvestore --ostype ubuntu
Formatting '/mnt/vol1/pvestore/images/115/vm-115-disk-0.raw', fmt=raw size=4294967296
mke2fs 1.43.4 (31-Jan-2017)
Discarding device blocks: done
Creating filesystem with 1048576 4k blocks and 262144 inodes
Filesystem UUID: 20c810a2-45e2-4464-8aa4-60bd73307948
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Multiple mount protection is enabled with update interval 5 seconds.
Writing superblocks and filesystem accounting information:
Warning, had trouble writing out superblocks.
command 'mkfs.ext4 -O mmp -E 'root_owner=0:0' /mnt/vol1/pvestore/images/115/vm-115-disk-0.raw' failed: exit code 144
 
Your 'pvestore' is a directory type storage. Please use the zpool-based storage 'pvez' instead.
 
Is creation of a container in a directory unsupported?

As mentioned up top, I also tried on a zpool and it also fails, but with a different error:

Code:
root@pve1:~# pct create 115 pvestore:vztmpl/ubuntu-18.04-standard_18.04-1_amd64.tar.gz --description "File server" --hostname filer --memory 1024 --storage pvez --ostype ubuntu
mounting container failed
cannot open directory //vol1: No such file or directory

"vol1" is the name of the zpool on my system. I have it mounted at /mnt/vol1, and the ZFS storage type mounted as /mnt/vol1/pvez.

thanks again
 
"vol1" is the name of the zpool on my system. I have it mounted at /mnt/vol1, and the ZFS storage type mounted as /mnt/vol1/pvez.

You need to use the standard mount point instead. Mounting at different locations is not supported.
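
For example (a sketch, assuming the pool is named vol1 as in this thread), you can check the current mountpoint and set the default one, which puts the pool directly under /:
Code:
# show where the pool is currently mounted
zfs get mountpoint vol1
# set the ZFS default location, i.e. /vol1
zfs set mountpoint=/vol1 vol1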
 
Is creation of a container in a directory unsupported?
It is possible to mount the ZFS dataset on a directory and use that directory as storage, and then enable containers and templates as "content" for that storage.
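
From the CLI, that would look something like this (a sketch; the storage name "zdir" and the path are hypothetical):
Code:
# register a directory on a mounted ZFS dataset as a PVE storage
# and allow container root filesystems and templates on it
pvesm add dir zdir --path /mnt/vol1/ctdir --content rootdir,vztmpl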

Are you using mirroring or RAIDZ?
 
It is possible to mount the ZFS dataset on a directory and use that directory as storage, and then enable containers and templates as "content" for that storage.

Are you using mirroring or RAIDZ?

I am using RAIDZ2. I do have containers enabled on the directory, but as mentioned at the top of the thread, I get an error when I attempt that. From dietmar's replies above, it seems directory storage doesn't work, and I need to remount my ZFS pool at a different location (presumably /vol1), but I'll have to experiment to figure that out.

I assume both of these behaviors (directories not working, and ZFS pools needing to be mounted directly under /) are bugs. I will report them as such, since they are not documented to work that way.

thanks all!
 
I assume both of these behaviors (directories not working, and ZFS pools needing to be mounted directly under /) are bugs. I will report them as such, since they are not documented to work that way.

No, they are not bugs. Directory on ZFS does not work due to missing O_DIRECT support in ZFS itself. It does not make sense at all to use a filesystem on top of a filesystem with LXC; if you want such behaviour, just go with a "real" VM.

You have some strange problem inside PVE, because what you posted in your screenshots should work. Please update to the latest version and try again. If it still fails, please provide your versions via pveversion -v
 
No, they are not bugs. Directory on ZFS does not work due to missing O_DIRECT support in ZFS itself. It does not make sense at all to use a filesystem on top of a filesystem with LXC; if you want such behaviour, just go with a "real" VM.

Directory on ZFS works very smoothly in Proxmox v3.4 containers w/ a combination of ZVols + Ext4 + OpenVZ. Haven't tried it w/ LXC as yet.

I was a complete ZFS noobie this time last year. Now I can't get enough of it. :D
 
Directory on ZFS works very smoothly in Proxmox v3.4 containers w/ a combination of ZVols + Ext4 + OpenVZ.

The question still remains: Why would you want to use a filesystem on top of another filesystem if you get everything running without the additional filesystem layer?
 
The question still remains: Why would you want to use a filesystem on top of another filesystem if you get everything running without the additional filesystem layer?

ZFS : Convenience, speed, data integrity, compression (a 2TB physical disk transforming itself into 3TB+ like magic), ARC read cache, ZIL write cache, fast delta replications, z-sync, simplicity, easy admin, error corrections, fast mirroring, thin-provisioning, snapshots, cloning,........
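
To make the compression point concrete (a sketch; the dataset name is hypothetical):
Code:
# enable lz4 compression on a dataset, then check the achieved ratio
zfs set compression=lz4 vol1/data
zfs get compressratio vol1/data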
 
ZFS : Convenience, speed, data integrity, compression (a 2TB physical disk transforming itself into 3TB+ like magic), ARC read cache, ZIL write cache, fast delta replications, z-sync, simplicity, easy admin, error corrections, fast mirroring, thin-provisioning, snapshots, cloning,........
Agreed with all of this. I also find it much safer to see a regular file system with the container's files. ZFS is considered the ultimate file system by most experts (myself included). Faking file systems in binary blobs works well for some things, but for containers, which really run atop a standard Linux kernel that is by definition the same as the container's kernel, it seems unnecessary.

No, they are not bugs. Directory on ZFS does not work due to missing O_DIRECT support in ZFS itself. It does not make sense at all to use a filesystem on top of a filesystem with LXC; if you want such behaviour, just go with a "real" VM.
On FreeNAS/FreeBSD, a jail is simply a set of files with chroot set to some subdirectory, so from the host you can cd /path/to/jail and see the jail's root file system. It's fine that that's not how it's implemented in Linux; I just expected it to behave like it does on FreeBSD.

You have some strange problem inside PVE, because what you posted in your screenshots should work. Please update to the latest version and try again. If it still fails, please provide your versions via pveversion -v

The container works on a ZFS pool if and only if the pool is mounted at its default location directly under / (e.g. my "vol1" mounted at /vol1). According to dietmar earlier in this thread, ZFS pools don't work if you mount them elsewhere (I had been using /mnt/vol1). It should be an easy thing to fix in the Perl scripts, but I'll file a ticket about that some time soon.

Using a directory works unless the directory is on a ZFS file system. Apparently zfsonlinux did not support O_DIRECT, which is needed for containers, until a few weeks ago (see https://github.com/zfsonlinux/zfs/pull/7823). I assume that commit will eventually make its way into Proxmox, and then I will re-try using a ZFS-based directory.
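
One quick way to check whether a filesystem accepts O_DIRECT (a sketch; the test path is just an example on my ZFS-backed directory):
Code:
# a direct-I/O write; on ZFS-on-Linux versions without O_DIRECT
# support, this fails with "Invalid argument" (EINVAL)
dd if=/dev/zero of=/mnt/vol1/pvestore/odirect-test bs=1M count=1 oflag=direct
rm -f /mnt/vol1/pvestore/odirect-test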
 
ZFS : Convenience, speed, data integrity, compression (a 2TB physical disk transforming itself into 3TB+ like magic), ARC read cache, ZIL write cache, fast delta replications, z-sync, simplicity, easy admin, error corrections, fast mirroring, thin-provisioning, snapshots, cloning,........

This is completely true, but how does it relate to my question? I'm not questioning the use of ZFS; I'm questioning the use of ext4 on a raw file on ZFS.
 
On FreeNAS/FreeBSD, the files are simply a set of files with chroot set to some subdirectory, so from the host, you can cd /path/to/jail and see the root file system. It's fine that that's not how it's implemented in Linux, I just expected it to behave like it does on FreeBSD.

But it is ... please review how ZFS on Linux works; this is exactly how PVE uses containers on ZFS, with a dataset for each container. The O_DIRECT problem you describe only applies to the PVE storage type directory on a ZFS backend, on which you then use a QCOW2 or raw image whose mkfs will fail.
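
To illustrate (a sketch; CT 115 is the ID from this thread, and the dataset name follows PVE's usual subvol naming, assuming the pool at its default mountpoint):
Code:
# each container rootfs on a zfspool storage is a plain dataset,
# browsable from the host much like a jail's directory tree
zfs list -r vol1/pvez
ls /vol1/pvez/subvol-115-disk-0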

The container works on a ZFS pool if and only if the pool is mounted at its default location directly under / (e.g. my "vol1" mounted at /vol1). According to dietmar earlier in this thread, ZFS pools don't work if you mount them elsewhere (I had been using /mnt/vol1).

No, that's not what he said; please read it again. His answer was a reply to your non-default mount points.

If you install PVE with ZFS, everything works out of the box with the rpool. Your pool vol1 was created by you, so the problem you described in your first post is most probably self-made. Please install PVE inside another hypervisor with ZFS and play around with it to see what @dietmar and I meant.
 
The O_DIRECT problem you describe only applies to the PVE storage type directory on a ZFS backend, on which you then use a QCOW2 or raw image whose mkfs will fail.

Small correction: there are some applications that need O_DIRECT nonetheless, but most container applications don't.
 
