Rpool, how to create ZFS

bugs

New Member
May 6, 2023
Hello,

I'm still learning Proxmox with ZFS and some details are not clear to me. I hope someone can help me understand them better.

My Proxmox host has 32 GB of RAM and a single 1 TB SSD using ZFS.
To learn, I installed a few VMs and a CasaOS container with an FTP server running in Docker.

When I look at my node (home) and its disks, I can see LVM, LVM-thin... and ZFS, and inside ZFS, my "rpool".

Inside my CasaOS the only user is "root", and I would like to add a volume for my FTP server on my single disk but outside CasaOS. I guess I have to create something like /rpool/FTPDirectory. But when I try "Create: ZFS" where I see "rpool", it's impossible: I get the message "no disk available", because almost the whole disk already belongs to this pool even though only 10 GB are actually allocated.

I know how to create a new ZFS storage in my datacenter and how to mount it in my CasaOS. But my FTP user is not my CasaOS root user, and this is why it's impossible to write to it. Moreover, I don't want (but maybe I'm wrong) my FTP folder inside my CasaOS files.

This is why I would like a separate volume for my FTP server. I thought I had to change something on my rpool to let me use "Create: ZFS" there. Add a subfolder like /rpool/Ftp or something else? Because when I create a new volume from Datacenter => Storage => Add, the result is not what I'm looking for.

Could you tell me what to do and how in this case ?

Thank you.
 
I am not quite sure how you ended up with both LVM and ZFS. Usually, I would expect one or the other.

In any case, ZFS is both a filesystem and a volume manager. So you usually would not create another ZFS pool in the GUI unless you wanted to add more hardware and manage the newly added disks as a separate entity. Think of the pool as a collection of all your drives that gives you the ability to dole out storage as needed.

If all your disks are assigned to rpool, you can create a new volume (i.e. a virtual block device, a zvol) or a new ZFS filesystem (a dataset) that is carved out of this pool as needed.

What you want to do is reasonable, and you'll find plenty of discussions in this forum where people do exactly that. But it is also something that doesn't quite match the philosophy of Proxmox. Think about it this way: if you create storage outside of your container or virtual machine, it will only be accessible to your virtualized environment as long as that OS stays on the same physical machine. But Proxmox is all about setting up redundant and highly available clusters that migrate containers and VMs between physical nodes as needed.

If you do what you are describing, you won't be able to migrate your CasaOS container. As long as you fully understand this limitation, you can continue with your experiment. Just don't expect Proxmox to expose the tools to do so in the UI. It's protecting you from doing something that partially breaks some of Proxmox's design. You'll have to make these changes from the system shell.

In the shell, you can run a command such as "zfs create -o compression=zstd -o xattr=sa -o recordsize=32k -o relatime=on rpool/ftpdir".

You now have a new ZFS dataset mounted at /rpool/ftpdir that is managed by ZFS. You'll see it if you type "zfs list". But the Proxmox UI is pretty much completely unaware of it.
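Put together, the steps look roughly like this on the host shell (a sketch: the dataset name rpool/ftpdir comes from the command above, and the mountpoint is just what ZFS derives from that name by default):

```shell
# Create a new ZFS filesystem (dataset) inside the existing pool,
# with the tuning options suggested above.
zfs create -o compression=zstd -o xattr=sa -o recordsize=32k -o relatime=on rpool/ftpdir

# Verify: the new dataset shows up alongside Proxmox's own datasets.
zfs list rpool/ftpdir

# By default it is mounted on the host at /rpool/ftpdir.
zfs get mountpoint rpool/ftpdir
```

No space is reserved up front; like everything else in the pool, the dataset only consumes what you write into it.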

But you can now make this directory available inside your container with the "pct set" command to add a new mount point. You should read up on how to do that, as there are a bunch of subtleties. Besides migration, you also lose the ability to take snapshots. This can be avoided if you manually edit the container's configuration file and, instead of Proxmox's native bind mounts, add "lxc.mount.entry" lines. Again, this comes with pros and cons and breaks Proxmox's design philosophy. Plenty of people do it, but it is something where you need to understand all the implications.
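For illustration, both variants look something like this (the container ID 100 and the in-container path /mnt/ftp are assumptions; check your actual ID with "pct list"):

```shell
# Option 1: Proxmox-native bind mount, visible in the UI, but it
# blocks snapshots and migration for this container.
pct set 100 -mp0 /rpool/ftpdir,mp=/mnt/ftp

# Option 2: a raw LXC mount entry, added by hand to
# /etc/pve/lxc/100.conf (note the container path is relative,
# without a leading slash); Proxmox itself stays unaware of it:
#
#   lxc.mount.entry: /rpool/ftpdir mnt/ftp none bind,create=dir 0 0
```

With option 2 the mount is invisible to Proxmox, which is exactly why snapshots keep working and also why nothing warns you about it at migration time.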

Finally, you should read up on user ID mappings with unprivileged containers. This can come as a surprise if you plan to access your FTP data directory from both inside the container and from the host. It's easy enough to fix, assuming you can decide how you want to set it up. There are a few different options.
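One common fix, sketched below. With the default mapping, container UIDs are shifted by 100000 on the host, so you chown the host directory to the shifted ID of whichever user your FTP server runs as (the UID 1000 and the 100000 offset are assumptions; verify the offset in /etc/subuid and /etc/subgid):

```shell
# Container uid/gid 1000 appears on the host as 101000 under the
# default Proxmox offset of 100000 for unprivileged containers.
chown -R 101000:101000 /rpool/ftpdir
```

The alternatives are running a privileged container (not recommended) or defining a custom lxc.idmap so a specific UID passes through unshifted.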

But I think this should give you enough to start searching for previous discussions on this topic.