Create a large pool of shared ZFS storage?

jonohunt

Aug 13, 2021
I'm not new to Proxmox, but am new to trying out ZFS.

At the moment I have Proxmox installed on a 1TB NVMe. I have a few VMs on there, each with a 50GB-100GB disk (enough space for them to run OK).


I also have 7 old-ish 2TB HDDs in my PC/server case, and I thought about creating a ZFS pool from them to use as a large pool of data that the VMs and other computers could share (shared media files, a few backups, random files, etc.).

Is this a good idea (creating a large pool of shared storage from the 2TB disks)? And if so, how would I create the share from the pool?

I've seen tutorials on creating ZFS pools on Proxmox, but haven't seen how to create a large pool of storage that could be shared with VMs and other computers, maybe via Samba/CIFS?
 
ZFS itself isn't a shareable filesystem, but it supports sharing datasets over SMB/NFS if you install an NFS/SMB server. I would recommend running openmediavault or something similar inside a privileged LXC and then passing a dataset's mountpoint through from the host to that LXC using bind-mounts. That way you get a WebUI for managing network shares, and your NFS/SMB server runs inside the LXC, so you don't need to set everything up again if you want to reinstall Proxmox.
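For reference, ZFS's built-in sharing properties look like this (a minimal sketch; it assumes a Samba or NFS server is already installed and configured on the host, and "MyPool/NAS" is a hypothetical dataset name):

zfs set sharesmb=on MyPool/NAS          # export the dataset via Samba usershares
zfs set sharenfs=on MyPool/NAS          # or export it via NFS
zfs get sharesmb,sharenfs MyPool/NAS    # show the current share settings

But as said, managing shares from a NAS WebUI like OMV is usually more comfortable.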

I would use a raidz2 for that, especially if the 7 drives are old. But is running 7x 2TB drives 24/7 worthwhile? If they are normal 7200 RPM 3.5" drives they will draw something like 70W, and 70W for 8TB of usable storage isn't that great. A single 18TB HDD only needs 10W and has double the capacity, though of course it is not as secure or as fast (at least for sequential IO). So I would say it depends on the price per kWh from your electricity provider. Here that would cost around 184€/217$ in additional electricity per year; for that money you could also buy a new 10TB HDD.
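For reference, the arithmetic behind that estimate (the ~0.30 €/kWh rate is an assumption chosen to match the figure above): 70 W × 8,760 h/year ≈ 613 kWh/year, and 613 kWh/year × 0.30 €/kWh ≈ 184 €/year.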
 
Thanks a lot, openmediavault sounds like a good way of doing this.

Would I create the ZFS pool from Proxmox, or within openmediavault itself?
 
ZFS directly on the host. That way you don't get additional overhead and can also use the same pool for other things, like virtual disks for swap that you could add to your VMs to reduce the SSD's wear.

After creating the pool you could, for example, add two datasets (zfs create MyPool/VMs and zfs create MyPool/NAS). Then you could add the dataset "MyPool/VMs" as an additional VM/LXC storage and bind-mount the dataset "MyPool/NAS" (its mountpoint should be "/MyPool/NAS") into the LXC. Inside the OMV LXC you could then create several folders on that dataset, manage users and permissions, create shares, and so on.
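Put together, that could look like this on the host (a sketch; the storage ID "MyPool-VMs", the container ID 120 and the target path are example values, not fixed names):

zfs create MyPool/VMs
zfs create MyPool/NAS
pvesm add zfspool MyPool-VMs -pool MyPool/VMs    # register the dataset as VM/LXC storage
pct set 120 -mp0 /MyPool/NAS,mp=/mnt/NAS         # bind-mount the NAS dataset into LXC 120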

Make sure to use a privileged LXC and not an unprivileged one, because the unprivileged one uses user-remapping and it's a pain to manually edit the remapping each time you want to change a user on that bind-mount. As long as your OMV is only accessible from the LAN and you trust the people using it, a privileged LXC will be fine.

And remember to increase the volblocksize if you also want to use that pool as a VM storage, or you will waste 3TB due to padding overhead if you keep the default of 8K.
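In the PVE storage configuration this is the "Block Size" option of the ZFS storage. For example (the storage ID "MyPool-VMs" is the example from above, and 16k is only an illustrative value; the right volblocksize depends on your raidz layout and ashift):

pvesm set MyPool-VMs -blocksize 16k    # only affects newly created zvols, not existing ones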
 
Right, I'll create the ZFS pool within Proxmox. Then create separate datasets :)

One last question. Looking around for tutorials on openmediavault with Proxmox, the ones I've seen all set openmediavault up as a VM. Do you know of a guide somewhere for setting it up in an LXC container?
 
You can install OMV on top of Debian 10. I just used the basic Debian 10 LXC template and ran the OMV installer script.
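Roughly like this (a sketch; the Debian template file name, the container ID 120 and the OMV-Extras installer script URL are assumptions that may have changed, so check them first):

pct create 120 local:vztmpl/debian-10-standard_10.7-1_amd64.tar.gz -hostname omv    # create the LXC from the Debian 10 template
pct start 120
pct exec 120 -- bash -c "wget -O - https://github.com/OpenMediaVault-Plugin-Developers/installScript/raw/master/install | bash"    # run the OMV installer inside the container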
 
@Dunuin …I've set up ZFS, the LXC container, and installed OMV in it, but can't find the mounted ZFS dataset in OMV :rolleyes:

[Screenshot: the LXC's Resources tab in the Proxmox web UI, showing the storage added as a mount point]

I've looked at adding it in OMV under Disks, File Systems, Shared Folders, and SMB/CIFS but can't work it out.

Where should I find it / add it in OMV?
 
I think you've misunderstood. Your screenshot shows that you created a virtual disk (zvol) and mounted it into the LXC. That's not good, because a zvol is just a block device and you would need to format it inside the OMV LXC with some kind of filesystem like ext4. It would look like this and cause needless overhead:
physical disks -> ZFS pool -> zvol -> bind-mount into LXC -> ext4 -> folder

A dataset, on the other hand, is already a filesystem and can be used directly inside the LXC without any virtual disk at all. Then it looks like this:
physical disks -> ZFS pool -> dataset -> bind-mount into LXC
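To make the difference visible on the host, a quick sketch (the names and the size are just example values):

zfs create -V 100G MyPool/somedisk    # a zvol: listed as type "volume", just a raw block device
zfs create MyPool/NAS                 # a dataset: type "filesystem", mounted at /MyPool/NAS
zfs list -t filesystem,volume -o name,type,mountpoint    # compare the two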

What I would do:
1.) Make sure you have created a dataset on that pool to use as your NAS storage, for example:
zfs create MyPool/NAS
2.) Make sure it is mounted. Once it is mounted (which should happen automatically after creation), you can use its mountpoint like any other Linux folder.
3.) Delete the virtual disk that is currently mp0.
4.) Create a folder inside your LXC to use as the target for the bind-mount (for example mkdir /mnt/NAS). It should be owned by root.
5.) Bind-mount the dataset's folder (its mountpoint) from your host to that target folder inside your LXC. Run on the host:
pct set <VMID> -mp<free mountpoint number> /my/folder/on/host,mp=/target/folder/inside/lxc
for example:
pct set 120 -mp0 /MyPool/NAS,mp=/mnt/NAS
6.) After that you should be able to read/write to "/mnt/NAS" inside the LXC, and that will read/write to your dataset "MyPool/NAS" on the host.
7.) You should then be able to use that folder as a share.
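A quick way to verify the bind-mount (using the example VMID 120 and paths from above):

touch /MyPool/NAS/bindmount-test    # on the host: write a marker file into the dataset
pct exec 120 -- ls -l /mnt/NAS      # run a command inside the LXC; the file should show up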
 
Hi, did this actually work for you? I can never get the CT to reboot with OMV. I can install and access the OMV web GUI on the first go, but after a restart there is no access to OMV anymore.
 
Here, with PVE 6.4 and a Debian 10 LXC converted to OMV, it works fine.
 
Thank you for the reply. I am working with a fresh installation of PVE 7.0, so maybe that is the reason. Or, since I am a total noob with PVE, I might be doing something wrong. When I set up the LXC, do I need to change any of the default values?
 
Not sure; there were a lot of changes between PVE6 and PVE7 concerning LXCs. For example, nesting is now enabled by default, because OSs like Debian 11 require it to operate. The host was also switched from cgroup to cgroup2, so some older OSs just won't work anymore, and LXC was updated from version 3.x to 4.x.
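If an older container needs the legacy cgroup layout, the PVE 7 upgrade notes describe switching the host back with a kernel parameter (a sketch for a GRUB-booted system; edit /etc/default/grub, then run update-grub and reboot):

GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.unified_cgroup_hierarchy=0"

And nesting can be toggled per container, for example:

pct set 120 -features nesting=1    # VMID 120 is just an example value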
 
Yeah, I think PVE7 is the reason. Even the base Debian 10 LXC is giving problems, as sometimes the console goes blank (i.e. just a blinking cursor). I hope it gets fixed.
 
I had problems with OMV on Debian in an LXC (PVE7); I kept getting error messages when saving any changes I made.

I ended up just running OMV in a VM, and it worked fine after that.
 