Hello all,
I wanted to bounce an idea off you all and hear what people think about its feasibility and possible issues.
TL;DR: I want containers to migrate to other hosts while still accessing their files through a mount point backed by the ZFS store on the host they came from, with no changes to the container configs themselves.
Background:
3 hosts: 1 host (Store Host) is the primary storage and holds all the hard drives; the other 2 hosts (CrunchHosts) are smaller and cheaper and cover the CPU and memory needs.
I run three containers on the big host that process, sync, and display the media from my photography and videography. All 3 containers have a mount point to a folder in a ZFS share. The current centralized setup works: permissions are correct, files have been moved between the containers, and users come from LDAP.
All 3 hosts use the same name for their respective ZFS setup. On the Store Host it is /<ZFS Storename>/<Media>/<all the folders for the 3 containers>. On the other hosts /<ZFS Storename>/ exists, but I have not created the subfolders.
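To make that layout concrete (the pool name "tank" and the folder names here are made up just for this post; the real ones are the placeholders above), the Store Host looks roughly like:
  /tank/Media/ingest    -> mounted into the processing container
  /tank/Media/sync      -> mounted into the sync container
  /tank/Media/gallery   -> mounted into the display container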
In the container configs they look something like mp1: /<ZFS Storename>/<Media>,mp=/<shared>/<Media>.
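For illustration, a concrete mount point line would look roughly like this (the CT ID, the pool name "tank", and the folder names are hypothetical):
  # /etc/pve/lxc/101.conf on the Store Host
  mp1: /tank/Media,mp=/shared/Media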
My goal is to migrate the containers between my hosts and have them access the same files without any changes to their configs. Since containers don't play nicely with mounting NFS shares themselves (for security and other reasons), I don't want to fight with that setup. I also like the idea of simple management, with only a few places needing the NFS mounts so anything can move around. (Don't worry about backups; I have an offsite NAS that handles that.)
My idea is to do the NFS share mount on each of the smaller CrunchHosts, i.e. on the actual Proxmox host itself.
I would mount it at the same /<ZFS Storename>/<Media>/ path, pointing at an NFS export served by my Store Host. If a container is moved to one of the CrunchHosts, it would still be bound to the same directory and the permissions would be the same. Everything would look identical (minus some performance hit) and the move would be seamless.
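As a sketch of what I mean (the server name "storehost", the pool name "tank", the subnet, and the mount options are all just assumptions for the example):
  # /etc/exports on the Store Host
  /tank/Media 10.0.0.0/24(rw,sync,no_subtree_check)
  # /etc/fstab on each CrunchHost
  storehost:/tank/Media  /tank/Media  nfs  vers=4.2,hard,_netdev  0  0
That way the host path the containers bind-mount from is identical on all three hosts.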
Also, don't worry about the actual VMs; those will mount the NFS share directly. The network is 10Gb, so speeds should handle this.
Does this sound right? Is there a better way to do it? The CrunchHosts' ZFS sits on a 1TB NVMe, so any tips on NFS caching would be appreciated (links are fine if this is a common topic).
Thank you!
P.S. I did try to research this beforehand and could not find anything. I considered Ceph, but since the CrunchHosts only have one disk each to give to Ceph and I only have 3 hosts, I ruled Ceph out as not being stable enough to trust. I also posted to Reddit for help.