Container Mobility with Mount Points and NFS

ritkit

New Member
Sep 17, 2021
Hello all,

I wanted to bounce an idea off you all and see what people think about its feasibility and any possible issues.

TL;DR: Trying to let containers migrate to other hosts while still accessing files via a mount point backed by a ZFS store on the host they came from, with no changes to the container config itself.

Background:
3 hosts: 1 host (Store Host) is the primary storage and has all the hard drives; the other 2 hosts (CrunchHosts) are smaller and cheaper, sized for CPU count and memory needs.

I have three containers that process, sync, and display the media from my photography and videography, currently running on my big host. All 3 containers have a mount point to a folder in a ZFS share. The current centralized setup works: permissions work, files have been moved between the containers, and users come from LDAP.

All 3 hosts use the same name for their respective ZFS setup. So on the Store Host it is /<ZFS Storename>/<Media>/<All the folders for the 3 containers>. On the other hosts, /<ZFS Storename>/ is there, but I have not created the subfolders.

In the containers they are configured as MP1:/<ZFS Storename>/<Media>/,/<shared>/<Media> or something like that.
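For reference, a bind mount point in a Proxmox LXC config normally looks something like this (the pool name `tank`, container ID 101, and paths here are hypothetical stand-ins for the placeholders above):

```
# /etc/pve/lxc/101.conf  (hypothetical container ID)
# mpX: <path on the host>,mp=<path inside the container>
mp0: /tank/Media,mp=/shared/Media
```

The same entry can be set from the CLI with `pct set 101 -mp0 /tank/Media,mp=/shared/Media`. Because the host-side path is resolved on whichever node the container runs on, a migrated container looks for the same `/tank/Media` path on its new host.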

My goal is to migrate the containers between my hosts and have them access the same files without any changes to their configs. Since containers do not like mounting NFS shares themselves, for security and other reasons, I don't want to fight with that setup. I also like the idea of simple management, with only a few places needing the NFS mounts, so anything can move around. (Don't worry about backups; I have an offsite NAS that handles that.)

I considered doing the NFS share mount on each of the smaller CrunchHosts, at the level of the actual Proxmox host.

So I would mount the Store Host's NFS export at the same /<ZFS Storename>/<Media>/ path. If a container is moved to one of the CrunchHosts, it would still be bound to the same directory and the permissions would be the same. Everything would look the same (minus some performance hit) and it would be seamless.
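A sketch of what that host-level mount could look like on each CrunchHost, assuming a hypothetical hostname `storehost` for the Store Host and `tank` for the ZFS pool name:

```
# /etc/fstab on each CrunchHost (hostname and pool name are hypothetical)
# hard: block on server outage instead of returning I/O errors
# _netdev: wait for the network before mounting at boot
storehost:/tank/Media  /tank/Media  nfs  vers=4.2,hard,_netdev  0  0
```

With the export mounted at the same path on every host, the container's mount point entry resolves identically wherever the container lands.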

Also, do not worry about the actual VMs; those will mount the NFS directly. The network is 10Gb, so speeds should handle this.

Does this sound right? Is there a better way to do this? The CrunchHosts' ZFS sits on a 1TB NVMe, so any tips on NFS caching would be appreciated (links are fine if this is a common topic).
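One option for putting that NVMe to work (my assumption, not something you mentioned): the Linux kernel's FS-Cache can cache NFS reads on a local disk via the cachefilesd daemon and the `fsc` mount option. A minimal sketch, reusing the hypothetical `storehost`/`tank` names:

```shell
# Install and enable the FS-Cache userspace daemon (Debian-based Proxmox)
apt install cachefilesd
# In /etc/default/cachefilesd set RUN=yes; the cache directory
# (default /var/cache/fscache) should live on the local NVMe.
systemctl enable --now cachefilesd
# Mount the export with the fsc option so reads go through the local cache
mount -t nfs -o vers=4.2,hard,fsc storehost:/tank/Media /tank/Media
```

This only caches reads; writes still go synchronously to the Store Host, so it helps most with repeated access to the same media files.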

Thank you!

P.S. I did try to research this beforehand and could not find anything. I did consider Ceph, but since the CrunchHosts only have one disk each that could be given to Ceph, and I only have 3 hosts, I didn't trust that a Ceph cluster would be stable enough. I also posted to reddit for help.
 
Hey :)

I don't find your solution unreliable, but it seems very heavy for 3 containers :O

If I've understood your goal, you want:
2 nodes for CPU processing
1 node to store the shared data that is processed between your containers.

If that's the case, I'd guess that the root virtual disks of your containers are small, and that you do the actual work in the shared mount point.

Why not activate replication between your 2 CPU nodes? That would let you avoid a harder configuration, with new questions about your NFS cache, and simply replicate the containers between them via the built-in Proxmox functionality.
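The built-in replication mentioned above is configured per guest; a sketch of the CLI side, assuming a hypothetical container ID 101 and target node name `crunch2` (it requires the guest's disks to be on ZFS-backed storage):

```shell
# Create a storage replication job for container 101 to node "crunch2",
# running every 15 minutes (the job ID is <vmid>-<job number>)
pvesr create-local-job 101-0 crunch2 --schedule "*/15"
# Show the configured replication jobs and their status
pvesr list
```

This keeps a recent ZFS snapshot of the container's root disk on the other node, so a migration (or recovery after a node failure) only has to send the delta since the last run.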

This choice gives you more flexibility for your data host (and lets you use a dedicated storage OS for best performance: OpenMediaVault, etc.).

(sorry for my small english)
 
