Containers on GlusterFS?

t3chnode

New Member
Feb 24, 2021
I'm currently in the process of setting up GlusterFS across our three existing Proxmox nodes, but during testing I ran into the issue of not being able to move containers to Gluster. Is there any stable solution for hosting containers on Gluster? I'd rather not have to rebuild the containers as VMs.

For reference, each host has/will have:
2x ZFS-mirrored SSDs for the OS
5x 3 TB HDDs in RAIDZ2 (plus an SSD for read/write caching)
 
You would need to create a directory storage on top, but that comes with its own set of things to look out for.

With a 3-node cluster, why not use Ceph?
 
From what I understand of Ceph, there is a bit of a learning curve, and as it stands we already have another GlusterFS pool in use that is being deprecated and migrated. The IT manager would prefer Gluster over Ceph, as both of us are more familiar with Gluster. Is there a good guide to setting up directory storage on Gluster somewhere? Also, what kinds of things should we be looking out for?
 
For Proxmox LXC container storage on glusterfs, I use NFS-Ganesha and configure the storage in Proxmox as NFS. I installed NFS-Ganesha on the gluster servers holding the volumes with the LXC disk images. It works very well.
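In case it helps anyone reproduce this: a minimal NFS-Ganesha export for a Gluster volume looks roughly like the sketch below. The volume name "gvol" and the paths are placeholders, not the actual names from this setup.

    # /etc/ganesha/ganesha.conf -- minimal sketch, names are placeholders
    EXPORT {
        Export_Id = 1;                # unique ID per export
        Path = "/gvol";               # export path
        Pseudo = "/gvol";             # NFSv4 pseudo-filesystem path
        Access_Type = RW;
        Squash = No_root_squash;      # container disks need root access
        SecType = "sys";
        FSAL {
            Name = GLUSTER;           # from the nfs-ganesha-gluster package
            Hostname = "localhost";   # gluster server to connect to
            Volume = "gvol";          # gluster volume holding the LXC images
        }
    }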
 
For Proxmox LXC container storage on glusterfs, I use NFS-Ganesha and configure the storage in Proxmox as NFS. I installed NFS-Ganesha on the gluster servers holding the volumes with the LXC disk images. It works very well.
Using this setup, can you use live migration of containers? Could you maybe describe your setup a little bit more?
 
Using this setup, can you use live migration of containers?

Live? No. There is no live migration of LXC containers with Proxmox, on any storage. Any LXC container migration requires the container to shut down and restart.

However, by putting the disk images on NFS/Gluster, that shutdown/migrate/restart only takes 5 seconds or so, even on my slow/weak systems.
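(For reference: on newer Proxmox versions that whole shutdown/migrate/restart cycle can be done in one step with a restart-mode migration. A sketch, with a made-up container ID and target node:)

    # stops CT 101, migrates it, and starts it again on the target node
    pct migrate 101 pve2 --restart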

Could you maybe describe your setup a little bit more?

This is a cheap, bare-bones home lab setup. I'm running a pair of Proxmox PVE v5 machines as a 2-node cluster, on two old Intel boxes. I have 2 more disk servers on ARM SBCs (ODROID XU4 with USB-SATA adapters, running Armbian Ubuntu 20.04). All 4 machines run Gluster 7, plus NFS-Ganesha with nfs-ganesha-gluster to provide NFS access to the gluster volumes. Gluster works just fine in a mixed CPU architecture (x86/64 and ARM) cluster.

Each node is cheap and easily replaceable. All mirroring is done by Gluster; no local disk mirror/RAID is used on any machine. Each system has 1 or 2 SSDs and 1 or 2 mechanical hard drives.

I use a DNS round-robin name for Gluster and NFS access which includes all 4 server IPs. This is better than Proxmox's primary/secondary config entries for two reasons: (1) it lets you specify more than 2 servers available to answer requests, and (2) a host will always prefer itself when it is part of the RR group, eliminating network traffic when the gluster file it wants is on the same machine.
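To illustrate: the round-robin name is just an ordinary DNS name with multiple A records, and the Proxmox storage entry then points at that name. All names and addresses below are placeholders.

    ; BIND zone file snippet: one name, four A records, answered round-robin
    storage    IN A 192.168.1.11
    storage    IN A 192.168.1.12
    storage    IN A 192.168.1.13
    storage    IN A 192.168.1.14

    # /etc/pve/storage.cfg entry pointing at the round-robin name (sketch)
    nfs: gluster-nfs
        server storage.example.lan
        export /gvol
        path /mnt/pve/gluster-nfs
        content rootdir,images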

I mirror my gluster volume for the disk images across the two PVE servers, so one half of the mirror is always on the local machine, cutting network traffic to half of what it would otherwise be.

As a result, when Proxmox/LXC is accessing the container's disk image file on NFS, the Proxmox server is talking to NFS-Ganesha which then talks to Gluster, all on the same box. For reads, everything stays within the server, no network access required. For writes, half stays local and the other mirror half writes to the other gluster/PVE server.

Larger gluster volumes for general file storage/sharing are spread across all 4 servers. Since I have more than 2 servers, I configure those gluster volumes as replica 2+1 (simple mirror plus arbiter) plus distribute. It works great.
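For anyone curious, a distributed "replica 2 plus arbiter" volume like that would be created along these lines; hostnames and brick paths are placeholders:

    # two distribute subvolumes, each replica 3 with 1 arbiter brick
    # (data bricks on the PVE pair, metadata-only arbiter bricks elsewhere)
    gluster volume create gvol replica 3 arbiter 1 \
        pve1:/bricks/gvol1 pve2:/bricks/gvol1 odroid1:/bricks/gvol1-arb \
        pve1:/bricks/gvol2 pve2:/bricks/gvol2 odroid1:/bricks/gvol2-arb
    gluster volume start gvol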
 
Live? No. There is no live migration of LXC containers with Proxmox, on any storage. Any LXC container migration requires the container to shut down and restart.

However, by putting the disk images on NFS/Gluster, that shutdown/migrate/restart only takes 5 seconds or so, even on my slow/weak systems.
Sorry ... my fault ;)
I use a DNS round-robin name for Gluster and NFS access which includes all 4 server IPs. This is better than Proxmox's primary/secondary config entries for two reasons: (1) it lets you specify more than 2 servers available to answer requests, and (2) a host will always prefer itself when it is part of the RR group, eliminating network traffic when the gluster file it wants is on the same machine.
That's the interesting point. Thanks for your explanation. I'll try it myself in the next few days!
 
Thanks for all the comments. I'm currently still using the directory storage on gluster, but I'm looking forward to testing out NFS on top of gluster. This is my first time posting to the forum and you've all been very helpful. Much appreciated.
 
You would need to create a directory storage on top, but that comes with its own set of things to look out for.

With a 3-node cluster, why not use Ceph?
What do you mean by directory storage?
 
What do you mean by directory storage?
A storage of the type "directory" can be configured at a path of your choosing. Therefore, you can mount whatever file system you like first, even if it is not natively supported by Proxmox VE. In such a case, just make sure to point the path to the mountpoint and set the "is_mountpoint 1" option for that storage, so that Proxmox VE knows to only activate this storage if there is something mounted at this path.
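For GlusterFS specifically, that could look roughly like this; the volume name, server names, and paths are placeholders:

    # /etc/fstab: mount the gluster volume on each PVE node
    gluster1:/gvol  /mnt/glusterfs  glusterfs  defaults,_netdev,backupvolfile-server=gluster2  0  0

    # register the mountpoint as a directory storage that is only
    # activated when something is actually mounted there
    pvesm add dir glusterdir --path /mnt/glusterfs --content rootdir,images --is_mountpoint 1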
 
A storage of the type "directory" can be configured at a path of your choosing. Therefore, you can mount whatever file system you like first, even if it is not natively supported by Proxmox VE. In such a case, just make sure to point the path to the mountpoint and set the "is_mountpoint 1" option for that storage, so that Proxmox VE knows to only activate this storage if there is something mounted at this path.
Which part of the documentation is your answer based on?

1. Storage Backed Mount Points
  • Directories: passing size=0 triggers a special case where instead of a raw image a directory is created

or

2. Bind Mount Points
 
For Proxmox LXC container storage on glusterfs, I use NFS-Ganesha and configure the storage in Proxmox as NFS. I installed NFS-Ganesha on the gluster servers holding the volumes with the LXC disk images. It works very well.
Tell me, the documentation says that backups of containers are faster when they are on storage with snapshot support. What are your experiences with NFS?
 
Storages: https://pve.proxmox.com/pve-docs/chapter-pvesm.html#storage_directory
In such a storage, the container disks will be stored as .raw files.

Unless you set size=0. You should avoid this, though: it is still possible for backward compatibility, but it is not as well tested, and some newer features might not work with such container volumes.
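For completeness, that size=0 special case is triggered when allocating a mount point, e.g. (container ID, storage name, and path are made up):

    # size 0 creates a directory on the storage instead of a raw image
    # (discouraged; kept for backward compatibility, see above)
    pct set 101 -mp0 glusterdir:0,mp=/srv/data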

You seem to be referring to containers? Then you mean the first point of

Storage Backed Mount Points

  • Image based: these are raw images containing a single ext4 formatted file system.

I think you mean to mount the glusterfs in the host filesystem, then create a dir mount as a storage entry and use it for the container mount, which would create the raw file. Right?

I wrote a thread where I suggest installing PBS as a VM with the backup storage on a GlusterFS cluster. Now I'm looking for a solution to implement that with containers. Meanwhile, I've noticed that Proxmox PVE with GlusterFS is difficult for containers, because Proxmox does not bring integrated features like snapshots and cloning to GlusterFS. Backups of containers are therefore very slow. I hope to find a workaround to use containers with GlusterFS and all the good features of Proxmox.
 
Also, I see that containers are only fully functional on clustered storage with Ceph. This is awful. Unfortunately, Ceph doesn't support deduplication.
 
