[TUTORIAL] Create a Proxmox 6.2 cluster GlusterFS storage

tuathan

Member
May 23, 2020
See references [1] and [2]

Do the following on all nodes unless stated.

(a) Prepare disks

(Assumes the disk is /dev/sdd)

Code:
fdisk -l /dev/sdd
pvcreate /dev/sdd
vgcreate vg_proxmox /dev/sdd
lvcreate --name lv_proxmox -l 100%VG vg_proxmox
mkfs -t xfs -f -i size=512 -n size=8192 -L PROXMOX /dev/vg_proxmox/lv_proxmox
mkdir -p /data/proxmox
nano /etc/fstab
Append the following line:
/dev/mapper/vg_proxmox-lv_proxmox /data/proxmox xfs defaults 0 0
Code:
mount -a
mount | grep proxmox
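As a quick sanity check that the brick filesystem is mounted with the intended XFS settings, something like this should do (using the mount point from above):

Code:
df -h /data/proxmox
xfs_info /data/proxmox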

(b) Install GlusterFS

Code:
apt-get install glusterfs-server glusterfs-client
reboot now
systemctl start glusterd

Check the connection to the other Gluster server nodes:

Code:
gluster peer probe 10.X.X.X
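To verify that the probe worked and the nodes can see each other, check the peer status on any node (every other node should show up as connected):

Code:
gluster peer status
gluster pool list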

(c) Create and start the Gluster volume on a single node, replicating over 2 nodes (we'll add the 3rd separately below just to show the process of adding extra nodes)

Code:
gluster volume create gfs-volume-proxmox transport tcp replica 2 10.X.X.X:/data/proxmox 10.X.X.X:/data/proxmox force
volume create: gfs-volume-proxmox: success: please start the volume to access data

Code:
gluster volume start gfs-volume-proxmox
volume start: gfs-volume-proxmox: success
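Optionally, before adding any data, you can confirm that the brick processes are up on all nodes:

Code:
gluster volume status gfs-volume-proxmox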

Code:
gluster volume info
Volume Name: gfs-volume-proxmox
Type: Replicate
Volume ID: cfc9bff3-c80d-4ac6-8d6d-e4cd47fbfd8e
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.140.79.120:/data/proxmox
Brick2: 10.140.79.123:/data/proxmox
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

(d) To add more Gluster bricks

On a single PVE node do the following (see reference [3]):

Code:
gluster volume add-brick gfs-volume-proxmox replica 3 10.X.X.X:/data/proxmox force
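After adding the brick, the existing data still has to be replicated onto it by the self-heal daemon. A rough way to follow that (exact output depends on the Gluster version):

Code:
gluster volume heal gfs-volume-proxmox info
gluster volume info gfs-volume-proxmox

"Number of Bricks" in the volume info should now read 1 x 3 = 3.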

(e) Add Gluster storage to Proxmox

In Proxmox web interface select "Datacenter" > "Storage" > "Add" > "GlusterFS"

Enter an ID and two of the three server IP addresses (i.e. from above)

Enter the volume name from above (i.e. "gfs-volume-proxmox")

Select the Content.
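If you prefer the command line, the same storage can presumably also be added with pvesm; this is only a sketch, the storage ID "gfs-proxmox" is made up and the content types should be adjusted to what you actually need:

Code:
pvesm add glusterfs gfs-proxmox --server 10.X.X.X --server2 10.X.X.X --volume gfs-volume-proxmox --content images,iso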

(f) To configure Debian-based systems to automatically start the glusterd service every time the system boots, enter the following from the command line:

# systemctl enable glusterd

# echo "glusterd" >> /etc/rc.local

(Above doesn't seem to work)

Potential solution:

"I found this post with the solution:


  • Execute systemctl enable NetworkManager-wait-online
  • Add the following to /lib/systemd/system/crond.service under [Unit]:
    Requires=network.target
    After=syslog.target auditd.service systemd-user-sessions.service time-sync.target network.target mysqld.service

This will allow glusterd to be started after the network has come up."
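An alternative that is commonly used for this kind of ordering problem is a systemd drop-in for glusterd itself, so that it only starts once the network is fully online. This is just a sketch of that approach (not from the post quoted above):

Code:
# create a drop-in that orders glusterd after network-online.target
mkdir -p /etc/systemd/system/glusterd.service.d
cat > /etc/systemd/system/glusterd.service.d/override.conf <<'EOF'
[Unit]
Wants=network-online.target
After=network-online.target
EOF
systemctl daemon-reload
systemctl enable glusterd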





References:

[1] https://forum.proxmox.com/threads/h...d-setup-replica-storage-with-glusterfs.31695/

[2] https://icicimov.github.io/blog/vir...storage-to-Proxmox-to-support-Live-Migration/

[3] https://www.cyberciti.biz/faq/howto-add-new-brick-to-existing-glusterfs-replicated-volume/
 
Thank you for the tutorial! You can make code sections using [code]here is your code[/code]. The resources in the Gluster Documentation might also be interesting.

In case you're interested: when I tested Gluster last time, I ran something relatively similar to what you did. pveA, pveB, ... are nodes that have an IP address set in /etc/hosts, but using IP addresses directly might avoid some trouble.

Code:
# Requires names set in /etc/hosts

device=/dev/sdb
debian_version=buster

wget -O - https://download.gluster.org/pub/gluster/glusterfs/7/rsa.pub | apt-key add -
echo "deb [arch=amd64] https://download.gluster.org/pub/gluster/glusterfs/7/LATEST/Debian/$debian_version/amd64/apt $debian_version main" > /etc/apt/sources.list.d/gluster.list
apt-get update
apt full-upgrade -y
apt install glusterfs-server
systemctl enable glusterd
systemctl start glusterd

mkfs.xfs -i size=512 $device
mkdir -p /data/brick1
echo "$device /data/brick1 xfs defaults 1 2" >> /etc/fstab
mount -a && mount
mkdir -p /data/brick1/gv0

# required probing depends on node
gluster peer probe pveA
gluster peer probe pveB

gluster volume create gv0 replica 5 pveA:/data/brick1/gv0 pveB:/data/brick1/gv0 pveC:/data/brick1/gv0 pveD:/data/brick1/gv0 pveE:/data/brick1/gv0
gluster volume start gv0
Then you can add it via the GUI of Proxmox VE.
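Something I usually consider for VM images on Gluster, in case your glusterfs-server version ships the predefined option groups: the "virt" group applies a set of options tuned for virtualization workloads. Treat this as optional:

Code:
# applies the options from /var/lib/glusterd/groups/virt to the volume
gluster volume set gv0 group virt
gluster volume info gv0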
 
@Dominic:
PVE 6.2 uses the GlusterFS client in version 5.5-3, based on Debian Buster: https://packages.debian.org/buster/glusterfs-server
In your post, you're using v7.x https://download.gluster.org/pub/gluster/glusterfs/7/LATEST

So the server runs v7.x and the clients v 5.5-3. Can you give some version compatibility notes please.
This would be very useful for everyone who wants to connect from PVE to the GlusterFS server.

I have GlusterFS running with 3 nodes but can only select 2 servers from the Proxmox GUI.
Can I add server3 in /etc/pve/storage.cfg like this:

Code:
glusterfs: gluster1
        path /mnt/pve/gluster1
        volume volume1
        content none
        maxfiles 1
        server 192.168.110.221
        server2 192.168.110.222
        server3 192.168.110.223


thanks, Frank
 
I have GlusterFS running with 3 nodes but can only select 2 servers from the Proxmox GUI.
There is an open feature request to allow more than 2 servers. Currently, server (that is, server1) and server2 are hardcoded. With your idea I get the following:
Code:
➜  ~ qm create 124 --scsi0 gluster:10
file /etc/pve/storage.cfg line 55 (section 'gluster') - unable to parse value of 'server3': unexpected property 'server3'

So the server runs v7.x and the clients v 5.5-3. Can you give some version compatibility notes please.
When I tried this on PVE 6.2, the goal was to test (= basic VM creation, migration, backup...) both the Debian versions and the versions from download.gluster.org.
If I remember correctly the following combinations were working:
  1. Client 5 (Debian) and Server 5 (Debian)
  2. Client 5 (Debian) and Server 7 (gluster.org)
  3. Client 7 (gluster.org) and Server 7 (gluster.org)

But this is just off the top of my head. I would have to re-test it to be really sure (especially the 2nd combination).
 
Thanks Dominic, I will ask Red Hat about compatibility. In the worst case, I will install Proxmox on the Gluster nodes and use the same version that Debian Buster currently ships in its repo. That way I can be sure that the connected versions are the same.
 
If you want to be 100% sure right now, you can do something like the following:
  1. Create a virtual PVE cluster
  2. Create a snapshot of each PVE VM
  3. Install Gluster from Debian repos on each node (hyper-converged setup)
  4. Restore snapshot
  5. Install Gluster with a newer version from gluster.org repositories
or replace step 4 with another snapshot and upgrade in step 5
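To see which versions are actually talking to each other during such a test, a quick check on both sides helps:

Code:
# on the Gluster servers
glusterd --version
# on the PVE nodes (client side)
glusterfs --version
dpkg -l | grep -i gluster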
 
See references [1] and [2]

Do the following on all nodes unless stated.

(a) Prepare disks

(Assumes the disk is /dev/sdd)

Code:
fsidk -l /dev/sdd
pvcreate /dev/sdd
vgcreate vg_proxmox /dev/sdd
lvcreate --name lv_proxmox -l 100%vg vg_proxmox
mkfs -t xfs -f -i size=512 -n size=8192 -L PROXMOX /dev/vg_proxmox/lv_proxmox
mkdir -p /data/proxmox
nano /etc/fstab
Append the following line:
/dev/mapper/vg_proxmox-lv_proxmox /data/proxmox xfs defaults 0 0
Code:
mount -a
mount | grep proxmox

Thanks for the tutorial!
I think the
Code:
fsidk -l
should be
Code:
fdisk -l
...

Also, it would be good to update the initial brick creation step so that the brick is based on a thinly provisioned logical volume rather than the thick-provisioned logical volume that would be created above. The added bonus is that thinly provisioned logical volumes support Gluster snapshots: https://docs.gluster.org/en/latest/Administrator-Guide/formatting-and-mounting-bricks/
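For reference, a minimal sketch of what the thin-provisioned variant of step (a) could look like, loosely following the brick guide linked above (pool name, LV name and sizes are just placeholders):

Code:
pvcreate /dev/sdd
vgcreate vg_proxmox /dev/sdd
# thin pool using most of the VG (leave room for pool metadata)
lvcreate -l 90%VG --thinpool thinpool --chunksize 256K --zero n vg_proxmox
# thin LV for the brick; the virtual size is an example
lvcreate -V 500G --thin -n lv_proxmox vg_proxmox/thinpool
mkfs.xfs -f -i size=512 -n size=8192 /dev/vg_proxmox/lv_proxmox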
 
Hi and thanks for the tutorial.
I created a GlusterFS volume with 2 nodes and it works fine for the moment.
I have two questions:

1) What would happen if one of the two servers fails?
2) I have a third Proxmox node for quorum; can I use it as a Gluster arbiter without reinitializing the filesystem?

Thanks.
 
1) you're screwed. --> split brain, and screwed x10 if you have a "raid0" gluster.
2) if you have a "raid1" gluster, use the third server for the quorum with arbiter bricks.

Something like that

Code:
# after adding the third node

gluster volume add-brick Name_of_the_cluster replica 3 arbiter 1 srv1:/directory1 srv2:/directory2 srv3:/directory3_for_the_arbiter
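Whichever exact form you use, it should afterwards be possible to confirm that the arbiter brick was accepted and that the existing data heals onto it (rather than anything being lost), roughly like this:

Code:
gluster volume info Name_of_the_cluster    # the third brick should be listed as the arbiter
gluster volume heal Name_of_the_cluster info    # wait until the pending entries drop to 0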
 
1) you're screwed. --> split brain, and screwed x10 if you have a "raid0" gluster.
2) if you have a "raid1" gluster, use the third server for the quorum with arbiter bricks.

Something like that

Code:
# after adding the third node

gluster volume add-brick Name_of_the_cluster replica 3 arbiter 1 srv1:/directory1 srv2:/directory2 srv3:/directory3_for_the_arbiter

Not good :)

I have a raid1 volume on node1 and a raid1 volume on node2.

Can I add a brick and an arbiter without losing the volume/data?

If I understand correctly, the steps are:

1) add new brick with:
gluster volume add-brick gfs-volume-proxmox replica 3 10.X.X.X:/data/proxmox force

2) modify brick 3 to arbiter with:
gluster volume add-brick gfs-volume-proxmox replica 3 arbiter 1 srv1:/directory1 srv2:/directory2 srv3:/directory3_for_the_arbiter

Or directly step 2?

I found this tutorial in the Red Hat documentation, similar but with different commands:
Bash:
gluster volume add-brick VOLNAME replica 3 arbiter 1 HOST:arbiter-brick-path
 
1) you're screwed. --> split brain, and screwed x10 if you have a "raid0" gluster.
2) if you have a "raid1" gluster, use the third server for the quorum with arbiter bricks.

Something like that

Code:
# after adding the third node

gluster volume add-brick Name_of_the_cluster replica 3 arbiter 1 srv1:/directory1 srv2:/directory2 srv3:/directory3_for_the_arbiter
Hi man.
I don't understand whether it is possible to add a third arbiter node to an existing 2-node configuration without losing data.
 
