[TUTORIAL] Condition VM start on GlusterFS reachability

And when someone writes their public IP, the least you can do is warn them.
You haven't understood what I meant at all. If you look in his post, you will see that there is just an extra 1 (a typo) before that IP address.
So instead of:
10.10.5.241, 10.10.5.242,110.10.5.243
It should read:
Code:
10.10.5.241, 10.10.5.242,10.10.5.243

It's plain obvious from the sequence of addresses!


I assumed (wrongly) that you also understood this, but were being sarcastic with him.

Now that extra "1" has caused 4 posts!
 
You haven't understood what I meant at all. If you look in his post, you will see that there is just an extra 1 (a typo) before that IP address.

He is worried about that one Korean home broadband user that practices security through obscurity and now got busted. Sorry, I could not help it. ;)
 
He is worried about that one Korean home broadband user that practices security through obscurity and now got busted. Sorry, I could not help it. ;)
Koreans are not the most dangerous... nor the most sarcastic... :p
 
You haven't understood what I meant at all. If you look in his post, you will see that there is just an extra 1 (a typo) before that IP address.
So instead of:
10.10.5.241, 10.10.5.242,110.10.5.243
It should read:
Code:
10.10.5.241, 10.10.5.242,10.10.5.243

It's plain obvious from the sequence of addresses!


I assumed (wrongly) that you also understood this, but were being sarcastic with him.

Now that extra "1" has caused 4 posts!
We can also guess 110.10.5.241, 110.10.5.242,110.10.5.243.
But for one more post, the most sarcastic of all is surely you. :)
 
Hello rascals. For those who would be tempted by the adventure of integrating a viable GlusterFS solution on a Proxmox cluster, I will soon put together a summary document in markdown format.

This integration therefore provides:

1. A volume replicated across the 3 Proxmox nodes.
2. Autonomy with respect to Proxmox.
3. Continued operation with only 1 or 2 nodes remaining alive, with split-brain taken into account.
4. Preservation of the integrity of the GlusterFS volume.

For the skeptical rascals: before asking questions, I invite you to test first.
And many thanks to those who encouraged me. :)
 
As explained to @Dark26, the integration of GlusterFS on the 3 Proxmox nodes is done autonomously (independently of Proxmox, if you prefer). I do not use the GlusterFS storage feature you mention, but a simple directory storage (/gluster).

That you do not agree is a fact, but first you need to read what is written. It is a standalone integration of GlusterFS on a Proxmox cluster, so the integration is done through a simple directory storage as previously described.

On all Proxmox nodes:

Code:
apt install curl gpg

mkdir -p /mnt/distributed/brick0 /gluster

curl https://download.gluster.org/pub/gluster/glusterfs/11/rsa.pub | gpg --dearmor > /usr/share/keyrings/glusterfs-archive-keyring.gpg

DEBID=$(grep 'VERSION_ID=' /etc/os-release | cut -d '=' -f 2 | tr -d '"')
DEBVER=$(grep 'VERSION=' /etc/os-release | grep -Eo '[a-z]+')
DEBARCH=$(dpkg --print-architecture)
echo "deb [signed-by=/usr/share/keyrings/glusterfs-archive-keyring.gpg] https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/${DEBID}/${DEBARCH}/apt ${DEBVER} main" | tee /etc/apt/sources.list.d/gluster.list

apt update
apt install glusterfs-server
systemctl start glusterd
systemctl enable glusterd
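
Before probing peers, it is worth confirming that glusterd is actually running and that the CLI responds on every node; these are standard checks, nothing here is specific to this setup:
Code:
# on each node: daemon active and CLI responding
systemctl is-active glusterd
gluster --version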

I chose a replica 3 GlusterFS volume.

Code:
gluster peer probe <ip node proxmox  1>
gluster peer probe <ip node proxmox  2>
gluster peer probe <ip node proxmox  3>
gluster peer status
gluster pool list

gluster volume create gfs0 replica 3 transport tcp  <ip node proxmox  1>:/mnt/distributed/brick0 <ip node proxmox  2>:/mnt/distributed/brick0 <ip node proxmox  3>:/mnt/distributed/brick0
gluster volume start gfs0


# My tuning for my infra:

gluster volume set gfs0 cluster.shd-max-threads 4
gluster volume set gfs0 network.ping-timeout 5
gluster volume set gfs0 cluster.heal-timeout 10
gluster volume heal gfs0 enable
gluster volume set gfs0 cluster.quorum-type none
gluster volume set gfs0 cluster.quorum-reads false
gluster volume set gfs0 network.ping-timeout 30
gluster volume set gfs0 cluster.favorite-child-policy mtime
gluster volume heal gfs0 granular-entry-heal disable
gluster volume set gfs0 cluster.data-self-heal-algorithm diff

gluster volume set gfs0 auth.allow <ip networkA node >.*,<ip networkB node >.*
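
Once the volume is created and tuned, it can be verified that all three bricks are online and that the options took effect; these are standard commands for the gfs0 volume created above:
Code:
gluster volume info gfs0        # volume options and brick list
gluster volume status gfs0      # brick, port and online state
gluster volume heal gfs0 info   # pending heal entries per brick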

Create the service:
Code:
cat /etc/systemd/system/glusterfs-mount.service
[Unit]
Description=Mount GlusterFS Volume
After=network-online.target glusterd.service
Wants=network-online.target glusterd.service

[Service]
Type=idle
ExecStart=/bin/mount -t glusterfs localhost:/gfs0 /gluster
RemainAfterExit=yes
# Restart the service only on failure
Restart=on-failure
# Wait 10 seconds before attempting a restart
RestartSec=10
# Define a 10-minute (600-second) window for counting restart attempts
StartLimitIntervalSec=600
# Limit restart attempts to 10 within the defined window
StartLimitBurst=10

[Install]
WantedBy=multi-user.target

Code:
systemctl daemon-reload
systemctl enable glusterfs-mount.service
systemctl start glusterfs-mount.service
Code:
df -h

localhost:/gfs0 419G  4.2G  398G   2% /gluster

Code:
cat /etc/pve/storage.cfg
...
dir: gluster
        path /gluster
        content images
        prune-backups keep-all=1
        shared 1
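
For reference, the same directory storage can presumably also be created from the command line instead of editing storage.cfg by hand (a sketch; the storage name gluster matches the entry above):
Code:
pvesm add dir gluster --path /gluster --content images --shared 1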


Hi, were you able to get it to work properly? Meaning, if node 1 goes down, does node 2 come back up without any issues? How do the restart scripts know whether a node has the votes to bring the virtual machines up? I want to implement the Gluster directory within ZFS volumes. Cheers!
 
Hello @jesusdleguiza. Yes. When a node reboots, a service (gluster_auto_run_vm.service) checks whether the GlusterFS is healthy. If it is, it starts the node's VMs that carry the gluster tag. The disks of these VMs are of course declared on the gluster storage.

Be careful: your VMs must not be configured to start automatically when the node reboots; this must be disabled. It is the service (gluster_auto_run_vm.service) that manages their startup.
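
As a rough idea of the logic, a minimal sketch of what such a service could run is below; the script path, the health test and the tag check via qm config are assumptions here, not necessarily the exact code from the attached notes:
Code:
#!/bin/bash
# Hypothetical /usr/local/bin/gluster_auto_run_vm.sh, called by gluster_auto_run_vm.service.
# Starts this node's "gluster"-tagged VMs only once the volume is mounted and healthy.

VOLUME=gfs0
TAG=gluster

# Wait until /gluster is mounted, the volume answers, and no brick reports pending heal entries.
until mountpoint -q /gluster \
      && gluster volume status "$VOLUME" >/dev/null 2>&1 \
      && ! gluster volume heal "$VOLUME" info | grep -q 'Number of entries: [1-9]'; do
    sleep 10
done

# Start every local VM whose config carries the tag (qm only sees this node's VMs).
for vmid in $(qm list | awk 'NR>1 {print $1}'); do
    if qm config "$vmid" | grep -q "^tags:.*${TAG}"; then
        qm start "$vmid"
    fi
done

A oneshot unit ordered After=glusterfs-mount.service, with automatic start at boot left disabled on the VMs as said above, would then only have to call such a script.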

I strongly invite you to test. I have attached my notes (summaries) in markdown format.

And after testing that it works, I invite everyone to improve this solution. For example, the case where VMs are declared in HA.
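
One possible direction for that HA case, purely as a sketch and not something from the attached notes: have the start loop skip VMs that are already managed by Proxmox HA, so that this service and ha-manager do not both try to start the same guest.
Code:
# Hypothetical addition inside the start loop above: leave HA-managed VMs to ha-manager.
if ha-manager status 2>/dev/null | grep -q "vm:${vmid} "; then
    continue
fi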
 
But as some said at the beginning of this thread, this is not the ideal solution. The Proxmox team needs to improve the management of the integration of a GlusterFS cluster, as they did admirably well with Ceph.
 
My notes (summaries) in markdown format (updated).
 

Attachments

  • synthese.txt
    13.1 KB · Views: 4
Hello @jesusdleguiza. Yes. When a node reboots, a service (gluster_auto_run_vm.service) checks whether the GlusterFS is healthy. If it is, it starts the node's VMs that carry the gluster tag. The disks of these VMs are of course declared on the gluster storage.

Be careful: your VMs must not be configured to start automatically when the node reboots; this must be disabled. It is the service (gluster_auto_run_vm.service) that manages their startup.

I strongly invite you to test. I have attached my notes (summaries) in markdown format.

And after testing that it works, I invite everyone to improve this solution. For example, the case where VMs are declared in HA.
Great, I'm going to try your solution. What do you think about Linstor? It has a similar purpose.
 

Great, I'm going to try your solution. What do you think about Linstor? It has a similar purpose.
I don't know Linstor. But if it offers features to manage GlusterFS on Proxmox nodes, including replication and high availability, and if it is based on the same business model as Proxmox, then why not.
 
