unable to create glusterfs storage

Jeremy_M

New Member
Jul 17, 2019
Hello,
For testing before investing, I have a three-node Proxmox cluster (packages listed below) and I want to mount an external GlusterFS replica storage for my backups/ISOs/templates.
My GlusterFS volume is configured on two separate nodes running Debian 9, using the glusterfs packages included in the Debian repository.
When I try to mount the GlusterFS volume in the Proxmox web UI, I cannot browse the volume (no error message is shown), even though I use the correct IPs (or hostnames) of the Gluster nodes.
If I paste the name of my GlusterFS volume manually and confirm with the "Add" button, I get this message in the web UI:
Code:
create storage failed: error with cfs lock 'file-storage_cfg': mount error: Mount failed. Please check the log file for more details. (500)
In the log file I see the following, but I can't find any help referring to this problem:
Code:
root@pve1:~# tail -f /var/log/glusterfs/mnt-gluster-vol.log
+------------------------------------------------------------------------------+
[2019-07-17 12:11:48.445796] I [MSGID: 108006] [afr-common.c:5650:afr_local_init] 0-gfs-dmz-test-vol-replicate-0: no subvolumes up
[2019-07-17 12:11:48.445942] E [fuse-bridge.c:4336:fuse_first_lookup] 0-fuse: first lookup on root failed (Transport endpoint is not connected)
[2019-07-17 12:11:48.447623] W [fuse-bridge.c:897:fuse_attr_cbk] 0-glusterfs-fuse: 2: LOOKUP() / => -1 (Transport endpoint is not connected)
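Side note (not from the original logs): "no subvolumes up" together with "Transport endpoint is not connected" usually means the client cannot reach glusterd (TCP 24007) or the brick ports on the servers. A minimal reachability sketch, assuming the hostnames from this thread and a hypothetical `port_open` helper:

```shell
#!/usr/bin/env bash
# Hypothetical helper: exits 0 if a TCP connection to host:port can be opened.
port_open() {
  local host=$1 port=$2
  timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}

# glusterd listens on TCP 24007; the per-brick ports are listed by
# 'gluster volume status' and must also be reachable from the PVE nodes.
for host in gluster1 gluster2; do
  if port_open "$host" 24007; then
    echo "$host: glusterd reachable"
  else
    echo "$host: glusterd NOT reachable"
  fi
done
```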

No firewall is configured on either cluster (PVE & GlusterFS).

Can somebody help me, please?

Proxmox installed packages:
Code:
proxmox-ve: 6.0-2 (running kernel: 4.15.18-18-pve)
pve-manager: 6.0-4 (running version: 6.0-4/2a719255)
pve-kernel-5.0: 6.0-5
pve-kernel-helper: 6.0-5
pve-kernel-4.15: 5.4-6
pve-kernel-5.0.15-1-pve: 5.0.15-1
pve-kernel-4.15.18-18-pve: 4.15.18-44
pve-kernel-4.15.18-17-pve: 4.15.18-43
pve-kernel-4.15.18-10-pve: 4.15.18-32
ceph-fuse: 12.2.11+dfsg1-2.1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.10-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-2
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-5
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-61
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-5
pve-cluster: 6.0-4
pve-container: 3.0-3
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-5
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-qemu-kvm: 4.0.0-3
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-5
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve1

GlusterFS nodes:
Code:
root@gluster1:# cat /etc/debian_version
9.9
root@gluster1# glusterfs -V
glusterfs 3.8.8 built on Jan 11 2017 14:07:11

GlusterFS volume info:
Code:
root@gluster1# gluster volume info gfs-dmz-test-vol

Volume Name: gfs-dmz-test-vol
Type: Replicate
Volume ID: 29f747bb-962c-46fb-9a11-0a6dbee548aa
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster1:/data/gfs-dmz-test
Brick2: gluster2:/data/gfs-dmz-test
Options Reconfigured:
cluster.self-heal-daemon: enable
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
network.remote-dio: enable
cluster.eager-lock: enable
cluster.quorum-count: 1
cluster.quorum-type: fixed
cluster.server-quorum-type: server
network.ping-timeout: 5
performance.cache-size: 256MB
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
 
I've just rebuilt your setup (with the exception of a single PVE client instead of a cluster) and it worked fine for me. As a start you could reboot your PVE server(s): the running kernel should be 5.0, but yours is still 4.15.
 
Are you sure that your volume is started?
Code:
gluster volume start gfs-dmz-test-vol
 
I have the same issue.

Well, you are resurrecting a five-year-old thread with zero information about your situation. Do you really expect a helpful reply to your problem?

Please provide some information: the output of "pveversion -v", a description of your hardware, the number of nodes, where your bricks are, and how the network is set up.

Probably you know this link: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#storage_glusterfs

Generic questions:
  • What do you want to do?
  • What did you actually do? Did you follow some guides?
  • What did you expect to happen?
  • What happened instead? Full(!) error messages please.
And as always: please use [code]...[/code]-tags for better readability.

You know that Gluster is probably not future-proof? (AFAIK!) I would look at Ceph or ZFS with replication instead. What is your reason for choosing Gluster?

Disclaimer: I am NOT a Gluster user, so I probably cannot help at all...
 
I solved the problem just a few minutes ago.

The problem is: you can't(!) use only two gluster peers in your gluster cluster (for example, for lab testing). It appears to work: everything shows an "OK" status (peer OK, volume OK), but PVE will not add (connect) it as a datastore.

The "mount -t glusterfs gluster1:/gv0 /mnt/gluster-test" command wouldn't work either.

And this is why: you need three gluster peers. 3 nodes is the minimum required number of gluster nodes.
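Not from the thread itself, but the arithmetic behind the three-node recommendation can be sketched. Assuming the default server-quorum rule (more than 50% of peers must be up before bricks are kept running), a two-peer cluster loses quorum as soon as one peer is down, while three peers tolerate one failure:

```shell
#!/usr/bin/env bash
# Sketch of the ">50% of peers up" rule (default cluster.server-quorum-ratio).
has_quorum() {  # usage: has_quorum <peers_up> <peers_total>
  [ $(( $1 * 2 )) -gt "$2" ]
}

has_quorum 1 2 && echo "1 of 2 peers up: quorum" || echo "1 of 2 peers up: NO quorum"
has_quorum 2 3 && echo "2 of 3 peers up: quorum" || echo "2 of 3 peers up: quorum lost"
```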
 
Nope, it looks like "3 nodes is the minimum required" is not the issue.
The issue is that both test-lab PVE hosts in the cluster must be added to /etc/hosts or local DNS, even if they are already in the cluster. A second node joins an existing cluster via the "Join information", not by DNS name, and the nodes can communicate with each other without DNS or a hosts file. However, the Storage → Add → GlusterFS dialog maps the storage to the hosts using their short DNS names, which in my case come from /etc/hosts.

I had just reinstalled my lab and /etc/hosts was clean, so I added the missing lines to /etc/hosts and Add Storage worked. (The hosts file was empty because I had torn down the cluster earlier while trying to update the gluster client and server on the Proxmox VE hosts, which turned out not to be the cause.)
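For reference, this is the kind of entry that fixed it for me; the addresses below are hypothetical placeholders, use your own network:

```
# /etc/hosts on each PVE node (hypothetical addresses)
192.168.1.11  gluster1
192.168.1.12  gluster2
```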
 
