GlusterFS mount via web GUI and fstab, what's the difference?

Hello!

When I add Gluster storage from the Proxmox web GUI, I can install QEMU guests without any caching (cache=none). mount -l shows me this output:

Code:
stor1:HA-MED-PVE1-1T on /mnt/pve/HA-MED-PVE1-1T type fuse.glusterfs
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)



But in this case, if one server goes down (with an HA brick setup), Proxmox won't be able to read the configuration from either the first or the second server (Proxmox only knows where the currently running guests should fail over to) and is not able to create new machines using this kind of mount.

If I add a mount line like this to fstab:
Code:
stor1:HA-MED-PVE1-1T /mnt/pve/HA-MED-PVE1-1T glusterfs defaults,default_permissions,backupvolfile-server=stor2,direct-io-mode=enable,allow_other,max_read=131072 0 0

mount -l now shows:
Code:
stor1:HA-MED-PVE1-1T on /mnt/pve/HA-MED-PVE1-1T type fuse.glusterfs
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)


I'm getting this error when trying to start guests:

Code:
kvm: -drive
file=/mnt/pve/HA-MED-PVE1-1T/images/125/vm-125-disk-1.qcow2,if=none,id=drive-virtio0,format=qcow2,aio=native,cache=none:
file system may not support O_DIRECT
What could be the difference?
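
(Side note: as far as I understand, the error just means that a file on this mount cannot be opened with O_DIRECT, which cache=none needs. A tiny test like the one below, with the path from my setup, shows the same thing outside of qemu; just a sketch.)

Code:
/* Tiny check: can a file on this mount be opened with O_DIRECT?
 * qemu's cache=none relies on O_DIRECT. The path below is from my setup.
 * Build with: gcc -o odirect-test odirect-test.c
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/mnt/pve/HA-MED-PVE1-1T/images/125/vm-125-disk-1.qcow2";
    int fd = open(path, O_RDONLY | O_DIRECT);

    if (fd < 0) {
        perror("open with O_DIRECT");   /* FUSE mounts often return EINVAL here */
        return 1;
    }
    printf("O_DIRECT open works on this mount\n");
    close(fd);
    return 0;
}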
 
Oh, I think I understand now.
It comes down to how Proxmox defines the storage in storage.cfg and which library is used to read from and write to the storage; this is where libgfapi comes into the game.
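
If I'm reading it right, with the GUI-defined gluster storage Proxmox hands the disk to kvm as a gluster:// URL (served by libgfapi) instead of a plain file path on the FUSE mount, so the -drive argument would look roughly like this (same volume and host names as above, my guess at the exact form):

Code:
-drive file=gluster://stor1/HA-MED-PVE1-1T/images/125/vm-125-disk-1.qcow2,if=none,id=drive-virtio0,format=qcow2,aio=native,cache=none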

OK, is there any plan to allow adding a failover server in the Proxmox GUI, so it would work more like the backupvolfile-server option in fstab?

Why would we need it?

First off: I'm speaking of GlusterFS HA bricks.

It is true that the GlusterFS client only needs to know one server to get the information about the second one for failover in case of failure. During the adding process it downloads the whole configuration file, including the second server's address and name. No problem with that, it really works in Proxmox at the moment.

BUT.

One adds a GlusterFS storage with only one server name in the configuration. When that server goes down, Proxmox is not able to find the storage anymore. The running VMs fail over to the second server successfully, but one is not able to use the storage: Proxmox says "storage not available" (and it really is not, since the configured server is down). So Proxmox should know about the second server to fail over to when the first one is down. Otherwise one has to stop all VMs, delete the damaged storage server from the Proxmox GUI, add the other one (with the second name), then start all VMs and continue using the storage. That is the point.
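
To illustrate: the GUI-created definition in /etc/pve/storage.cfg carries only a single server entry, roughly like this (names from my setup, fields from memory):

Code:
glusterfs: HA-MED-PVE1-1T
        volume HA-MED-PVE1-1T
        server stor1
        content images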
 
I know those limits, but if you read

# man mount.glusterfs

it does not mention the option 'backupvolfile-server',

and qemu 'glusterfs:/' URLs do not allow specifying a backup server.

So I am unsure how to implement that. Maybe one could use DNS and set up two address records for the gluster server?
 
https://access.redhat.com/documenta...ap-Administration_Guide-GlusterFS_Client.html
Some information about backupvolfile-server can be found there.

I think this has nothing to do with qemu 'glusterfs:/' URLs. At the moment I see the problem as Proxmox not being able to find the second server. Maybe some kind of check should be implemented: if server1 is down, use server2 from the config file, or something like that. The qemu part works fine. Proxmox loses the storage as storage, not the VMs; the VMs keep working during the outage (they fail over to the alive node).
 


AFAIK glusterfs does not offer any utility to test whether a gluster server is online. Just tell me if you know one?
Besides, I think this should be implemented inside qemu with 'glusterfs:/' URLs (like sheepdog does).
 
Just reading /usr/include/glusterfs/api/glfs.h, function glfs_set_volfile_server():

NOTE: This API is special, multiple calls to this function with different
volfile servers, port or transport-type would create a list of volfile
servers which would be polled during `volfile_fetch_attempts()`

So it should be possible to pass 2 servers - someone just needs to implement that with qemu ...
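
A rough sketch of what that could look like (not the actual qemu code, just an illustration using the volume and host names from this thread):

Code:
/* Sketch: register a primary and a backup volfile server for one volume.
 * Per the note in glfs.h, repeated calls build up a list of volfile
 * servers that is polled when the volfile is fetched.
 * Build with: gcc -o two-servers two-servers.c -lgfapi
 */
#include <stdio.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    glfs_t *fs = glfs_new("HA-MED-PVE1-1T");
    if (!fs)
        return 1;

    /* primary first, then the backup - both end up in the server list */
    glfs_set_volfile_server(fs, "tcp", "stor1", 24007);
    glfs_set_volfile_server(fs, "tcp", "stor2", 24007);

    if (glfs_init(fs) != 0) {                  /* fails only if no server answers */
        fprintf(stderr, "no volfile server reachable\n");
        glfs_fini(fs);
        return 1;
    }

    printf("connected to volume\n");
    glfs_fini(fs);
    return 0;
}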
 
I wish I had chosen to go into development in my university years :D
Please post an update when the .debs are available in the pvetest repository.
 
I'm not sure if I got the patch right, but it seems that it supports a single "backup server" only? This sounds fine in a simple 2-way replicated scenario, but Gluster has many more redundant topologies, like distributed-replicated on more than 2 nodes, or N-way replication (which does not make much sense in itself but is possible), or more complex, large topologies. Any of the Gluster servers in a fault-tolerant cluster can be a potential "backup" server, e.g. the one the VMs can be migrated to.
 
Thanks! I'll give them a try.
I think a simple ping test is more than enough.
In general, one installs Gluster and Proxmox in the same separate subnet (10G in my case), so it is isolated and no DNS or other services are running there. So it is pretty obvious that if a server does not answer, it is down.
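
Something along those lines should already do, e.g. simply trying a TCP connect to the glusterd port (24007 by default) instead of an ICMP ping; a rough sketch:

Code:
/* Crude reachability probe: try a TCP connect to glusterd (port 24007 by
 * default). If the connect fails, treat the server as down. Just a sketch;
 * real code would add a timeout instead of blocking.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

static int server_reachable(const char *host, const char *port)
{
    struct addrinfo hints, *res, *rp;
    int ok = 0;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(host, port, &hints, &res) != 0)
        return 0;

    for (rp = res; rp != NULL; rp = rp->ai_next) {
        int fd = socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol);
        if (fd < 0)
            continue;
        if (connect(fd, rp->ai_addr, rp->ai_addrlen) == 0) {
            ok = 1;             /* glusterd answered on this address */
            close(fd);
            break;
        }
        close(fd);
    }
    freeaddrinfo(res);
    return ok;
}

int main(int argc, char **argv)
{
    const char *host = argc > 1 ? argv[1] : "stor1";
    printf("%s is %s\n", host, server_reachable(host, "24007") ? "up" : "down");
    return 0;
}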
Thank you. I'll report back here later.
 
Code:
Setting up pve-manager (3.2-29) ...
Restarting PVE Daemon: pvedaemonno such method 'PVE::API2::register_page_formatter'
Compilation failed in require at /usr/bin/pvedaemon line 16.
BEGIN failed--compilation aborted at /usr/bin/pvedaemon line 16.
(warning).
Restarting PVE API Proxy Server: pveproxyno such method 'PVE::API2::register_page_formatter'
Compilation failed in require at /usr/bin/pveproxy line 24.
BEGIN failed--compilation aborted at /usr/bin/pveproxy line 24.
(warning).
Restarting PVE SPICE Proxy Server: spiceproxy.
Restarting PVE Status Daemon: pvestatd.
root@sisemon:~# service pveproxy status
Usage: /etc/init.d/pveproxy {start|stop|restart|force-reload}
root@sisemon:~# service pveproxy start
Starting PVE API Proxy Server: pveproxyno such method 'PVE::API2::register_page_formatter'
Compilation failed in require at /usr/bin/pveproxy line 24.
BEGIN failed--compilation aborted at /usr/bin/pveproxy line 24.
(warning).


Failed to start up the Proxmox web UI.
Looking for a way to roll back :)

For now I have commented out those lines... is there any more humane way to get the original packages back?
 
Hm, I just can't get rid of pve-manager 3.2-29. I've removed the repository and ran apt-get clean... it still installs 3.2-29 with the broken pveproxy.
 
OK, got it back. But if I'd like to use the new functionality with a second server, what should my steps be so I don't break Proxmox? :)
 

The pvetest repository is for testing only! You need to upgrade all packages, or use a stable repository.
 
Sad.
But at least it worked with pve-qemu-kvm only :)
OK, I'll wait until it goes out to stable.
 
