Proxmox GlusterFS performance tests


Oct 16, 2018
São Paulo - Brazil

This message is to ask if anybody else can share their experience with Gluster as Proxmox shared storage.

I decided to give GlusterFS a try, and the performance was really bad at first.

As I have only one environment where I can test, all the tests use the same machines and only the configuration changes.

There are 4 physical machines: 2 Proxmox nodes and 2 NAS servers:

PVE-01 PVE-02 NAS-01 NAS-02

I normally run VMs "hosted" on NAS-02, which is newer and faster; NAS-01, which is older and slower, is a backup server.

Both NAS servers run OMV ( OpenMediaVault 4.x ), configured to provide NFS and CIFS.

PVE-01 and PVE-02 are connected to NAS-02 using NFS, and VM performance is good.

I installed GlusterFS on both NAS machines and configured it as replicated ( it was really easy ).
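For reference, a two-brick replicated volume like this can be set up with the gluster CLI roughly as follows. The brick path /data/brick1 is a placeholder, not taken from my setup; the block is guarded so it is a no-op on machines without the gluster CLI.

```shell
# Minimal sketch of a 2-way replicated volume (hypothetical brick paths).
if command -v gluster >/dev/null 2>&1; then
  gluster peer probe gpeer-01.XYZ                       # join the other NAS
  gluster volume create gluster01 replica 2 \
      gpeer-01.XYZ:/data/brick1 \
      gpeer-02.XYZ:/data/brick1                         # one brick per node
  gluster volume start gluster01
else
  echo "gluster CLI not found; commands shown for reference only"
fi
```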

There is a separate network with jumbo frames enabled to interconnect NAS-01 NAS-02 gluster "brick" traffic.
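To confirm jumbo frames actually work end-to-end on that brick network, the usual check is an unfragmentable ping with the largest payload that fits a 9000-byte MTU ( 9000 minus 20 bytes of IP header and 8 bytes of ICMP header ). A small sketch, guarded so the live ping only runs when you opt in via the RUN_PING variable:

```shell
# 9000-byte MTU minus 28 header bytes (20 IP + 8 ICMP) = 8972-byte payload.
PAYLOAD=$((9000 - 28))
echo "max unfragmented ICMP payload for MTU 9000: $PAYLOAD"

# -M do forbids fragmentation, so this fails loudly if any hop has a
# smaller MTU. Opt-in only: set RUN_PING=1 on a node on the brick network.
if command -v ping >/dev/null 2>&1 && [ -n "${RUN_PING:-}" ]; then
  ping -c 3 -M do -s "$PAYLOAD" gpeer-01.XYZ
fi
```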

I know there is no 3rd node ( arbiter ), but this is a test environment on a local LAN.

As the next step, I configured Gluster storage from the PVE-01 Datacenter view and restored a VM backup to this Gluster storage.
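For context, the Datacenter-level Gluster storage ends up as an entry in /etc/pve/storage.cfg along these lines ( storage name and content type here are illustrative ):

```
glusterfs: gluster01
        server gpeer-01.XYZ
        server2 gpeer-02.XYZ
        volume gluster01
        content images
```

The same entry can be created from the CLI with `pvesm add glusterfs gluster01 --server gpeer-01.XYZ --server2 gpeer-02.XYZ --volume gluster01 --content images`.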

The VM works, but very slowly.

After some more tests, I decided to change the NAS configuration and created an OMV NFS share on top of the "local" GlusterFS mount on NAS-01.

I removed the Gluster configuration from the PVE-01 Datacenter and created an NFS storage pointing to NAS-02.

Again I restored a VM backup, and it works as fast as the previous NFS share backed by local disks.

Then I decided to test the Gluster storage from the PVE-01 Datacenter again, and the performance was really bad.

I returned to NFS on top of GlusterFS ( OMV's kernel NFS, not NFS-Ganesha ), and performance is back to normal.

Can anyone comment or shed some light on what else can be done to diagnose why the GlusterFS client connection seems to be so slow?

A direct connection using the gluster client is supposed to be faster than, or at least as fast as, the NFS connection.

A very important point here is that I'm running GlusterFS version 6.3.1, and I upgraded gluster-client and gluster-common on both PVE nodes.

So when I created the Gluster storage from the PVE-01 Datacenter, it was running the new version of gluster-client.

By default, PVE ships gluster-client version 3.8.8, which is not compatible with gluster-server version 6.3.1.
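A quick way to confirm the client and server versions actually match on each node ( mixing a 3.8 client with a 6.x server is exactly the kind of thing that misbehaves ):

```shell
# Report the installed gluster client version on this node.
if command -v glusterfs >/dev/null 2>&1; then
  glusterfs --version | head -n 1
else
  echo "glusterfs client not installed on this node"
fi

# On Debian-based nodes (PVE, OMV), compare installed vs candidate packages.
if command -v apt-cache >/dev/null 2>&1; then
  apt-cache policy glusterfs-client glusterfs-common
fi
```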

It is also important to note that I always checked that the files were created in GlusterFS. This is true both for the PVE direct connection and for the PVE NFS connection.


GlusterFS is using shards:

root@nas-02:~# gluster volume info
Volume Name: gluster01
Type: Replicate
Volume ID: c82432fc-41ec-422f-8898-9b9fa3ce3b3f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Brick1: gpeer-01.XYZ
Brick2: gpeer-02.XYZ
Options Reconfigured:
features.shard-block-size: 32MB
features.shard: enable
client.event-threads: 2
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on
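For anyone comparing against the defaults above: upstream Gluster ships a "virt" option group that is commonly suggested for VM-image workloads. I have not verified these on this particular setup, so treat the following as a sketch to experiment with, not a fix:

```shell
# Options from Gluster's "virt" group, often recommended for VM images.
# Guarded: a no-op on machines without the gluster CLI.
if command -v gluster >/dev/null 2>&1; then
  gluster volume set gluster01 performance.quick-read off
  gluster volume set gluster01 performance.read-ahead off
  gluster volume set gluster01 performance.io-cache off
  gluster volume set gluster01 performance.stat-prefetch off
  gluster volume set gluster01 network.remote-dio enable
else
  echo "gluster CLI not found; commands shown for reference only"
fi
```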

This is the mount used by OMV ( via the remote mount package ) to access Gluster on the local node:

gpeer-02:/gluster01 on /srv/9733531f-71aa-42d1-b456-0d00e9942a89 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,allow_other,max_read=131072,_netdev)
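The equivalent manual FUSE mount, with a fallback volfile server so the mount still comes up if one NAS is down, would look roughly like this ( the mountpoint /mnt/gluster01 is illustrative ):

```shell
# Mount the volume directly with the FUSE client; backup-volfile-servers
# lets the client fetch the volfile from the other peer if gpeer-02 is down.
if command -v mount.glusterfs >/dev/null 2>&1; then
  mkdir -p /mnt/gluster01
  mount -t glusterfs \
    -o backup-volfile-servers=gpeer-01.XYZ \
    gpeer-02.XYZ:/gluster01 /mnt/gluster01
else
  echo "glusterfs FUSE client not installed; command shown for reference"
fi
```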

Thank you for your time and attention in reading this.

Ricardo Jorge

I am in no way associated with any companies and/or projects mentioned in this message.

Thank you @Dominic for the update.

At least for me, the way GlusterFS "balances" the nodes is too slow to use it as online storage.

For VM backups and other backup needs it's a good price/safety option, especially when you consider remote ( geographically distributed ) nodes.

It's also easy to configure, maintain and upgrade.


Ricardo Jorge
At least for me, the way GlusterFS "balances" the nodes is too slow to use it as online storage.

Have you tried Ceph already? We did a benchmark with it not too long ago.
I used GlusterFS last year on one of our OpenStack clusters, and the performance is good as long as there is no problem on any brick node. My experience is that if you are going to use GlusterFS with large RAW files, be aware that it will freeze your brain when a file heal occurs on one of your brick nodes: depending on the number of VMs and the size of the guest disk images, it can take DAYS for GlusterFS to recover from a split-brain or node failure. Also, don't use any kind of file storage over the network if you don't have a reliable network ( 10 Gbit or faster ).
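For anyone who ends up in that situation, heal progress can at least be watched from the CLI so the wait is visible rather than a black box ( guarded sketch, volume name taken from the earlier post ):

```shell
# Watch self-heal progress after a brick outage or split-brain.
if command -v gluster >/dev/null 2>&1; then
  gluster volume heal gluster01 info summary           # pending entries per brick
  gluster volume heal gluster01 statistics heal-count  # heal backlog counters
  gluster volume heal gluster01 info split-brain       # files needing manual action
else
  echo "gluster CLI not found; commands shown for reference only"
fi
```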

If I could give you one piece of advice, I would say: move to block storage instead, like Ceph RBD.

