GlusterFS on single server

liptech
Hello everyone!

I'm setting up GlusterFS in my Proxmox lab, aiming to provide shared storage from a single node (all the disks are in one server running the GlusterFS server), with the other nodes connecting to that server to use its storage.

The basic GlusterFS configuration was successful: the other nodes can connect, and the storage is accessible and performs well.
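For readers following along, a minimal sketch of that kind of single-brick setup (the hostname stor1, volume name gvol0, brick path /data/brick1, and address 10.0.10.3 are all assumptions, not the poster's actual values):

    # On the storage node: create and start a single-brick volume
    # ("force" is needed if the brick sits on the root filesystem)
    gluster volume create gvol0 stor1:/data/brick1/gvol0 force
    gluster volume start gvol0

    # On each Proxmox node: register the volume as shared storage
    pvesm add glusterfs gluster-store --server 10.0.10.3 --volume gvol0 --content images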

However, I’m facing an issue when using the QCOW2 format for the VM disk images. When I select QCOW2, the VM data gets corrupted. On the other hand, when I use the RAW format, everything works perfectly.

I’d like to use the QCOW2 format because it allows for snapshots, which is not possible with the RAW format.

I’ve already tried using ZFS and EXT4 as the underlying file systems for GlusterFS, but the problem still persists.
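For reference, one avenue worth checking before giving up on qcow2 (a hedged pointer, not a confirmed fix): qcow2 metadata is sensitive to client-side caching, and upstream Gluster ships a "virt" option group intended for VM image workloads. A sketch, again assuming a volume named gvol0 and a hypothetical image path:

    # Apply the upstream "virt" profile tuned for VM images;
    # it mainly disables client-side performance caches
    # (read-ahead, io-cache, quick-read, stat-prefetch)
    gluster volume set gvol0 group virt

    # Check an affected image from the mounted volume
    qemu-img check -f qcow2 /mnt/pve/gluster-store/images/100/vm-100-disk-0.qcow2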

Can anyone help me with this issue?
 
Sorry, Gluster is simply the wrong tool for a single server node - @spirit's suggestion of NFS is just simpler and more reliable.

If you just use Gluster to glue together some devices/bricks: mdadm exists and is rock solid, with or without redundancy.
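A minimal mdadm sketch on Debian/Proxmox, assuming two spare disks /dev/sdb and /dev/sdc:

    # Create a two-disk RAID1 mirror
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    # Put a filesystem on it and persist the array across reboots
    mkfs.ext4 /dev/md0
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u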

(Personally I am a ZFS fanboy for several reasons and would go that route of course.)
 
mdadm is not supported by the proxmox developers though: https://pve.proxmox.com/wiki/Software_RAID#mdraid

But of course (since Proxmox is basically Debian) it can be used for this.
Another question: are you planning to use the same directory as storage for each single-node cluster? That is problematic, since it can lead to overlapping VMIDs.
I would set up the system like this: a disk server running either a NAS OS or Proxmox VE with a NAS OS VM (in the latter case you would need a dedicated HBA PCI card for TrueNAS), with dedicated NFS, iSCSI, or ZFS-over-iSCSI shares for each single cluster node (neither protocol is secure, so make sure your lab network is separated from the rest of the world), and use these as shared storage on the nodes.
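To make the NFS variant concrete, a minimal sketch (the paths and addresses are hypothetical; disk server at 192.168.1.10, first cluster node at 192.168.1.11):

    # /etc/exports on the disk server: one export per cluster node
    /srv/share/node1  192.168.1.11(rw,sync,no_subtree_check,no_root_squash)

    # Reload the exports, then register the share on the Proxmox node
    exportfs -ra
    pvesm add nfs nfs-node1 --server 192.168.1.10 --export /srv/share/node1 --content images,iso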
Hope that helps, Johannes.
 

@Johannes S
My infrastructure consists of 4 machines and 1 managed switch:

  • Machines 1 and 2: Each has three gigabit network cards: MGMT, Storage, and VMs.
  • Machine 3: It has four gigabit network cards, with one dedicated to MGMT and three aggregated (bonded) for Storage; see the bond sketch below.
  • Machine 4: It functions as a backup server (Proxmox Backup Server - PBS) and has three gigabit network cards, with Storage using two interfaces.
Machines 1 and 2 connect to Machine 3 to access storage.
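For the three-NIC storage bond on Machine 3, a sketch of /etc/network/interfaces in Proxmox style (interface names and the address are assumptions; 802.3ad/LACP needs a matching port-channel on the managed switch, and note that a single TCP stream still tops out at one link's speed):

    auto bond0
    iface bond0 inet static
        address 10.0.10.3/24
        bond-slaves enp1s0 enp2s0 enp3s0
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4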

I’m considering migrating to NFS because of the issues with Gluster, which isn’t delivering the versatility and speed I expected. Machine 3 also has a disk controller I could use with mdadm.

Thanks, everyone, for the suggestions!
 

Attachments: Captura de tela de 2024-10-27 19-59-53.png · Captura de tela de 2024-10-27 19-59-07.png
Why don't you just set up a simple NFS server?
Large companies often opt for more complex solutions, and I enjoy studying these, as you never know who your next boss might be. That said, I do use NFS for simpler setups, though I find its performance somewhat limited for more demanding tasks.
 
Regarding ZFS: I'm actually already using ZFS behind Gluster precisely to take advantage of its features, and the combination has been efficient. The one issue I'm facing is specifically with qcow2 files.
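One hedged workaround for the snapshot requirement: keep the VM disks as raw on the Gluster volume and snapshot the ZFS dataset that backs the brick instead. The dataset name tank/brick1 is an assumption, and this captures every image on the brick at once, not a single VM:

    # Snapshot the dataset backing the Gluster brick
    zfs snapshot tank/brick1@pre-upgrade
    zfs list -t snapshot tank/brick1

    # Roll back if needed (discards everything written after the snapshot)
    zfs rollback tank/brick1@pre-upgrade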
 
