Container on gluster volume not possible?

Dark26

Hello,

It's not possible to create a container on GlusterFS storage via the GUI. Is there a reason for this?

I found posts from years ago saying that it would be slow, or something like that.

Can someone confirm?

Is it possible to bypass this in the config file?

PS: for the record, my Gluster volume is on SSD.

Actually, the only solution I have found is to enable NFS on the Gluster volume and mount it that way. But if the main server is down, it no longer works.

Thanks
 
I found posts from years ago saying that it would be slow, or something like that.
Yes, when we tested it over a year ago it was horribly slow:
roughly a factor of 10 slower than with QEMU.
 
Thanks for the quick response.

Is there a way, just for testing, to get the choice in the GUI? A line in a config file to comment or uncomment?

I will try to modify the file directly, but live migration will be difficult that way.

Thanks for your support.

PS: I tried modifying the config file directly, and it doesn't work.
 
You can try adding rootdir in the file /usr/share/perl5/PVE/Storage/GlusterfsPlugin.pm, around line 101.

That line should look like this:

content => [ { images => 1, vztmpl => 1, iso => 1, backup => 1},

Change it to:

content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, backup => 1},

I'm not 100% sure if this is enough, but it should work for testing.
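
Note that the Proxmox API daemons keep the Perl modules loaded, so the GUI will most likely not pick up the change until they are restarted. A minimal sketch, assuming a systemd-based PVE installation:

# reload the modified storage plugin (assumption: restarting pvedaemon/pveproxy is enough)
systemctl restart pvedaemon pveproxy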
 
I modified the line, but it's still the same. Do I have to restart a daemon, and if so, which one?

thanks
 
I tried again; I wasn't on the right server the first time, but it doesn't matter.

I also modified the file /usr/share/perl5/PVE/LXC.pm,

so now I have:

if ($scfg->{type} eq 'dir' || $scfg->{type} eq 'nfs' || $scfg->{type} eq 'glusterfs') {
    if ($size_kb > 0) {

The GUI now lets me create a container on the Gluster volume, and I can see the .raw file, but it can't format it:

mkfs.ext4 -O mmp -E 'root_owner=0:0' gluster://192.168.1.171:/ProxmoxHD/images/245/vm-245-disk-3.raw
mke2fs 1.43.4 (31-Jan-2017)
The file gluster://192.168.1.171:/ProxmoxHD/images/245/vm-245-disk-3.raw does not exist and no size was specified.



Task viewer: CT 245 - Create
Formatting 'gluster://192.168.1.171/ProxmoxHD/images/245/vm-245-disk-3.raw', fmt=raw size=2147483648
mke2fs 1.43.4 (31-Jan-2017)
The file gluster://192.168.1.171/ProxmoxHD/images/245/vm-245-disk-3.raw does not exist and no size was specified.
TASK ERROR: command 'mkfs.ext4 -O mmp -E 'root_owner=0:0' gluster://192.168.1.171/ProxmoxHD/images/245/vm-245-disk-3.raw' failed: exit code 1


But the file is there (you can see my 3 attempts):

root@prox7:/mnt/pve/ProxmoxHDG/images/245# ls -lah
total 8,0K
drwxr----- 2 root root 4,0K janv. 29 21:32 .
drwxr-xr-x 19 root root 4,0K janv. 29 21:19 ..
-rw------- 1 root root 1,0G janv. 29 21:19 vm-245-disk-1.raw
-rw------- 1 root root 1,0G janv. 29 21:21 vm-245-disk-2.raw
-rw------- 1 root root 2,0G janv. 29 21:32 vm-245-disk-3.raw

Any idea?
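
For what it's worth, mkfs.ext4 only understands local paths and block devices; the gluster:// URL syntax is something only QEMU's Gluster block driver speaks, which would explain why the image exists on the FUSE mount but mkfs cannot open the URL. A possible workaround, sketched and untested, using the FUSE mount path from the listing above:

# untested sketch: format the image through the FUSE mount path instead of the gluster:// URL
mkfs.ext4 -O mmp -E 'root_owner=0:0' /mnt/pve/ProxmoxHDG/images/245/vm-245-disk-3.raw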
 
You can also try it as a Directory storage and use the GlusterFS mount point.
 
You can also try it as a Directory storage and use the GlusterFS mount point.

I think I will try this, but with this method, how is migration (live or not) between hosts handled?

Is it possible? Because I think with local storage, migration (at least live migration) is not possible.
 
You have to set the shared flag on the storage.
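
A minimal sketch of what such an entry in /etc/pve/storage.cfg might look like, assuming the Gluster volume is already mounted at the same path on every node (the storage ID below is illustrative, not from this thread):

# sketch: directory storage on top of an existing GlusterFS mount, marked as shared
dir: glusterdir
    path /mnt/pve/ProxmoxHDG
    content images,rootdir
    shared 1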
 
Well, I think local storage won't give me what I want.

I have 2 Proxmox nodes (Debian installs), and on each one a GlusterFS server running in a replica 2 configuration. I want to be able to migrate all the machines to one server and shut down the other. With local storage, even with the shared flag, I don't see how that can work.

I don't know how Proxmox handles migration when the Gluster volume is mounted via fstab on each node.

But I think I found an alternative.

I mount the Gluster volume over NFS, after enabling the NFS option on the Gluster volume,

but instead of using the IP address or the name of one of the two servers, I created an alias name in /etc/hosts on both nodes that points to 127.0.0.1.

So I have this:

nfs: ssd_emmc_nfs
    export /ssd_emmc
    path /mnt/pve/ssd_emmc_nfs
    server glusterct
    content backup,vztmpl,iso,images,rootdir
    maxfiles 1
    options vers=3

In /etc/hosts on each node I have:
127.0.0.1 localhost glusterct

and with this, each node mounts the NFS export from itself.
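
For reference, enabling NFS on a Gluster volume as described above would, assuming the built-in Gluster NFS server (gNFS) is what is being used here, look roughly like this (volume name taken from the config above):

# assumption: Gluster's built-in gNFS server; enable NFS exports for the volume
gluster volume set ssd_emmc nfs.disable off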
 
I mount the Gluster volume over NFS, after enabling the NFS option on the Gluster volume,

but instead of using the IP address or the name of one of the two servers, I created an alias name in /etc/hosts on both nodes that points to 127.0.0.1.
Dark26, is this solution working for you?

What do you have in your fstab for the mount point?
 
I have nothing in fstab; the shares are mounted by Proxmox directly.

This is what I have in /etc/pve/storage.cfg:

glusterfs: ssd_emmc
    path /mnt/pve/ssd_emmc
    volume ssd_emmc
    content images
    server 192.168.1.171
    server2 192.168.1.141

nfs: ssd_emmc_nfs
    export /ssd_emmc
    path /mnt/pve/ssd_emmc_nfs
    server glusterct
    content images,backup,iso,rootdir,vztmpl
    maxfiles 1
    options vers=3



And on each server, in /etc/hosts, I have this line:

127.0.0.1 localhost glusterct


Hope this helps.
 
Well, I use it at home, so it's not a good test for production.

I have had an issue, and I don't know which part is at fault (hardware / Gluster / Proxmox): the system freezes when accessing the storage, and that blocks the whole node.

I think it's a storage problem.
 
At the moment I mainly use VMs, but I have been thinking about containers, which is why I ask. A whole node blocking sounds strange. Is Gluster healthy?
 
Yesterday I moved all the containers to a different Gluster volume; so far so good.

I will try again later to put them back on my eMMC storage...
 
You can try (I tried it in my lab at home) to install nfs-ganesha-gluster on top of the Gluster server.

So far so good for me.
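
For anyone wanting to reproduce this, a rough sketch of the setup, with package names from Debian and an export block written from memory (parameter names and values are assumptions, check the NFS-Ganesha documentation):

# install NFS-Ganesha with the Gluster FSAL
apt install nfs-ganesha nfs-ganesha-gluster

# /etc/ganesha/ganesha.conf export block, roughly:
EXPORT {
    Export_Id = 1;
    Path = "/ssd_emmc";
    Pseudo = "/ssd_emmc";
    Access_Type = RW;
    Squash = No_root_squash;
    FSAL {
        Name = GLUSTER;
        Hostname = "localhost";
        Volume = "ssd_emmc";
    }
}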
 
