GlusterFS: storage: invalid format - storage ID '' contains illegal characters

mr.x

Hi all,

first of all: Proxmox is great! Keep up the good work!

While executing a backup task, I see the following error message:
Code:
Parameter verification failed.  (400)

storage: invalid format - storage ID '' contains illegal characters

I couldn't find anything in the mailing list or on the forum, so I'm trying here.
What strikes me as strange is the empty storage ID.
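
If I understand the parameter verification correctly, the API simply rejects an empty storage ID, so the GUI apparently submits an empty storage parameter. If I'm not mistaken, something like this reproduces a similar 400 on the CLI (VMID 100 is just a placeholder):

Code:
# hypothetical repro - 100 is a placeholder VMID
vzdump 100 --storage ''
# expected: 400 Parameter verification failed.
#   storage: invalid format - storage ID '' contains illegal characters

For reference, my package versions: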

Code:
pveversion -v
proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-21 (running version: 3.1-21/93bf03d4)
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.1-8
pve-firmware: 1.0-23
libpve-common-perl: 3.0-8
libpve-access-control: 3.0-7
libpve-storage-perl: 3.0-17
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-4
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.1-1

I set up a GlusterFS system with two nodes in replicate mode. Details below.

Code:
Server1:/var/log/glusterfs# gluster volume info

Volume Name: datastore
Type: Replicate
Volume ID: 3dcd805e-d289-443d-9cba-5bd03269c0b5
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: backup1:/data/gfs_block
Brick2: backup2:/data/gfs_block
Options Reconfigured:
diagnostics.latency-measurement: on
diagnostics.count-fop-hits: on

and

Code:
Server1:/var/log/glusterfs# gluster volume status
Status of volume: datastore
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick backup1:/data/gfs_block                           49153   Y       414959
Brick backup2:/data/gfs_block                           49153   Y       852056
NFS Server on localhost                                 2049    Y       415221
Self-heal Daemon on localhost                           N/A     Y       415228
NFS Server on backup2                                   2049    Y       852134
Self-heal Daemon on backup2                             N/A     Y       852141

There are no active volume tasks
GlusterFS itself is OK and working.
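
For completeness, the quick health checks I used to convince myself of that (plain gluster commands, nothing Proxmox-specific):

Code:
# peers reachable?
gluster peer status
# any files pending self-heal?
gluster volume heal datastore info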

GlusterFS was successfully integrated via the GUI. Here is the storage.cfg of one of the clients:

Code:
Server4:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content images,iso,vztmpl,rootdir
        maxfiles 0

glusterfs: backup1glusterfs
        volume datastore
        path /mnt/pve/backup1glusterfs
        content backup
        server 10.3.2.112
        maxfiles 10
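
As far as I can tell, the GUI fills the storage selector from the same storage API that pvesm uses, so I also checked from the CLI (backup1glusterfs is the storage ID from storage.cfg above):

Code:
# storage should show up as active
pvesm status
# list the backups stored on it
pvesm list backup1glusterfs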

The mount point also looks good:
Code:
10.3.2.112:datastore on /mnt/pve/backup1glusterfs type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
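
A simple write test on the mount also succeeds, so plain file access through FUSE seems fine (the file name is arbitrary):

Code:
touch /mnt/pve/backup1glusterfs/.write_test && rm /mnt/pve/backup1glusterfs/.write_test && echo write ok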

Does anybody know where the issue is located? On the mailing list I found a patch, but that was only for restoring VMs.
Thanks for any help!

Br
Mr.X
 
Upgrade to the latest 3.2 and test again.

There were some GlusterFS fixes (see the changelogs).
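
You can check the installed changelog locally, e.g. (assuming the usual Debian doc path for the storage library):

Code:
zcat /usr/share/doc/libpve-storage-perl/changelog.Debian.gz | head -n 30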
 
Hi,

is there a way to see the GlusterFS-related changes/commits in git?

Br
Mr.X
 
Hi Tom,

thanks for the link. But I think this bug is related to the ExtJS or Perl code used in the Proxmox GUI, as the storage ID is also empty when I use local storage.
Since I do not have access to the commercial version, I cannot install 3.2 to check whether the bug is gone. I would like to help and tried to figure out where the issue lies, but the link you provided points to the general Wheezy glusterfs-client changelog, not the Proxmox changelog.

BR
Mr.X
 
3.2 is available in all our repositories, see http://pve.proxmox.com/wiki/Package_repositories
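
For PVE 3.x that is, for example, the no-subscription repository (see the wiki page above for details):

Code:
# /etc/apt/sources.list.d/pve-no-subscription.list
deb http://download.proxmox.com/debian wheezy pve-no-subscription

# then upgrade
apt-get update && apt-get dist-upgrade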

Proxmox VE source code is on https://git.proxmox.com
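
The GlusterFS storage plugin lives in the pve-storage repository there, if I'm not mistaken, so its history can be inspected like this:

Code:
git clone git://git.proxmox.com/git/pve-storage.git
cd pve-storage
git log --oneline -- PVE/Storage/GlusterfsPlugin.pm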
 
Hi Tom,

ok, thanks again.
Will update to 3.2 and keep you posted.

Br
Jan
 
Hi Tom,

I've done the upgrade to 3.2-121, but it's still the same: the select box for the storage ID in the backup view is still grey and empty, and I cannot select any storage device.

The strange thing is that I run 8 servers, and 6 of them have this issue with the grey storage select box.
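
To narrow it down, I compared what the storage API returns on a working node and on a failing one, since I assume the select box is filled from there:

Code:
# run on both a working and a failing node and diff the output
pvesh get /storage
pvesh get /nodes/$(hostname)/storage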

Any ideas how to fix it?


Br
Mr.X
 
Hi,

even an update to Debian testing, to get newer GlusterFS packages, does not help.
In addition, I now get daily mails with:

Code:
/etc/cron.daily/mlocate:
/usr/bin/updatedb.mlocate: `/var/lib/mlocate/mlocate.db' is locked (probably by an earlier updatedb)
run-parts: /etc/cron.daily/mlocate exited with return code 1

The forum shows this has happened before, and Google points in the direction of an issue with FUSE mounts (and/or sshfs), like the one GlusterFS uses, if I'm not mistaken.
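
My assumption is that updatedb hangs while indexing the FUSE mount and then leaves the database locked, so excluding the GlusterFS mount from indexing should avoid it (fuse.glusterfs is the fs type from the mount output above):

Code:
# check the current exclude lists
grep -E '^PRUNEFS|^PRUNEPATHS' /etc/updatedb.conf
# then add the fs type (or the mount point) to them, e.g.:
#   PRUNEFS="... fuse.glusterfs"
#   PRUNEPATHS="... /mnt/pve"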

BR
Mr.X