Shared GlusterFS not expanding after additional drives

Adi M

Active Member
Mar 1, 2020
My Proxmox Virtual Environment is version 5.4-13. As external storage I use GlusterFS.
Now I have added new drives to get to 6 TB, as described elsewhere.
Output of df -Th on the storage node sto02:
Code:
Filesystem                   Type     Size  Used  Avail Use%  Mounted on
    ...
/dev/mapper/sto02--vg-pool01 ext4      5.4T    2.5T  2.7T   48% /data/pool01

This should be correct.
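
For anyone following along: growing an LVM-backed ext4 brick like this one typically looks roughly like the following. The new drive name /dev/sdX is just a placeholder; the VG and LV names are taken from the df output above.

Code:
# rough sketch, not the exact commands used here
pvcreate /dev/sdX                            # initialise the new drive for LVM
vgextend sto02-vg /dev/sdX                   # add it to the existing volume group
lvextend -l +100%FREE /dev/sto02-vg/pool01   # grow the logical volume
resize2fs /dev/sto02-vg/pool01               # grow the ext4 filesystem online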

Now in the web interface I see the wrong (old) summary:

[screenshot: storage summary in the web interface still showing the old size]
Do I have to reload anything in Proxmox, or how else can I fix this?
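
(In case it helps for diagnosis: pvesm status on the PVE node should report the same totals as the GUI, so it can confirm whether Proxmox itself still sees the old size.)

Code:
# run on the PVE node; lists all storages with total/used/available
pvesm status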

EDIT:
On pve01 (as the client) I ran df -Th and got
Code:
Filesystem                 Type           Size  Used  Avail Use%  Mounted on
    ...
192.168.201.20:share01     fuse.glusterfs  2.7T    2.5T   89G   97% /mnt/pve/share02

So the volume is not sharing the full size of the disks. What am I missing?
 
On your Gluster Server, what is the output of gluster volume status all detail?
 
At the moment, only sto02 (the first brick) is expanded:

Code:
Status of volume: share01
------------------------------------------------------------------------------
Brick                : Brick sto02.wmc.sto:/data/pool01/share
TCP Port             : 49152
RDMA Port            : 0
Online               : Y
Pid                  : 1462
File System          : ext4
Device               : /dev/mapper/sto02--vg-pool01
Mount Options        : rw,noatime,nobarrier,errors=remount-ro,commit=60,stripe=128,data=writeback
Inode Size           : 256
Disk Space Free      : 2.9TB
Total Disk Space     : 5.3TB
Inode Count          : 362954752
Free Inodes          : 362762974
------------------------------------------------------------------------------
Brick                : Brick sto01.wmc.sto:/data/pool01/share
TCP Port             : 49152
RDMA Port            : 0
Online               : Y
Pid                  : 1460
File System          : ext4
Device               : /dev/mapper/sto01--vg-pool01
Mount Options        : rw,noatime,nobarrier,errors=remount-ro,commit=60,stripe=128,data=writeback
Inode Size           : 256
Disk Space Free      : 203.1GB
Total Disk Space     : 2.6TB
Inode Count          : 179814400
Free Inodes          : 179630138

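Since only the first brick has grown, it might also be worth checking the volume layout; if share01 is a replicated volume, the size visible to clients follows the smaller brick. For example:

Code:
# shows volume type (replicate/distribute), brick list and options
gluster volume info share01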
EDIT:
Interesting. I stopped all CTs and VMs and shut down the first storage node (sto01 in my case, in preparation for expanding its storage). After a while I started the CTs and VMs again to force them to switch to the secondary storage. Now I see the correct size.
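
(For next time, a less disruptive approach might be to unmount the GlusterFS mount on the PVE node while no guests are using it, so the client rereads the size without restarting everything. Untested here; paths and volume name are taken from the df output above.)

Code:
# run on the PVE node while the storage is idle
umount /mnt/pve/share02
# Proxmox usually remounts the storage on its own; otherwise mount it manually:
mount -t glusterfs 192.168.201.20:share01 /mnt/pve/share02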
 