Replacing disk on GlusterFS storage in Proxmox

headwhacker

Member
Apr 20, 2021
I have a 3-node Proxmox cluster. The same nodes host a GlusterFS volume set up as a 1 x 3 replicated volume. Each host's brick is backed by 3 SATA SSDs managed by LVM and formatted as XFS.

On each node, one of the SSDs is wearing out faster than the other two. On one host it is already above 80%, and judging by the trend it may be a couple of weeks, a month at most, before the wear level reaches 100%. So I need to prepare to replace each of those drives before that happens.
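For reference, one way I keep an eye on the wear level is smartmontools (assuming it is installed; the exact attribute name varies by SSD vendor, and /dev/sdb is just an example device):

smartctl -a /dev/sdb | grep -i -E "wear|percent"   # look for e.g. Wear_Leveling_Count or Percentage Used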

I have not done this before and am looking for suggestions on how to do it safely and quickly. So far I see two approaches.

1. At the brick level. For each host, remove the brick from the volume, replace the disk, then add the brick back. Repeat until the worn-out disk on every node has been replaced (see the sketch after this list).

2. At the LVM level. Add the new disk to the volume group on each node, migrate the data from the old disk to the new one (pvmove), then remove the old disk from the volume group (vgreduce/pvremove).
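For reference, the brick-level approach would look roughly like this on the Gluster side (a sketch only; the volume name gv0, hostname node1 and brick path /bricks/brick1 are placeholders, and the new filesystem has to be mounted at the same path before the commit):

gluster volume reset-brick gv0 node1:/bricks/brick1 start              # take the old brick offline
# ...replace the disk, recreate the filesystem, remount it at the same path...
gluster volume reset-brick gv0 node1:/bricks/brick1 node1:/bricks/brick1 commit force
gluster volume heal gv0                                                # let the other replicas heal the new brick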

Which one is the better approach? Or is there a better option than the two I am considering?
 
GlusterFS (the server side) is not part of the Proxmox VE stack (only client access is), which is why you see limited answers here.

Better to ask at https://www.gluster.org/
 
Thanks. I managed to replace the bad disk while the cluster was online. I just followed the usual way disks are managed/replaced with LVM.

For quick reference, these are the steps I took to replace the disk in my setup (a concrete example follows the list).

1. pvcreate /dev/<new disk>
2. vgextend <volumegroup> /dev/<new disk>
3. pvmove /dev/<old disk> /dev/<new disk>
4. vgreduce <volumegroup> /dev/<old disk>
5. pvremove /dev/<old disk>
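
As a concrete sketch, assuming the worn disk is /dev/sdb, the replacement is /dev/sdd and the volume group is called gluster_vg (adjust to your layout):

pvcreate /dev/sdd              # initialize the new disk as a physical volume
vgextend gluster_vg /dev/sdd   # add it to the existing volume group
pvmove /dev/sdb /dev/sdd       # move all extents off the old disk; safe to run while mounted
vgreduce gluster_vg /dev/sdb   # drop the old disk from the volume group
pvremove /dev/sdb              # clear the LVM label so the disk can be pulled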

Optional steps, in case the new disk is larger than the old one and you want to allocate the extra space to the logical volume:
6. lvresize -L +<size of free space on the new disk> /dev/<volumegroup>/<lv_name>

Note: the free space is what you see under "Free PE / Size" when you run vgdisplay.

Finally, since my volume is formatted with XFS, I ran this command:
7. xfs_growfs /dev/<volumegroup>/<lv_name>
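
Continuing the same sketch, assuming the logical volume is gluster_vg/gluster_lv and the brick is mounted at /bricks/brick1 (placeholders): instead of working out the exact free size, you can also hand all remaining free extents to the LV with -l +100%FREE.

lvresize -l +100%FREE /dev/gluster_vg/gluster_lv   # or -L +<size> as in step 6
xfs_growfs /bricks/brick1                          # grow the mounted XFS filesystem on the brick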

GlusterFS will automatically recognize the changes and adjust accordingly.
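
If you want to double-check, the per-brick capacity is visible from Gluster itself (volume name and mount point are examples):

gluster volume status gv0 detail   # shows total/free disk space per brick
df -h /bricks/brick1               # confirm the brick filesystem grew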
 
I use ZFS underneath GlusterFS, so replacing disks is quite a bit less painful than with LVM. I don't use replica 3 since I think it's a waste of space, so I'm running distributed dispersed volumes.
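
For comparison, a ZFS-backed brick replacement is basically a single resilver (pool and device names are placeholders):

zpool replace tank /dev/sdb /dev/sdd   # attach the new disk and resilver onto it
zpool status tank                      # watch the resilver progress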
 
