Messed up with extending LVM with software RAID 5 config

fennecus

Hi guys
I started to do something but fear I have messed it up:
I run a Dell server with 2x 600 GB SSDs in a hardware RAID 1 config and 3x 12 TB HDDs in a software RAID 5 config. The SSDs carry the Proxmox VE installation and the HDDs are for the VMs. Now I added a 16 TB HDD and wanted to expand my software RAID 5. I was googling around with increasing confusion. Finally I followed this explanation: https://www.tecmint.com/extend-and-reduce-lvms-in-linux/. Here's what I already did:
1. Inserted the new HDD into an empty slot (the disk was immediately recognised)
2. Partitioned the new HDD using fdisk (fdisk -c /dev/sdd) and created a partition spanning the whole disk
3. Created a new PV using pvcreate (pvcreate /dev/sdd1) (this was probably the mistake??)
4. Checked with pvs:
[screenshot: pvs output]
(Note: it seems to me that besides the RAID 5 pool (md0), another "vmspace" entry was created based on the new HDD sdd1?)
5. Extended the volume group with vgextend (vgextend vmspace /dev/sdd1)
6. Checked with vgs:
[screenshot: vgs output]
7. pvscan:
[screenshot: pvscan output]
8. I then wanted to expand the logical volume with lvextend, which resulted in the following error:
[screenshot: lvextend error]
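For reference, here are the commands from the steps above in one place (the exact lvextend arguments are only in the screenshot, so they are left out here):

fdisk -c /dev/sdd            # step 2: one partition over the whole disk
pvcreate /dev/sdd1           # step 3: PV directly on the single partition
vgextend vmspace /dev/sdd1   # step 5: the single-disk PV joins the RAID-backed VG
lvextend ...                 # step 8: fails with the error shown above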

Thanks for any inputs on how to finish the process - the current lsblk gives the following output:
[screenshot: lsblk output]
 
Step 2 wasn't necessary, while step 3 was really bad: you put a single device into the LVM volume group vmspace that is backed by the RAID 5, so if your 16 TB HDD dies, your volume group dies with it. For now the new HDD may still be empty, and these are the steps to take in general (a rough command sketch follows below):
Have an actual backup of your VMs and PVE (hopefully not needed).
Remove sdd1 from vmspace with lvremove.
Add sdd or sdd1 to your md0 with mdadm.
pvscan so vmspace sees the new, bigger md0.
Resize your logical volume with lvextend.
cat /proc/mdstat ?
lvs ?
lvdisplay ?
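
In commands, steps 2 and 3 would look roughly like this - a sketch only, please check it against your own setup first. Since sdd1 was added to the VG with vgextend and carries no LV, the way back out is vgreduce plus pvremove rather than lvremove:

vgreduce vmspace /dev/sdd1               # undo the vgextend - take the stray PV out of the VG
pvremove /dev/sdd1                       # then drop the LVM label from the partition
mdadm --add /dev/md0 /dev/sdd            # add the new disk to the array (joins as a spare first)
mdadm --grow --raid-devices=4 /dev/md0   # reshape the RAID 5 from 3 to 4 devices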
 
Thanks waltar... appreciate it!
I already fail at step 1 of your solution:

lvs:
[screenshot: lvs output]
and lvdisplay:
[screenshots: lvdisplay output]

So there is no sdd1 existing within vmspace, hence the following error:
[screenshot: resulting error]
 
pvremove /dev/sdd1                                  # drop the LVM label from the stray partition
dd if=/dev/zero of=/dev/sdd bs=1024k count=10240    # zero the first 10 GiB to wipe the old partition table and signatures
mdadm --add /dev/md0 /dev/sdd                       # add the whole disk to the array
mdadm --grow --raid-devices=4 /dev/md0              # grow the RAID 5 from 3 to 4 devices
cat /proc/mdstat                                    # watch the reshape progress
pvscan
pvs
vgs
 
Thanks... I didn't do lines 1 & 2 (so continued with sdd1)
but did the rest, and now it is growing the RAID with an estimated 2484 minutes remaining... so talk to you next week...
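In the meantime the reshape progress can be watched with something like this (just a convenience, not required):
watch -n 60 cat /proc/mdstat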
Thank you very much so far!
 
Your other 3 HDDs didn't have a partition "1" either (see cat /proc/mdstat); with sdd1 you might end up unaligned and carry an unneeded partition label.
The remaining commands are still all doable without waiting for the RAID rebuild to finish.
But fine, it's done.
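
Once the reshape has finished, the remaining part looks roughly like this - the LV name is only a placeholder, take the real one from your lvs output:

pvresize /dev/md0                               # let the PV grow into the enlarged array
lvextend -l +100%FREE /dev/vmspace/<your_lv>    # placeholder LV name - check lvs first
lvs                                             # confirm the new size
# if the LV carries a filesystem instead of raw VM storage, grow that too (e.g. resize2fs for ext4)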
 
