local-lvm troubles

Sharpist

New Member
Nov 30, 2025
Hello All,

First time poster here.

I have two hosts in a cluster.
The machine I just added to the cluster has its RAID controller on /dev/sda and Proxmox on /dev/sdb.
The local-lvm that was made during install was only 40 GB, so I want to move it to the /dev/sda RAID, which is 5 TB.

I found this out when I tried to migrate a VM over.

I deleted local-lvm and then remade it; however, it's not seeing pve/data.

I have tried making an lvm-thin, but from the web interface it gets added with a volume group name of data instead of pve.
I also tried adding the LV from the command line with lvcreate, without any luck.

Can somebody point me in the right direction?
 
Why is the name important? Also please share

Bash:
lsblk -o+FSTYPE,LABEL,MODEL
pvs
vgs
lvs
 
Hi, thanks for the reply.

As for the name: if it's not local-lvm, VMs won't migrate between hosts, will they?

Code:
root@pve1:/etc/pve# lsblk -o+FSTYPE,LABEL,MODE
NAME                  MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS FSTYPE      LABEL MODE
sda                     8:0    0   5.2T  0 disk                               brw-rw----
└─sda1                  8:1    0   5.2T  0 part             LVM2_member       brw-rw----
  ├─data-data_tmeta   252:3    0  15.9G  0 lvm                                brw-rw----
  │ └─data-data       252:5    0   5.2T  0 lvm                                brw-rw----
  └─data-data_tdata   252:4    0   5.2T  0 lvm                                brw-rw----
    └─data-data       252:5    0   5.2T  0 lvm                                brw-rw----
sdb                     8:16   0 119.2G  0 disk                               brw-rw----
├─sdb1                  8:17   0  1007K  0 part                               brw-rw----
├─sdb2                  8:18   0     1G  0 part /boot/efi   vfat              brw-rw----
└─sdb3                  8:19   0 118.2G  0 part             LVM2_member       brw-rw----
  ├─pve-swap          252:0    0     8G  0 lvm  [SWAP]      swap              brw-rw----
  ├─pve-root          252:1    0  39.5G  0 lvm  /           ext4              brw-rw----
  └─pve-data          252:2    0  70.6G  0 lvm                                brw-rw----

root@pve1:/etc/pve# pvs
  PV         VG   Fmt  Attr PSize    PFree
  /dev/sda1  data lvm2 a--    <5.24t 376.00m
  /dev/sdb3  pve  lvm2 a--  <118.18g       0
root@pve1:/etc/pve# vgs
  VG   #PV #LV #SN Attr   VSize    VFree
  data   1   1   0 wz--n-   <5.24t 376.00m
  pve    1   3   0 wz--n- <118.18g       0
root@pve1:/etc/pve# lvs
  LV   VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data data twi-a-tz--  5.20t             0.00   0.23
  data pve  -wi-a----- 70.63g
  root pve  -wi-ao---- 39.54g
  swap pve  -wi-ao---- 8.00g
 
This is hard to read without code blocks. local-lvm is just the storage name. It can refer to whatever VG/LV you want.
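The mapping lives in /etc/pve/storage.cfg. As a sketch of what such an entry could look like (the thinpool/vgname values below are assumptions matching your data VG, not a verified config):

```
lvmthin: local-lvm
        thinpool data
        vgname data
        content rootdir,images
        nodes pve1
```

The ID local-lvm is what the cluster sees; the vgname/thinpool underneath it can be anything.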
 

Yep, but if it doesn't match the other nodes in the cluster you cannot migrate VMs, and the other node is set up like that.

My idea was to get the RAID (/dev/sda1) to be a thin volume named data in the pve volume group, which should fix the issue, and that's where I'm stuck.
 
You can't, unless you add the disk to the volume group, which I do not recommend.
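A possible workaround instead (a sketch, not tested here; the storage ID data-thin and VM ID 100 are placeholders) is to register the existing data/data thin pool as its own storage entry limited to this node, and name it as the target storage when migrating:

```shell
# Register the existing thin pool as a storage entry, visible only on pve1
pvesm add lvmthin data-thin --vgname data --thinpool data \
    --content rootdir,images --nodes pve1

# From the other node, send a VM's disks to that storage during migration
qm migrate 100 pve1 --targetstorage data-thin --online
```

That avoids needing identically named VGs on both nodes, at the cost of picking the target storage explicitly.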
That is the conclusion I came to... I think I will rebuild the node and make sure the partitioning of the RAID is compatible with what I need.

Thanks for your help
 
I recommend ZFS if possible. With it you can use replication as well, making migration very fast and HA possible.
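A rough sketch of that route (assumptions: /dev/sda can be wiped, and the pool name tank, storage ID local-zfs, target node pve2, and VM ID 100 are placeholders; replication needs a zfspool storage with the same name on both nodes):

```shell
# Create a single-disk pool on the RAID device (this destroys its contents!)
zpool create -o ashift=12 tank /dev/sda

# Register it as ZFS storage in Proxmox
pvesm add zfspool local-zfs --pool tank --content rootdir,images

# Schedule replication of VM 100 to the other node every 15 minutes
pvesr create-local-job 100-0 pve2 --schedule "*/15"
```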
 