Repartitioning local LVM drive possible?

RT3

New Member
Apr 15, 2024
Update: see the edits at the bottom.

Hey, I'm pretty new to Proxmox (and ZFS); I've been using it for about a month or two and really like it, so I just wanted to say thank you right away. However, there are a couple of things I don't fully understand. I think it will be clear once I explain what I'm trying to do and why I'm hesitant to do it.

Essentially, in the most basic way I can describe it: I want to repartition the local NVMe drive on which Proxmox is installed so it has another partition that will be used as a ZFS log drive/partition.
I have one 500GB NVMe drive on which the Proxmox, BIOS, and swap partitions exist. Along with that I have five 4TB drives in a ZFS pool for storage. The output of lsblk is as follows:
Code:
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                            8:0    0   3.6T  0 disk
├─sda1                         8:1    0   3.6T  0 part
└─sda9                         8:9    0     8M  0 part
sdb                            8:16   0   3.6T  0 disk
├─sdb1                         8:17   0   3.6T  0 part
└─sdb9                         8:25   0     8M  0 part
sdc                            8:32   0   3.6T  0 disk
├─sdc1                         8:33   0   3.6T  0 part
└─sdc9                         8:41   0     8M  0 part
sdd                            8:48   0   3.6T  0 disk
├─sdd1                         8:49   0   3.6T  0 part
└─sdd9                         8:57   0     8M  0 part
sde                            8:64   0   3.6T  0 disk
├─sde1                         8:65   0   3.6T  0 part
└─sde9                         8:73   0     8M  0 part
zd16                         230:16   0    32G  0 disk
zd32                         230:32   0    32G  0 disk
nvme0n1                      259:0    0 465.8G  0 disk
├─nvme0n1p1                  259:1    0  1007K  0 part
├─nvme0n1p2                  259:2    0     1G  0 part /boot/efi
└─nvme0n1p3                  259:3    0 464.8G  0 part
  ├─pve-swap                 252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 252:1    0    96G  0 lvm  /
  ├─pve-data_tmeta           252:2    0   3.4G  0 lvm
  │ └─pve-data-tpool         252:4    0 337.9G  0 lvm
  │   ├─pve-data             252:5    0 337.9G  1 lvm
  │   └─pve-vm--101--disk--0 252:6    0    32G  0 lvm
  └─pve-data_tdata           252:3    0 337.9G  0 lvm
    └─pve-data-tpool         252:4    0 337.9G  0 lvm
      ├─pve-data             252:5    0 337.9G  1 lvm
      └─pve-vm--101--disk--0 252:6    0    32G  0 lvm

I was going to just boot into a live Linux environment (or GParted), shrink partition 3 on nvme0n1 by 100-200 GB, and then create two new partitions from that space: one, as mentioned, to hold the ZFS intent log, and the other either as a new partition for future use (so I don't need to do this again) or left unallocated so I can extend another partition into it if the need arises.

Now, the thing that prevents me from doing this is that I could swear I remember reading something during the initial setup/install saying the partitioning I define is final and can't be changed later, but I could be wrong. The other reason I'm hesitant is that when I go to Datacenter -> Node (prxmx) -> Disks -> LVM, I see that 97% is assigned to LVs. I'm not entirely sure what that means exactly, but when I see it and think about shrinking a partition, I think: well, I can't shrink something that is 97% full. But I know that isn't the case, because when I go to Datacenter -> Node -> local (prxmx) I see it is only 9% full. So my question is: what is the difference between what the two views are displaying? Also, what is the difference between local, local-lvm, and LVM-Thin?
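For what it's worth, here is how I've been comparing the two numbers from the shell. My guess (and it is only a guess) is that the LVM view counts volume-group space already handed out to LVs, while "local" shows how full the pve-root filesystem actually is:
Code:
# LVM view: how much of the volume group is already allocated to root/swap/data
vgs pve
# "local" view: actual usage of the ext4 filesystem on pve-root
df -h /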

Here is a picture of my disks and what I am talking about:
XXX there was a picture here that I removed; I can add it back if necessary XXX
Instead, here is the output of pvs, vgs, and lvs:
Code:
root@prxmx:~# pvs
  PV             VG  Fmt  Attr PSize    PFree
  /dev/nvme0n1p3 pve lvm2 a--  <464.76g 16.00g
root@prxmx:~# vgs
  VG  #PV #LV #SN Attr   VSize    VFree
  pve   1   4   0 wz--n- <464.76g 16.00g
root@prxmx:~# lvs
  LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz-- 337.86g             1.43   0.54                         
  root          pve -wi-ao----  96.00g                                                 
  swap          pve -wi-ao----   8.00g                                                 
  vm-101-disk-0 pve Vwi-aotz--  32.00g data        15.12

So I guess it boils down to: am I OK just repartitioning nvme0n1p3, i.e. partition 3 of disk nvme0n1? Is there something I need to do beforehand, or is this not possible at all without a fresh reinstall?

EDIT:
I've been doing some research and I think I have a better understanding of Proxmox storage now. However, it looks like, because I have an LVM-thin storage device, I can't do this without first removing that storage device and then creating a new one, since LVM-thin storage is not resizable? I have all of my VM and container data on the ZFS pool except for one VM, my most crucial one, so unfortunately it looks like I am going to need to recreate it, right?

EDIT2:
Would the following commands work? I got the base/original from this tutorial on extending LVM storage and altered it to my needs; I just wanted to confirm I'm not going to screw things up here.
While I'm doing this, I figured it would be a good idea to decrease my root logical volume down to 30GB, as it is currently 96GB; there's no way I would ever use that much space and it's better allocated elsewhere.

So, for the first section I would either:
1) use a GParted live environment, or
2) boot into a Linux live environment (probably Arch because, btw, I use Arch) and do the following:
Code:
fdisk -l
# edit partitions with fdisk, changing the device name as needed
fdisk /dev/nvme0n1
# print the partition table, delete the LVM partition (3), then create a new
# partition with the same number: accept the default first sector, set the last
# sector to +300G, answer "n" (keep the LVM signature), then set the type of
# partition 3 to Linux LVM
p - d - 3 - n - 3 - enter - +300G - n - t - 3 - 30
# now create 2 partitions: one for the ZFS log (partition 4, type bf = Solaris) and an extra partition for future use
n - 4/enter - enter  - +50G , t , bf , n , 5/enter , enter , t , 83
# and write everything to disk
w
# confirm
fdisk -l

Then I would boot into Proxmox and run these commands:

Code:
# resize the existing physical volume
pvresize /dev/nvme0n1p3 --setphysicalvolumesize 300G
# decrease the pve-root logical volume to 30G, as we don't need the original 96G
lvresize -L -66G /dev/pve/root
# resize the underlying file system
resize2fs /dev/mapper/pve-root
# list logical volumes, noting root is now 30G
lvdisplay
# extend the data thin pool into all remaining free space
lvextend -l +100%FREE pve/data
# list logical volumes, noting root is now 30G and data ~237G
lvdisplay

And after running all those commands I would expect the output of pvs, vgs, and lvs to go from this
Code:
root@prxmx:~# pvs
  PV             VG  Fmt  Attr PSize    PFree
  /dev/nvme0n1p3 pve lvm2 a--  <464.76g 16.00g
root@prxmx:~# vgs
  VG  #PV #LV #SN Attr   VSize    VFree
  pve   1   4   0 wz--n- <464.76g 16.00g
root@prxmx:~# lvs
  LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz-- 337.86g             1.43   0.54                         
  root          pve -wi-ao----  96.00g                                                 
  swap          pve -wi-ao----   8.00g                                                 
  vm-101-disk-0 pve Vwi-aotz--  32.00g data        15.12

to this (well, hopefully something close to this):

Code:
root@prxmx:~# pvs
  PV             VG  Fmt  Attr PSize    PFree
  /dev/nvme0n1p3 pve lvm2 a--  <300g    NOT_SURE_ABOUT_THIS
root@prxmx:~# vgs
  VG  #PV #LV #SN Attr   VSize    VFree
  pve   1   4   0 wz--n- <300g    NOT_SURE_ABOUT_THIS
root@prxmx:~# lvs
  LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz-- 237.86g             1.43   0.54                          
  root          pve -wi-ao----  30.00g                                                  
  swap          pve -wi-ao----   8.00g                                                  
  vm-101-disk-0 pve Vwi-aotz--  32.00g data        15.12

Does this seem like a good idea? Is there something I'm missing? Will Proxmox freak out once it boots and realizes the volumes/drives aren't the sizes it expects?

Final EDIT:
In the worst-case scenario, where I have to reinstall Proxmox to do this, what is the best approach? I don't have anything on the local storage besides Proxmox itself and one VM's data, which I can probably recreate if absolutely necessary, though I would rather avoid that. My bigger concern is the ZFS drives/pool: if I reinstall, is there anything I need to save/back up in order to recreate the ZFS pool, or can I simply plug the drives in and they will be automatically found and set up? I've only ever used mdadm for RAID before, and if I remember correctly recreating the RAID wasn't an issue; at worst it was one or two commands to recreate /etc/fstab or something along those lines, and that's what I'm hoping for here, if this is the only option.

Also, I'm open to any other suggestions on what I should do instead. The main reason I want to do this is that I need a PostgreSQL DB, and as I was reading about setting it up I came to the conclusion that I should probably use a separate log device. I don't remember the specifics anymore, but that was the conclusion; of course it's not absolutely necessary, but I feel like it's the "right" way to do things, and that's why I am in this position.
 

It will be a lot simpler if you just dedicate another drive as a log device. Personally for L2ARC I've been using PNY 64GB thumbdrives, but for a DB you probably want something more like a Samsung T7 (USB3) for endurance.

Of course, if you have space inside the server case then by all means use a SATA drive or whatever you have room/budget for.
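If you do go the separate-device route, attaching it as a log vdev is only a couple of commands. Rough sketch, assuming the pool is called tank and using a placeholder by-id path (substitute your own pool name and device):
Code:
# add the device (or a partition on it) as a dedicated ZIL/SLOG vdev
zpool add tank log /dev/disk/by-id/usb-Samsung_T7-0123456789-part1
# confirm the log vdev shows up under the pool
zpool status tank
# a log vdev can also be removed again later if you change your mind
zpool remove tank /dev/disk/by-id/usb-Samsung_T7-0123456789-part1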
 
I never thought of using a USB stick as a log drive, but that makes a lot of sense, and I think I may end up taking that route if I can't repartition this drive, so thanks!
I'm just kind of set on repartitioning this drive if it's possible, because I made the partitions the wrong sizes due to inexperience and miscalculation, so it would be nice to be able to repartition and then resize the volumes.
 
OK, so from what I gather, because this is LVM, I should be able to just resize my root and data volumes, and by using the -r flag the filesystems stored on those logical volumes should be reduced as well. So the approach now is:
Boot into a live environment that supports LVM and then:
Code:
# resize logical volumes: root to 30G and data to 230G - we'll leave the metadata size alone
# use -r to resize the filesystems at the same time, tell it this is a thin pool, and make it verbose so I can see if something goes wrong
lvresize -L30G pve/root -r -vvvv --type thin-pool
lvresize -L230G pve/data -r -vvvv --type thin-pool
# now reduce the physical volume size to 272G (8 swap + 30 root + 230 data + 3.4 meta)
pvresize --setphysicalvolumesize 272G  /dev/nvme0n1p3
# now repartition the disk with fdisk as above
fdisk /dev/nvme0n1

Can anyone confirm this will work or if I'm even in the right ballpark?

What I'm not understanding is how the 'thin' pool plays into this: is it something I need to consider on its own? From what I understand it seems like it's just a type of logical volume?
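In case it helps anyone following along, this is what I've been running to see how the thin pool hangs together (read-only inspection, nothing here changes anything):
Code:
# -a shows the hidden LVs too: the "data" thin pool is itself an LV built from
# [data_tdata] (the data area) and [data_tmeta] (the metadata area)
lvs -a pve
# thin volumes like vm-101-disk-0 live inside that pool, so resizing the pool
# changes the space available to them rather than the volumes themselves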

I feel like either I'm really dumb and this is really simple, or it's just not possible and that's why no one is saying anything. Also, sorry for the shameless bump.
 
I tried this in a pve VM and it failed:
Code:
lvresize -L207G pve/data -r -vvvv --type thin-pool

I took out everything after -vvvv and it worked OK.
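In other words, the form that actually succeeded in my test VM was just:
Code:
# same command minus the --type option; lvresize works out the LV type on its own
lvresize -L207G pve/data -r -vvvv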

Seriously though, try installing webmin (runs on port 10000) - it has a web dashboard where you can see more of what is going on with LVM graphically. There is also weLees visualLVM.

Best thing to do is recreate your PVE host install in a VM and try whatever you like in there, AFTER making a snapshot. And take notes on what works. If you try this for the 1st time on your main physical node, 98% chance something is gonna go sideways and you'll end up reinstalling.
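For reference, a sketch of the snapshot/rollback loop I mean, assuming the test PVE VM has ID 100 and its disk sits on snapshot-capable storage (adjust the VMID and snapshot name to taste):
Code:
# snapshot the test VM before experimenting
qm snapshot 100 before-lvm-changes
# ...try the repartition/resize steps inside the VM...
# if it goes sideways, roll back and try again
qm rollback 100 before-lvm-changes
# list the snapshots that exist for the VM
qm listsnapshot 100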
 
Interesting, I've never heard of webmin or weLees visualLVM, so I will definitely check those out. And yeah, I think you're right; I might just have to stick with the USB stick you recommended, or recreate the volume.

The one question I do have left is about the ZFS pool. Let's say I recreate the LVM and reinstall Proxmox: can I just remount/recreate the ZFS pool and have all the data be available again? How does that work? Do I need to back up some config and reload it when I am recreating it, or will it just know from data stored in some kind of metadata or special sector?

Anyway, thanks a lot for all the input, I really appreciate it, especially you trying out what I was thinking about in a VM; that's more effort than I expected anyone to actually put in, so I can't really express how truly grateful I am for it. So again, thank you for everything.
 
> The one question I do have left is about the zfs pool. Let's say I recreate the LVM and reinstall proxmox can I just remount/recreate the zfs pool and have all the data be available again?

As long as the zpool was exported properly on shutdown/reboot and the disk(s) are still attached to the system, it should automagically re-import. You should not have to recreate it.

You will probably have to re-add it under Datacenter / Storage on a fresh install.

If it doesn't import, do ' zpool import -a -f -d /dev/disk/by-id ' (or another suitable long-form /dev/disk, such as by-path) and that should fix it.
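Rough sketch of the whole sequence on a fresh install; the pool name tank and the storage ID are placeholders, use whatever yours are called:
Code:
# import the existing pool via its stable device paths
zpool import -a -d /dev/disk/by-id
zpool status
# then register it as storage in PVE (same effect as adding it in the GUI)
pvesm add zfspool tank-storage -pool tank
pvesm status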

On a personal note - when I misconfigured a new PVE host with a too-small root (I recommend ~40-50GB or so to accommodate web-dashboard ISO uploads), it was easier to just do a fresh install, re-specify the root disk size, and redo the LVM based on what I really wanted, rather than mess around with resizing it.

Sizing disk pools is a LOT easier with ZFS; typically you give it an entire disk (or partition) and then you have all that free space to create datasets in.
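E.g. a minimal sketch of carving that space up after the fact; the pool and dataset names here are made up:
Code:
# datasets share the pool's free space, no up-front sizing needed
zfs create tank/isos
zfs create tank/vmdata
# you can still cap one later if you want
zfs set quota=100G tank/isos
zfs list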
 
> however it looks like because I have an LVM-thin storage device that I cant do this without first removing the LVM-storage device and then creating a new one as LVM-thin storage is not resizable?

Not exactly. You can resize LVM-thin UP (increase free space); I tested it with webmin and a bash script and it worked OK.
Shrinking is another matter; you'd likely have to destroy and recreate there.

https://github.com/kneutron/ansitest/blob/master/proxmox/resize-lvm-thin-pool.sh
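The gist of growing it in place is only a couple of commands. Minimal sketch, assuming there are free extents left in the pve VG:
Code:
# grow the thin pool's data area by 20G (needs free space in the VG)
lvextend -L +20G pve/data
# optionally grow the pool metadata too if it's getting tight
lvextend --poolmetadatasize +1G pve/data
# check the result, including the hidden tdata/tmeta LVs
lvs -a pve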

For major LVM changes, I like to add another disk, remap everything the way it should be on that, copy everything over, and then shut down and remove the original disk. Keep the original on a shelf for a week or a month and you have an emergency restore if needed while the new layout settles in.
 
