Seeking a less disruptive method for an LVM root filesystem shrink on Proxmox VE 8.x

Finnedsgang

New Member
Jun 19, 2025
Hello Proxmox Community,

I'm reaching out for advice on an LVM disk space reallocation issue. I'm trying to optimize my disk usage on my Proxmox host and avoid using a Live USB for a specific task if possible.

My Storage Journey and Current Goal:
I initially set up Proxmox on a small 60GB NVMe drive (a 2230 model, salvaged from a Steam Deck!). I quickly realized this was insufficient for my plans, even with additional large HDDs (3TB and 14TB) mounted as storage. To gain more flexibility for VMs and containers, I recently cloned my entire Proxmox OS to a larger 512GB NVMe drive.

Now, with the larger NVMe, I want to fully utilize this space for new containers (like Batocera and Home Assistant) and existing VMs. My current goal is to shrink my pve-root Logical Volume (which hosts /) from its current excessive size down to a more reasonable 50GB. The substantial amount of space freed (around 400GB) will then be used to extend my pve-data Logical Volume (which backs the local-lvm storage for VMs/CTs). I aim to add approximately 380GB to pve-data to provide ample fast storage for my virtual machines.

My System Specifications:

  • Proxmox VE Version: PVE 8.x (kernel 6.8.12-9-pve)
  • CPU: Intel Core i3-6100
  • RAM: 16GB
  • Storage: 512GB NVMe (Proxmox OS on /dev/nvme0n1p3, which is the PV for pve VG), 3TB HDD (/mnt/pve/RED3TB), 14TB HDD (/mnt/pve/Toshiba14TB).
  • All VMs/CTs are currently shut down, and full backups have been successfully created on my 14TB disk for safety.
Current LVM Status (after cloning and initial NVMe expansion):
Here are the relevant outputs from my Proxmox host:

[Screenshot attachment: LVM status output (Immagine 2025-06-28 181904.jpg)]


As you can see, pve-root is vastly oversized (444GB with only 1% used), but VFree in the pve Volume Group is 0, which means all space is already allocated to the existing Logical Volumes (primarily pve-root and pve-data). My pve-data (local-lvm) has only about 3.7GB free (22.67% of its 16.36GB).
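In case the attachment doesn't load, these are the standard LVM reporting commands whose output is shown in the screenshot:

Code:
# Overview of the physical volumes, volume groups, and logical volumes:
pvs
vgs
lvs -a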

The Problems Encountered (Online Shrinking & GParted Limitation):

  1. Online resize2fs failure: When attempting to shrink the ext4 filesystem on pve-root while Proxmox is running:

    Code:
    resize2fs /dev/mapper/pve-root 50G

    I receive the error:

    Code:
    resize2fs: On-line shrinking not supported

    This confirms that shrinking an ext4 filesystem (especially the root filesystem /) requires it to be unmounted, meaning the system cannot be running from it.
  2. GParted Live USB limitation: I then tried booting from a GParted Live USB. While GParted correctly identified /dev/nvme0n1 and its LVM Physical Volume nvme0n1p3, it did not expose the individual Logical Volumes (like pve-root and pve-data) for direct graphical resizing; it only showed the lvm2 pv container, not its contents. This means GParted cannot be used to perform the necessary resize2fs on pve-root and the subsequent lvreduce/lvextend operations on the LVs, right?
My Current Plan (Offline Solution - and why I'm seeking alternatives):
Given the above, my current plan involves using a different Live USB (like Ubuntu Live, which provides a terminal for LVM commands); a sketch of the commands follows the list. The process would be to:

  1. Shut down the Proxmox server.
  2. Boot from the Ubuntu Live USB.
  3. Manually activate the LVM Volume Group.
  4. Unmount /dev/mapper/pve-root.
  5. Perform the resize2fs operation offline to shrink the filesystem to 50GB.
  6. Then, use lvreduce on pve-root and lvextend on pve-data.
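For concreteness, this is the rough command sequence I have in mind for steps 3-6, assuming the default Proxmox VG name pve (please correct me if any of this is wrong):

Code:
# From the Ubuntu Live terminal; pve-root is not mounted here.
vgchange -ay pve                        # step 3: activate the volume group
e2fsck -f /dev/mapper/pve-root          # forced check, mandatory before a shrink
resize2fs /dev/mapper/pve-root 50G      # step 5: shrink the ext4 filesystem
lvreduce -L 50G pve/root                # step 6: shrink the LV to match
lvextend -l +100%FREE pve/data          # step 6: give the freed space to pve-data

From what I've read, lvreduce --resizefs -L 50G pve/root would combine the fsck/resize2fs/lvreduce steps in one command and avoid any filesystem/LV size mismatch. I'm also aware that pve-data is a thin pool on stock installs, so it can be grown but never shrunk afterwards.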
While this offline method is technically feasible, I don't know whether it's the only option, or even the correct one.

My Question to the Community:
Given that resize2fs doesn't support online shrinking for ext4 and GParted couldn't directly manipulate the LVs, are there any alternative methods to shrink the pve-root filesystem (and then the LV) online or with minimal/no interruption beyond a simple reboot (i.e., avoiding booting from a Live USB)? I'm looking for the least disruptive way to achieve this LVM reallocation.

P.S. I don't have downtime constraints, but I would like to avoid destroying everything I've done :D

Any advice or alternative strategies would be greatly appreciated! Thank you.
 
Webmin (runs on port 10000) on a portable-pve external SSD gives you a web GUI with an LVM module, but you might be better off with WeLees:

https://www.welees.com/lvm-support.html

Create a "portable pve" by installing Proxmox to e.g. a 64GB SD card (I used zfs boot/root for mine, but that's not strictly necessary - and probably not ideal if you already have zfs boot/root on internal storage). Boot from that, then install WeLees and use it for the LVM manipulation. This way you have everything you need for LVM and zfs compatibility, and the rootfs of your main environment won't be mounted.

https://www.amazon.com/dp/B07N192W13?ref_=ppx_hzsearch_conn_dt_b_fed_asin_title_1

Edit - MAKE A BACKUP of everything before you attempt the LVM resize!! Back up all your LXC/VMs to a separate Proxmox Backup Server, plus all critical files.
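If you don't have a PBS instance, a vzdump run along these lines covers all VMs/CTs in one go (the storage ID Toshiba14TB is a guess based on your mount point - substitute your actual backup storage):

Code:
# Back up every guest, stopped for consistency, to the big HDD storage:
vzdump --all --mode stop --compress zstd --storage Toshiba14TB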

https://github.com/kneutron/ansitest/tree/master/proxmox

Look into the bkpcrit script, point it at a separate disk / NAS, and run it nightly from cron.
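Something like this as a sketch (the script path and target directory are assumptions - adjust to wherever you cloned the repo and mounted your backup disk/NAS):

Code:
# /etc/cron.d/bkpcrit - nightly critical-files backup at 02:00
0 2 * * * root /root/ansitest/proxmox/bkpcrit.sh /mnt/pve/Toshiba14TB/bkpcrit >> /var/log/bkpcrit.log 2>&1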

You can image the ext4 rootfs using my custom script, or try Relax-and-Recover (ReaR).
 
"alternative strategies"
Back up the guests and important system configs (/etc, /var/lib/pve-cluster, etc.) to another disk and re-install with ZFS, where you don't have to allocate fixed volumes for specific purposes; there, local and local-zfs both have access to all of the physical disk's space.
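As a sketch of that config backup, using your 14TB disk as the target (the archive name and exact target path are just examples):

Code:
# Stop pve-cluster so /var/lib/pve-cluster/config.db is consistent,
# archive the critical configs, then restart the service:
systemctl stop pve-cluster
tar czvf /mnt/pve/Toshiba14TB/pve-config-$(date +%F).tar.gz /etc /var/lib/pve-cluster
systemctl start pve-cluster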
Some people try to work around your issue by deleting local-lvm and using only local with file-based disks, but that comes with its own issues, such as slower I/O and, in the case of CTs, no snapshots, since containers don't support QCOW2.
 