HowTo resize pve-root

ZzenlD

New Member
Sep 8, 2025
Hello guys,
I installed Proxmox on a 2TB NVMe SSD and created several VMs.

Since the root directory is very full, I now need to expand it. However, my problem is that the data-lvm (where the VM data is located) has allocated the entire remaining hard drive:

Code:
root@proxmox:/home/user# lsblk
NAME                              MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
nvme1n1                           259:0    0   1.8T  0 disk
├─nvme1n1p1                       259:1    0  1007K  0 part
├─nvme1n1p2                       259:2    0     1G  0 part /boot/efi
└─nvme1n1p3                       259:3    0   1.8T  0 part
  ├─pve-swap                      252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                      252:1    0 100.9G  0 lvm  /
  ├─pve-data_tmeta                252:2    0  15.9G  0 lvm 
  │ └─pve-data-tpool              252:4    0   1.7T  0 lvm 
  │   ├─pve-data                  252:5    0   1.7T  1 lvm 
  │   ├─pve-vm--2001--disk--0     252:6    0    35G  0 lvm 
  │   ├─pve-vm--2001--disk--1     252:7    0     4M  0 lvm 
  │   ├─pve-vm--2003--cloudinit   252:8    0     4M  0 lvm 
  │   ├─pve-vm--2003--disk--0     252:9    0    24G  0 lvm 
  │   ├─pve-vm--2003--disk--1     252:10   0     4M  0 lvm 
  │   ├─pve-vm--40002--cloudinit  252:11   0     4M  0 lvm 
  │   ├─pve-vm--40002--disk--0    252:12   0    24G  0 lvm 
  │   ├─pve-vm--40002--disk--1    252:13   0     4M  0 lvm 
  │   ├─pve-vm--100010--cloudinit 252:14   0     4M  0 lvm 
  │   ├─pve-vm--100010--disk--0   252:15   0    24G  0 lvm 
  │   └─pve-vm--100010--disk--1   252:16   0     4M  0 lvm 
  └─pve-data_tdata                252:3    0   1.7T  0 lvm 
    └─pve-data-tpool              252:4    0   1.7T  0 lvm 
      ├─pve-data                  252:5    0   1.7T  1 lvm 
      ├─pve-vm--2001--disk--0     252:6    0    35G  0 lvm 
      ├─pve-vm--2001--disk--1     252:7    0     4M  0 lvm 
      ├─pve-vm--2003--cloudinit   252:8    0     4M  0 lvm 
      ├─pve-vm--2003--disk--0     252:9    0    24G  0 lvm 
      ├─pve-vm--2003--disk--1     252:10   0     4M  0 lvm 
      ├─pve-vm--40002--cloudinit  252:11   0     4M  0 lvm 
      ├─pve-vm--40002--disk--0    252:12   0    24G  0 lvm 
      ├─pve-vm--40002--disk--1    252:13   0     4M  0 lvm 
      ├─pve-vm--100010--cloudinit 252:14   0     4M  0 lvm 
      ├─pve-vm--100010--disk--0   252:15   0    24G  0 lvm 
      └─pve-vm--100010--disk--1   252:16   0     4M  0 lvm

Can anyone tell me how I can shrink pve-data by, say, 500GB and assign those 500GB to pve-root?

I would be very grateful.
 
how I can reduce the pve-data
Hello, ZzenlD.
AFAIK, this is impossible. pve-data is a thin pool, and thin pools can't be shrunk, or only with great difficulty (at least according to a quick search of this forum and Google; if someone knows better, please correct me).

So instead: delete the thin pool, extend pve-root, recreate the thin pool (smaller than before), and restore the VMs from a backup.
You do have backups, don't you? :) On separate disks, not only on the main disk in this host.
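The sequence above would look roughly like this. This is only a sketch: the VG name "pve", the pool name "data", the sizes, and the ext4 root filesystem are assumptions taken from the lsblk output earlier in the thread. Verify everything with lvs/vgs first, and remember this destroys every VM disk in the pool:

```shell
# DESTRUCTIVE: removes the thin pool and all thin volumes inside it
lvremove pve/data

# Grow the root LV by 500G, then grow the filesystem to match
lvextend -L +500G /dev/pve/root
resize2fs /dev/pve/root

# Recreate a smaller thin pool (size is an example)
lvcreate -L 1.2T --thinpool data pve
```

Afterwards the VMs are restored from backup onto the new pool.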

Of course, make a backup of pve-root before modifying the layout.

See for instance: https://forum.proxmox.com/threads/resizing-pve-data.30506/
edit: and https://forum.proxmox.com/threads/reduce-size-of-local-lvm.78676/
 
Or, maybe you could tell us why pve-root is full. 100GB is a lot for a PVE root filesystem. If it is full of backups, storing them on the same physical device is the wrong approach. If it is something else, an option would be to create a volume in the thin pool, mount it somewhere, and move the data there.
 
Yes, I store a local backup on it, which is then synchronized daily to a NAS.
The reason is that it lets me restore VMs quickly, since the backups are on the local drive.

I know that a backup on the local drive is not considered a backup.

I like the idea of creating a “volume” in the thin area and mounting it to /srv/backup, for example. Does anyone have a link/tip on how I could do this?

Thanks for your help so far, you guys are great :)
 
I guess your volume group is "pve" and the thin pool is "data", so let's create a volume called "backups" and mount it. You'll want to verify that my assumptions and syntax are correct...

Code:
lvcreate -V 100G --thinpool pve/data -n backups
mkfs -t ext4 /dev/pve/backups
mkdir -p /mnt/backups
mount /dev/pve/backups /mnt/backups

Add an entry to /etc/fstab to make it permanent. Then, to get rid of it:

Code:
umount /mnt/backups
lvremove pve/backups

See "man lvcreate" for more information about creating logical volumes.
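For the /etc/fstab entry mentioned above, assuming the ext4 filesystem and /mnt/backups mount point from the example, a line like this should work:

```
/dev/pve/backups  /mnt/backups  ext4  defaults  0  2
```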
 
If I understand correctly, the root volume is also just a volume called "root" on the volume group "pve", right?
Code:
root@proxmox:/home/user# lvdisplay
[...]
--- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                qbYamk-hQ7U-U5iA-38qU-qfDh-eu1n-Y4RbHn
  LV Write Access        read/write
  LV Creation host, time proxmox, 2025-08-30 13:36:14 +0200
  LV Status              available
  # open                 1
  LV Size                <100.88 GiB
  Current LE             25824
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:1
[...]

So if I now create a backup volume with the above command, is that analogous to the root volume, or am I wrong?

Thanks for the valuable tips. I'm learning a lot.
 
That's right. The difference is that the root volume is "thick provisioned": it occupies the full amount of space allocated to it, and you can't allocate more than you have available. It is a fixed allocation.

The volumes under pve/data are thin, meaning they only take up the space actually used, up to the maximum that was specified. That is more flexible, but you also have to keep an eye on the used space, because it is possible to assign more than physically exists.

IOW, this new volume would share space with the VM disks while the root volume has its own allocation.
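To keep an eye on thin pool usage as described, the Data% and Meta% columns of lvs show how full the pool actually is (VG and pool names assumed as above):

```shell
lvs pve/data -o lv_name,lv_size,data_percent,metadata_percent
```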
 
No I mean that discard has to be enabled specifically. That setting is not enabled by default when creating virtual disks/guests.
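For reference, discard is a per-disk option on the guest's virtual disk. With Proxmox's qm tool it can be enabled on an existing disk roughly like this (the VM ID, bus slot, and volume name here are placeholders, not values from this thread):

```shell
qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on
```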
 