Unable to expand LVM

Por12

Member
Mar 6, 2023
I am unable to fully use my boot drive. It is a 500G drive at /dev/sdb, with a 499G LVM partition on /dev/sdb3.

My local partition shows 82.39G out of 103G used. I have tried extending and resizing the volume, but had no success.

Any help would be appreciated.

Thanks!
 
Hello,

I assume you used the PVE installer ISO. If you run lvs, lvdisplay or lvscan, you can see all the logical volumes; you will see a data and a root volume. The data volume is an LVM-thin pool, and its size cannot be decreased, so increasing the root volume is difficult.

You could create a volume inside the lvm thin and mount it inside the root file system to get more free space:
https://forum.proxmox.com/threads/shrinking-of-lvm-thin-possible-best-workaround.71217/
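The workaround from that thread boils down to a few commands. A minimal sketch, assuming the default pool name pve/data; the volume name "extra", its 50G size and the /srv/extra mount point are arbitrary picks, adjust them to your setup:

```shell
# Carve a thin volume out of the pve/data pool and mount it inside the
# root filesystem to gain usable space. Names and sizes are assumptions.
# Only run this on the PVE host as root.
if [ "$(id -u)" -eq 0 ] && command -v lvcreate >/dev/null 2>&1; then
    lvcreate -V 50G -T pve/data -n extra   # thin volume inside the pool
    mkfs.ext4 /dev/pve/extra               # put a filesystem on it
    mkdir -p /srv/extra
    mount /dev/pve/extra /srv/extra        # add an /etc/fstab entry to persist
else
    echo "run this on the PVE host as root" >&2
fi
```

Since the volume is thin-provisioned, space is only taken from the pool as data is actually written to it.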
 
Sorry for the long delay, I got really busy at work.

I understand that fixing this without a reinstall is going to be hard, but I'd like to understand why it happened in the first place. I have three other Proxmox servers, and on them I can see the total disk space on the root volume.

Thanks

Code:
root@zeus:~# lvs
LV   VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
data pve twi-a-tz-- 337.86g             0.00   0.50
root pve -wi-ao---- 112.00g
swap pve -wi-ao----   8.00g

Code:
root@zeus:~# lvdisplay
  --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV UUID                1COqHK-kZ0Y-waVk-kZYN-FQpf-cmu5-aclloF
  LV Write Access        read/write
  LV Creation host, time proxmox, 2023-07-08 18:50:11 +0200
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              available
  # open                 0
  LV Size                337.86 GiB
  Allocated pool data    0.00%
  Allocated metadata     0.50%
  Current LE             86493
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:4


  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                bI8ai9-fZDz-Ol09-slrM-8JQD-l5pU-xpHW7I
  LV Write Access        read/write
  LV Creation host, time proxmox, 2023-07-08 18:50:03 +0200
  LV Status              available
  # open                 2
  LV Size                8.00 GiB
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0


  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                33egJ4-2zRw-s6GX-U9lF-3tgB-zdJT-KtRhb2
  LV Write Access        read/write
  LV Creation host, time proxmox, 2023-07-08 18:50:03 +0200
  LV Status              available
  # open                 1
  LV Size                112.00 GiB
  Current LE             28673
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:1
 
Can you please provide the output of:
lsblk
df -h
mount
pvs
vgs

thank you


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Sure. Thanks in advance.

Code:
root@zeus:~# lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
loop0                7:0    0     2G  0 loop
loop1                7:1    0     4G  0 loop
loop2                7:2    0     8G  0 loop
sda                  8:0    0   3.6T  0 disk
├─sda1               8:1    0   3.6T  0 part
└─sda9               8:9    0     8M  0 part
sdb                  8:16   0 465.8G  0 disk
├─sdb1               8:17   0  1007K  0 part
├─sdb2               8:18   0     1G  0 part /boot/efi
└─sdb3               8:19   0 464.8G  0 part
  ├─pve-swap       252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root       252:1    0   112G  0 lvm  /var/hdd.log
  │                                          /
  ├─pve-data_tmeta 252:2    0   3.4G  0 lvm
  │ └─pve-data     252:4    0 337.9G  0 lvm
  └─pve-data_tdata 252:3    0 337.9G  0 lvm
    └─pve-data     252:4    0 337.9G  0 lvm
nvme0n1            259:0    0 931.5G  0 disk
├─nvme0n1p1        259:1    0 931.5G  0 part
└─nvme0n1p9        259:2    0     8M  0 part

Code:
root@zeus:~# df -h
Filesystem                        Size  Used Avail Use% Mounted on
udev                               16G     0   16G   0% /dev
tmpfs                             3.2G  1.2M  3.2G   1% /run
/dev/mapper/pve-root               96G   79G   18G  82% /
tmpfs                              16G   54M   16G   1% /dev/shm
tmpfs                             5.0M  4.0K  5.0M   1% /run/lock
efivarfs                          150K   64K   82K  44% /sys/firmware/efi/efivars
/dev/sdb2                        1022M  344K 1022M   1% /boot/efi
nvme-zeus                         851G  128K  851G   1% /nvme-zeus
nvme-zeus/cctv-clips              900G   49G  851G   6% /nvme-zeus/cctv-clips
rust-olivar                       2.0T  128K  2.0T   1% /rust-olivar
rust-olivar/subvol-301-disk-0     2.0T 1021G  980G  52% /rust-olivar/subvol-301-disk-0
rust-olivar/local-backups         2.6T  575G  2.0T  23% /rust-olivar/local-backups
log2ram                            80M   17M   64M  22% /var/log
/dev/fuse                         128M   44K  128M   1% /etc/pve
192.168.88.253:/mnt/user/backups   39T  8.8T   30T  23% /mnt/pve/backups
tmpfs                             3.2G     0  3.2G   0% /run/user/0

Code:
root@zeus:~# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=16332092k,nr_inodes=4083023,mode=755,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=3273176k,mode=755,inode64)
/dev/mapper/pve-root on / type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,inode64)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k,inode64)
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=25131)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
ramfs on /run/credentials/systemd-sysusers.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
ramfs on /run/credentials/systemd-tmpfiles-setup-dev.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
/dev/sdb2 on /boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)
ramfs on /run/credentials/systemd-sysctl.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
nvme-zeus on /nvme-zeus type zfs (rw,relatime,xattr,noacl,casesensitive)
nvme-zeus/cctv-clips on /nvme-zeus/cctv-clips type zfs (rw,relatime,xattr,noacl,casesensitive)
rust-olivar on /rust-olivar type zfs (rw,relatime,xattr,noacl,casesensitive)
rust-olivar/subvol-301-disk-0 on /rust-olivar/subvol-301-disk-0 type zfs (rw,relatime,xattr,posixacl,casesensitive)
rust-olivar/local-backups on /rust-olivar/local-backups type zfs (rw,relatime,xattr,noacl,casesensitive)
/dev/mapper/pve-root on /var/hdd.log type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
log2ram on /var/log type tmpfs (rw,nosuid,nodev,noexec,noatime,size=81920k,mode=755,inode64)
ramfs on /run/credentials/systemd-tmpfiles-setup.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,nosuid,nodev,noexec,relatime)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
tracefs on /sys/kernel/debug/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
192.168.88.253:/mnt/user/backups on /mnt/pve/backups type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.77.250,local_lock=none,addr=192.168.88.253)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=3273172k,nr_inodes=818293,mode=700,inode64)

Code:
root@zeus:~# pvs
  PV         VG  Fmt  Attr PSize    PFree
  /dev/sdb3  pve lvm2 a--  <464.76g    0

Code:
root@zeus:~# vgs
  VG  #PV #LV #SN Attr   VSize    VFree
  pve   1   3   0 wz--n- <464.76g    0
 
You have a 465GiB disk (500GB in decimal, as marketed).
112GiB is used for root.
338GiB is assigned to the LVM-thin pool.
The rest is spread across various system allocations (swap, EFI, pool metadata).
You have no free space on the disk to expand the root volume.

One of your choices is to delete the LVM-thin pool, expand root, and then re-create the LVM-thin pool.
However, if you have any data (VM disk images) there, you need to back it up and/or move it first.

Good luck


 
That makes perfect sense. The LVM-thin pool is completely empty (0B used). Is this what I should do? Just wanted to double-check.

- Delete the "data" storage in the GUI
- lvremove /dev/pve/data
- resize2fs /dev/mapper/pve-root

Thanks!
 
There are a few other steps in between, i.e. before resizing the filesystem you need to expand the pve-root LVM volume. Don't forget to remove the Proxmox storage pool that refers to the LVM thin.
I recommend finding a step-by-step guide on the internet, as well as installing a nested PVE and trying it there first.
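Removing that storage pool can be done from the CLI with pvesm. A minimal sketch, assuming the installer's default storage name "local-lvm" (check yours with pvesm status first):

```shell
# Drop the PVE storage definition that points at the thin pool before
# removing the LV itself. "local-lvm" is the installer's default name
# for it -- an assumption, verify with "pvesm status".
# Only run this on the PVE host as root.
if [ "$(id -u)" -eq 0 ] && command -v pvesm >/dev/null 2>&1; then
    pvesm status            # list configured storages
    pvesm remove local-lvm  # remove the lvmthin storage entry
else
    echo "run this on the PVE host as root" >&2
fi
```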


Good luck


 
Thanks! Just for reference, I'll leave my steps:

1. Ensure there is no data on /dev/pve/data, as this WILL cause data loss.
2. Delete the pve-data logical volume:

lvremove /dev/pve/data -y

3. Re-create it with your desired size (e.g. 100G):

lvcreate -L 100G -n data pve -T

4. Resize pve-root to take 100% of the free space:

lvresize -l +100%FREE /dev/pve/root

5. Resize the pve-root file system:

resize2fs /dev/mapper/pve-root (for ext4)
xfs_growfs /dev/mapper/pve-root (for XFS)

(You can find out what filesystem you're using with: mount | grep pve-root)
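After the steps above, a couple of read-only checks confirm the resize took effect ("pve" is the volume group name created by the PVE installer):

```shell
# Read-only sanity checks; safe to run at any time.
if command -v lvs >/dev/null 2>&1; then
    lvs pve        # pve-root should now report the enlarged size
fi
df -h /            # the filesystem on / should match the LV size
```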

Thanks!

As bbgeek17 says, it's better to test it beforehand on a nested PVE if you have something important on the server.

Edit: solved, but I can't find how to mark it.
 
