Disk size / LVM mismatch


Jan 15, 2021

Today I used CloneZilla to image my original 256 GB NVMe disk onto a new 1 TB NVMe disk, with the option to resize the partitions proportionally. It looks like CloneZilla did what it's supposed to: the system boots up, and the Disks view within Proxmox now shows 931.5 GB.


If I go to LVM under Disks, though, I still only see the original 237.97 GB.


If I try to create a new LVM volume (just to see if it shows available space), it doesn't show any space as available.

Is there a way for me to extend the LVM so I can actually use all of the space? Any help would be appreciated; I'd hate to have a 1 TB disk that I can't fully use.

Here is my partition table as well:

fdisk -l | grep ^/dev
Partition 2 does not start on physical sector boundary.
/dev/nvme0n1p1      34       7900       7867  3.9M BIOS boot
/dev/nvme0n1p2    7901    1056476    1048576  512M EFI System
/dev/nvme0n1p3 1056477 1953525134 1952468658  931G Linux LVM
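
For a quick side-by-side view of the mismatch, lsblk shows the 931G partition next to the LVs that are still sized for the old disk:

lsblk /dev/nvme0n1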

Here is my pveversion output:

proxmox-ve: 6.3-1 (running kernel: 5.4.106-1-pve)
pve-manager: 6.3-6 (running version: 6.3-6/2184247e)
pve-kernel-5.4: 6.3-8
pve-kernel-helper: 6.3-8
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.8
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-5
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-9
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.1-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.5-1
pve-cluster: 6.2-1
pve-container: 3.3-4
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-5
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-10
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1
You can resize LVs with the lvresize command.
If it's a thin pool, don't forget to resize the metadata LV as well.
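
For example, a sketch only (the VG must already have free space, e.g. after a pvresize, and the sizes here are placeholders):

lvresize -L +500G pve/data                  # grow the thin pool's data
lvresize --poolmetadatasize +1G pve/data    # grow the pool's metadata LV as well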
So I was able to get the pve volume group to see the entire disk by running pvresize /dev/nvme0n1p3, and it shows free space now.
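
To confirm, pvs and vgs should now show the extra space:

pvs       # PSize grows to roughly 931G; PFree is the unallocated space
vgs pve   # VFree is what's available for extending LVs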

The question now, though, is how do I change the space on the LVs? For example, here is the output of lvs:

  LV                              VG       Attr       LSize    Pool     Origin        Data%  Meta%  Move Log Cpy%Sync Convert
  data                            pve      twi-aotz-- <151.63g                        30.48  2.90
  root                            pve      -wi-ao----   59.25g
  snap_vm-100-disk-0_Base_Setup   pve      Vri---tz-k   60.00g data     vm-100-disk-0
  swap                            pve      -wi-ao----    8.00g
  vm-100-disk-0                   pve      Vwi-a-tz--   60.00g data                   57.94
  vm-100-state-Base_Setup         pve      Vwi-a-tz--  <64.49g data                   3.99

I really don't know enough about these to understand the best way to do it without breaking my system. So, with that being said, how could I increase root to 100 GB and use the rest of the free space for data (not sure if I need to bump up swap)?
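
For reference, a sketch of what that could look like with the stock layout (root on ext4, data as a thin pool); the exact sizes and the headroom left in the VG here are assumptions, not from this thread:

lvresize -r -L 100G pve/root                # grow root to 100G; -r runs resize2fs on the ext4 filesystem
lvresize -l +90%FREE pve/data               # give most of the remaining space to the thin pool
lvresize --poolmetadatasize +1G pve/data    # grow the pool metadata along with it

Leaving a slice of the VG unallocated keeps room for the metadata LV and snapshots to grow later.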
I know this is 2 years old, but just for reference, in case others like me stumble on this while Google searching.

I was in a similar situation as OP (cloned my PVE onto a larger drive), but I'm absolutely not a Linux expert, and I couldn't find any explanation for my specific situation (most posts about this simply tell you to do a full disk wipe and reinstall). So here's the solution if you don't want to (or can't) wipe and reinstall:

When the size shown by df -h doesn't match what the GUI and the lvs command report, do the following.
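
A quick way to see the mismatch side by side (assuming the stock pve-root layout from this thread):

lvs pve/root   # LSize is the size of the logical volume
df -h /        # Size is the size of the ext4 filesystem inside it

If LSize is already larger than the filesystem, only the resize2fs step below is needed.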

First, extend the LV with all the free space in the volume group (note the lowercase -l, which works in extents, so +100%FREE is valid):

lvextend -l +100%FREE /dev/mapper/pve-root

Then grow the filesystem to fill the extended LV; with no size argument, resize2fs grows it to the full size of the device:

resize2fs /dev/mapper/pve-root

Run df -h to confirm.
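
Putting the whole thing together for a clone like OP's (assuming the stock ext4-on-LVM layout; substitute your own partition name):

pvresize /dev/nvme0n1p3                      # grow the PV into the enlarged partition
lvextend -l +100%FREE /dev/mapper/pve-root   # hand the free extents to root
resize2fs /dev/mapper/pve-root               # grow ext4 to fill the LV (online resize works)
df -h /                                      # confirm the new size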

