Resize local or local-lvm (and how?)

Hi everyone,
I installed Proxmox (7.1-10) on a 250GB SSD and, perhaps as a beginner's mistake, I did not extend local or local-lvm to use the whole disk. I would like to use the remaining space (local is 16GB and local-lvm is 30GB) as well for VMs, containers, etc. Two questions:
  1. Which one should I extend? I assume local-lvm?
  2. How do I do that?
Thanks in advance!
 
"local-lvm" can only store VMs/LXCs. "local" can only store ISOs/Backups/Snippets/Templates as well as your root-filesystem of the PVE OS and all other files. So it really depends on what you want and how much of it you want to store on that disk. 16GB just for the PVE OS would be fine but in case you also want to store ISOs, templates and backups there too you might want more space. Its normal LVM/LVM-Thin so you could search for a Debian tutorial that you like which will explain you how to extend your VGs and LVs.
 
Local-lvm it is then! Thanks a lot, I will check the tutorials online.
 
Here are a few commands to check the current configuration in the host shell.
Code:
pvs
vgs
lvs
lsblk

First make sure your volume group (VG) called pve takes up almost the entire partition.

After that, extending local-lvm is quite simple. Just make sure you don't forget to also extend the metadata.
Code:
lvextend -L+100G pve/data
lvresize --poolmetadatasize +1G pve/data
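Afterwards you can verify the result; lvs -a also shows the hidden metadata volumes:
Code:
lvs -a pve    # "data" should show the new LSize, data_tmeta the enlarged metadata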
 
OK, I am a little confused! I am new to Proxmox, and my assumption is that I need to extend the partition, then the volume group and then the volume, correct?

Here is the output from the commands mentioned above by @jaegerschnitzel:

pvs:
Code:
root@pve:~# pvs
  PV         VG  Fmt  Attr PSize   PFree
  /dev/sda3  pve lvm2 a--  <59.50g <7.38g

vgs:
Code:
root@pve:~# vgs
  VG  #PV #LV #SN Attr   VSize   VFree
  pve   1   3   0 wz--n- <59.50g <7.38g

lvs:
Code:
root@pve:~# lvs
  LV   VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data pve twi-aotz-- <28.00g             0.00   1.58                           
  root pve -wi-ao----  14.75g                                                   
  swap pve -wi-ao----  <7.38g

lsblk:
Code:
NAME                 MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                    8:0    0 232.9G  0 disk
├─sda1                 8:1    0  1007K  0 part
├─sda2                 8:2    0   512M  0 part /boot/efi
└─sda3                 8:3    0  59.5G  0 part
  ├─pve-swap         253:0    0   7.4G  0 lvm  [SWAP]
  ├─pve-root         253:1    0  14.8G  0 lvm  /
  ├─pve-data_tmeta   253:2    0     1G  0 lvm 
  │ └─pve-data-tpool 253:4    0    28G  0 lvm 
  │   └─pve-data     253:5    0    28G  1 lvm 
  └─pve-data_tdata   253:3    0    28G  0 lvm 
    └─pve-data-tpool 253:4    0    28G  0 lvm 
      └─pve-data     253:5    0    28G  1 lvm

and the relevant portion of fdisk /dev/sda (the p command):
Code:
Disk /dev/sda: 232.89 GiB, 250059350016 bytes, 488397168 sectors
Disk model: PNY CS900 250GB
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: CCFC0347-66A1-46CA-95C6-8EFA426090A7

Device       Start       End   Sectors  Size Type
/dev/sda1       34      2047      2014 1007K BIOS boot
/dev/sda2     2048   1050623   1048576  512M EFI System
/dev/sda3  1050624 125829120 124778497 59.5G Linux LVM

Would it now be correct to follow the approach outlined here?
https://help.univention.com/t/how-to-extend-disk-space/10647

Can I also use 'lvresize -l +100%FREE' or do I need to take 'lvresize --poolmetadatasize +1G pve/data' into account?
 
Yes, that's correct. I assumed that your partition already has the correct size.

You can follow the approach, but keep in mind that they delete the LVM partition. If you do that, all your data will be lost unless you have a backup.

You can first extend your metadata and then use
Code:
lvresize -l +100%FREE pve/data
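Combined with the partition and PV steps, the whole chain for the layout above (sda3 is 59.5G of a 232.9G disk) might look like this. A sketch; verify the device names against your own lsblk output first:
Code:
parted /dev/sda resizepart 3 100%          # 1) grow the LVM partition to the end of the disk
pvresize /dev/sda3                         # 2) grow the PV into the enlarged partition
lvresize --poolmetadatasize +1G pve/data   # 3) give the pool metadata extra room first
lvresize -l +100%FREE pve/data             # 4) grow the thin pool into all remaining space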
 
Thanks!
I do not have any data on /dev/sda3 as of now. Deleting the partition will only delete the data there, correct? Not on the other partitions?
 
Also keep in mind that the PVE installer by default won't use 100% of the VG's capacity for LVs. If your LVs use up all the space, there is no free space left if you ever need to store an LVM snapshot.
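If you want to preserve such headroom, you can extend by a percentage of the free space instead of all of it, for example (the number is just an illustration):
Code:
lvextend -l +90%FREE pve/data   # leaves ~10% of the VG's free space unallocated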
 
I have not thought about backups and snapshots so far - so maybe I should do that first :D
 
Hi everyone,
I finally had time to come back to my initial question. I store Proxmox backups on my TrueNAS, which is running in a VM on my Proxmox server. This also means that I cannot back up the TrueNAS VM on TrueNAS itself.

So I would like to split up the proxmox SSD into

local: 150GB
local-lvm: 100GB

What would be the best way to approach this? I am rather new to Linux and Proxmox. I guess I "just" need to extend both partitions and volumes, as in the sketch below?
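Growing both is the easy direction. A minimal sketch, assuming the default pve VG with an ext4 root filesystem and enough free space in the VG (see the pvresize steps earlier in the thread); adjust the sizes to your targets:
Code:
lvresize -L 150G pve/root    # grow the root LV backing "local" to 150GB
resize2fs /dev/pve/root      # grow the ext4 filesystem online to match
lvresize -L 100G pve/data    # grow the thin pool backing "local-lvm" to 100GB
Note that only the root filesystem needs a resize2fs; the thin pool has no filesystem of its own.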
 
I'm actually interested in decreasing the size of my local-lvm. Upon install on my 1TB NVMe drive, the entire disk is taken up, the local-lvm partition of course being the biggest. I would have enough free space remaining after taking 200GB off of it. I already bit myself trying it first: I resized the partitions with cfdisk, only to find out that Proxmox didn't like that very much. I've been spending the last 2 hours trying to make things right. All my VMs were inaccessible.

I did run

lvm pvresize, but the local-lvm disk didn't have a path anymore, so it didn't work.

In the end, surprisingly enough, I deleted the 2 newly created smaller partitions off of the free space and resized the LVM partition back to its original size. Proxmox did like that, and the disk is working again with my VMs back in play.

But opposite to extending the disk, what would be the proper way to shrink it?
 
You should take a look at NetworkChuck's Proxmox installation video on YouTube. He goes through the process of resizing local on your Proxmox install after deleting the local-lvm thin pool.
 
Yeah, I watched that video a few days ago. He talks about increasing storage, from what I remember. Well, it's OK, I'll do a good backup this time and see what happens...
 
You have to re-create the pool and here is how to do it: https://forum.proxmox.com/threads/reduce-size-of-local-lvm.78676/#post-348810

I had the same problem, worked like a charm. The last step takes a while, though.

For completeness, here is how to extend it as well: https://forum.proxmox.com/threads/how-to-extend-lvm-thin-pool.54900/#post-254074

No backup required for extending.
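For completeness, the shrink approach in the first link boils down to recreating the thin pool at a smaller size. Roughly (the size is a placeholder):
Code:
# DESTRUCTIVE: wipes everything stored on local-lvm, back up all guests first
lvremove pve/data                     # remove the old thin pool
lvcreate -L 700G -n data pve          # recreate it at the smaller target size
lvconvert --type thin-pool pve/data   # convert the plain LV back into a thin pool
# then restore your VMs/LXCs from backup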
 
Hi,
sorry for hijacking this topic - but I also need to resize my local-lvm storage, so I thought it might be better to use an existing topic rather than creating a new one :)

Right now, my "local" storage is only using nearly 20% of its available space (60GB), and I don't need much more space... So I would like to shrink it to 20GB and increase the local-lvm storage by the additional 40GB.
I followed some of the steps above...

~ pvs
Bash:
root@homeserver:~# pvs
  PV             VG  Fmt  Attr PSize    PFree 
  /dev/nvme0n1p3 pve lvm2 a--  <223.07g <16.00g

~ vgs
Bash:
root@homeserver:~# vgs
  VG  #PV #LV #SN Attr   VSize    VFree  
  pve   1  19   0 wz--n- <223.07g <16.00g

~ lvs
Bash:
root@homeserver:~# lvs
  LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz-- 140.45g             88.41  4.19                            
  root          pve -wi-ao----  55.75g                                                    
  swap          pve -wi-ao----   8.00g                                                    
  vm-100-disk-2 pve Vwi-aotz--   8.00g data        83.16                                  
  vm-101-disk-0 pve Vwi-aotz--   4.00g data        31.21                                  
  vm-102-disk-0 pve Vwi-aotz--   2.00g data        63.92                                  
  vm-103-disk-0 pve Vwi-aotz--  12.00g data        95.54                                  
  vm-104-disk-0 pve Vwi-a-tz--   8.00g data        17.25                                  
  vm-105-disk-0 pve Vwi-aotz--   8.00g data        56.74                                  
  vm-106-disk-1 pve Vwi-aotz--  41.00g data        94.12                                  
  vm-107-disk-0 pve Vwi-aotz--  10.00g data        46.95                                  
  vm-108-disk-0 pve Vwi-aotz--   4.00g data        99.67                                  
  vm-109-disk-0 pve Vwi-aotz--  64.00g data        6.76                                   
  vm-110-disk-0 pve Vwi-aotz--   8.00g data        48.05                                  
  vm-111-disk-0 pve Vwi-aotz--   8.00g data        53.40                                  
  vm-112-disk-0 pve Vwi-aotz--   4.00g data        71.18                                  
  vm-113-disk-0 pve Vwi-aotz--   4.00m data        0.00                                   
  vm-113-disk-1 pve Vwi-aotz--  32.00g data        59.66                                  
  vm-121-disk-0 pve Vwi-aotz--  16.00g data        99.80

~ lsblk
Bash:
root@homeserver:~# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                            8:0    0   2.7T  0 disk 
├─sda1                         8:1    0   2.7T  0 part 
└─sda9                         8:9    0     8M  0 part 
sdb                            8:16   0   2.7T  0 disk 
├─sdb1                         8:17   0   2.7T  0 part 
└─sdb9                         8:25   0     8M  0 part 
sdc                            8:32   0   7.3T  0 disk 
├─sdc1                         8:33   0   7.3T  0 part 
└─sdc9                         8:41   0     8M  0 part 
sdd                            8:48   0   7.3T  0 disk 
├─sdd1                         8:49   0   7.3T  0 part 
└─sdd9                         8:57   0     8M  0 part 
sde                            8:64   0   7.3T  0 disk 
├─sde1                         8:65   0   7.3T  0 part 
└─sde9                         8:73   0     8M  0 part 
sdf                            8:80   0   7.3T  0 disk 
├─sdf1                         8:81   0   7.3T  0 part 
└─sdf9                         8:89   0     8M  0 part 
sdg                            8:96   0   2.7T  0 disk 
└─sdg1                         8:97   0   2.7T  0 part /mnt/usb-drive
nvme0n1                      259:0    0 223.6G  0 disk 
├─nvme0n1p1                  259:1    0  1007K  0 part 
├─nvme0n1p2                  259:2    0   512M  0 part /boot/efi
└─nvme0n1p3                  259:3    0 223.1G  0 part 
  ├─pve-swap                 253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 253:1    0  55.8G  0 lvm  /
  ├─pve-data_tmeta           253:2    0   1.4G  0 lvm  
  │ └─pve-data-tpool         253:4    0 140.5G  0 lvm  
  │   ├─pve-data             253:5    0 140.5G  1 lvm  
  │   ├─pve-vm--103--disk--0 253:7    0    12G  0 lvm  
  │   ├─pve-vm--105--disk--0 253:8    0     8G  0 lvm  
  │   ├─pve-vm--100--disk--2 253:10   0     8G  0 lvm  
  │   ├─pve-vm--107--disk--0 253:11   0    10G  0 lvm  
  │   ├─pve-vm--121--disk--0 253:12   0    16G  0 lvm  
  │   ├─pve-vm--106--disk--1 253:13   0    41G  0 lvm  
  │   ├─pve-vm--111--disk--0 253:14   0     8G  0 lvm  
  │   ├─pve-vm--112--disk--0 253:15   0     4G  0 lvm  
  │   ├─pve-vm--113--disk--0 253:16   0     4M  0 lvm  
  │   ├─pve-vm--113--disk--1 253:17   0    32G  0 lvm  
  │   ├─pve-vm--109--disk--0 253:18   0    64G  0 lvm  
  │   ├─pve-vm--102--disk--0 253:19   0     2G  0 lvm  
  │   ├─pve-vm--104--disk--0 253:20   0     8G  0 lvm  
  │   ├─pve-vm--101--disk--0 253:21   0     4G  0 lvm  
  │   ├─pve-vm--108--disk--0 253:22   0     4G  0 lvm  
  │   └─pve-vm--110--disk--0 253:23   0     8G  0 lvm  
  └─pve-data_tdata           253:3    0 140.5G  0 lvm  
    └─pve-data-tpool         253:4    0 140.5G  0 lvm  
      ├─pve-data             253:5    0 140.5G  1 lvm  
      ├─pve-vm--103--disk--0 253:7    0    12G  0 lvm  
      ├─pve-vm--105--disk--0 253:8    0     8G  0 lvm  
      ├─pve-vm--100--disk--2 253:10   0     8G  0 lvm  
      ├─pve-vm--107--disk--0 253:11   0    10G  0 lvm  
      ├─pve-vm--121--disk--0 253:12   0    16G  0 lvm  
      ├─pve-vm--106--disk--1 253:13   0    41G  0 lvm  
      ├─pve-vm--111--disk--0 253:14   0     8G  0 lvm  
      ├─pve-vm--112--disk--0 253:15   0     4G  0 lvm  
      ├─pve-vm--113--disk--0 253:16   0     4M  0 lvm  
      ├─pve-vm--113--disk--1 253:17   0    32G  0 lvm  
      ├─pve-vm--109--disk--0 253:18   0    64G  0 lvm  
      ├─pve-vm--102--disk--0 253:19   0     2G  0 lvm  
      ├─pve-vm--104--disk--0 253:20   0     8G  0 lvm  
      ├─pve-vm--101--disk--0 253:21   0     4G  0 lvm  
      ├─pve-vm--108--disk--0 253:22   0     4G  0 lvm  
      └─pve-vm--110--disk--0 253:23   0     8G  0 lvm

In addition to that, I do have one VM in particular (VM 109) that has a disk size of 64GB assigned, but it does not use that amount of space at all.
I would like to shrink that disk... if possible.
I already read that it can cause issues, but I would like to give it a try... It wouldn't be hard to reinstall that particular VM and restore its system from a backup...
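For anyone attempting this: shrinking is the risky direction, and the root filesystem cannot be shrunk while it is mounted, so the local/local-lvm rebalance has to happen from a live/rescue ISO. A sketch, assuming ext4 on pve/root and a 20GB target; back up first:
Code:
vgchange -ay pve              # in the rescue system, activate the LVs first
e2fsck -f /dev/pve/root       # the filesystem must be clean before shrinking
resize2fs /dev/pve/root 19G   # shrink the filesystem slightly below the target
lvreduce -L 20G pve/root      # shrink the LV to the 20GB target
resize2fs /dev/pve/root       # grow the filesystem back to fill the LV exactly
# booted back into PVE, hand the freed space to the thin pool:
lvextend -l +100%FREE pve/data
Shrinking the 64GB disk of VM 109 is a separate job: the filesystem inside the guest has to be shrunk first, and since the volume is thin-provisioned it only occupies its Data% anyway. Backing up and restoring to a smaller disk is usually the safer route.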
 
I'm in the same boat! Did you have any progress on this? I'm pretty new to Proxmox, but the basic things have been running well for more than a year. I need to do exactly the same thing as you.
 
Just for anyone else googling: it seems to have changed now, and this is what I had to do.

Bash:
# The example extends the root LV by 10GB and then also resizes the filesystem.
# Ran on 7.4-3

lvextend -L+10G /dev/vg0/root
resize2fs /dev/vg0/root
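The snippet above grows the root filesystem, i.e. "local". For a thin pool no filesystem resize is needed; assuming a pool named data in the same vg0 (the name is an assumption), the analogous step would just be:
Code:
lvextend -L+10G /dev/vg0/data   # no resize2fs needed for a thin pool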
 
Another hijack of this thread... I am unable to increase the LVM :-(

pvs
Code:
  PV         VG  Fmt  Attr PSize   PFree
  /dev/sde3  pve lvm2 a--  237.97g    0
vgs
Code:
  VG  #PV #LV #SN Attr   VSize   VFree
  pve   1   3   0 wz--n- 237.97g    0
lvs
Code:
  LV   VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data pve twi-a-tz-- 167.62g             0.00   1.06                          
  root pve -wi-ao----  59.25g                                                  
  swap pve -wi-ao----   8.00g
lsblk
Code:
sde                  8:64   0 931.5G  0 disk
├─sde1               8:65   0  1007K  0 part
├─sde2               8:66   0   512M  0 part
└─sde3               8:67   0   238G  0 part
  ├─pve-swap       253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root       253:1    0  59.3G  0 lvm  /
  ├─pve-data_tmeta 253:2    0   1.6G  0 lvm
  │ └─pve-data     253:4    0 167.6G  0 lvm
  └─pve-data_tdata 253:3    0 167.6G  0 lvm
    └─pve-data     253:4    0 167.6G  0 lvm

As you can see, there should be ~650GB left...
but "lvextend -L+100G pve/data" gives me: Insufficient free space: 25600 extents needed, but only 0 available

Anything I missed?
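For what it's worth, the lsblk output shows the likely cause: /dev/sde3 is only 238G of a 931.5G disk, so the leftover space sits outside the partition and therefore outside the VG (hence PFree of 0). A sketch of the likely fix; double-check the device letters first:
Code:
parted /dev/sde resizepart 3 100%   # grow the LVM partition to the end of the disk
pvresize /dev/sde3                  # grow the PV into the enlarged partition
lvextend -L+100G pve/data           # now there are free extents to extend into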
 
