use all available space from local-lvm

Benjamin7785 (New Member) — Feb 6, 2022
Hi everyone,
I am quite new to Proxmox but have had a great experience so far.
There is only one thing that is driving me a little crazy, and that is storage usage.
[screenshots of the storage summary attached]
As you can see, "local (pve)" and "local-edomi (pve)" need a little more space, which local-lvm should have to spare. But I have no clue how to allocate more space to them.
It seems that local and local-edomi take up a total of 100 GB together, while local-lvm has a total of 500 GB available. So in theory I should be able to give more disk space to those two storages.
But I cannot find any setting that tells Proxmox it is limited to using only 100 GB.

At some point I tried adding an external SSD drive, "ExtremeSSDDrive", to store backups on. I thought that would free up space, but in fact it shows up as almost full as well, even though it is physically 500 GB too. And even when I put backups on the external SSD drive, they take space from local (pve), too.

So currently my VMs can only use 100 GB in total, because I don't know how to make the rest of the 500 GB available.

I know I am missing something fundamental here in my understanding of storage in Proxmox, but I need some hands-on help, as I can't seem to find the solution in the documentation.

Thanks for being kind :)

In case you need any CLI output, please let me know and I'll post it.
 
I cannot answer that. I performed these steps after a fresh install. Others may have more knowledge on that. I would assume you need to do it first before creating anything else.
 
You could move the WinVM from your local-lvm to your ExtremeSSDDrive first, in case you have enough space there.
The big question is... why is your local using 100GB? PVE usually needs less than 10GB. If you aren't using 90+ GB for ISOs/templates/backups, you should check what's using all that space.

You could run something like du -a / | sort -n -r | head -n 20 to search for the 20 biggest files/folders.
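If other filesystems are mounted somewhere under /, adding -x keeps du on the root filesystem only, which makes the output easier to read. Just a variant of the same idea:
Code:
du -ax / | sort -n -r | head -n 20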
 
You could move the WinVM from your local-lvm to your ExtremeSSDDrive first, in case you have enough space there.
The big question is... why is your local using 100GB? PVE usually needs less than 10GB. If you aren't using 90+ GB for ISOs/templates/backups, you should check what's using all that space.

You could run something like du -a / | sort -n -r | head -n 20 to search for the 20 biggest files/folders.
I would love to do that.

But as far as I can see, all disks (ExtremeSSDDrive, local, local-edomi) seem to be the same storage, as they are exactly the same size (pictures attached). So 103-edomi and 102-loxberry2 are using this space, right?
When I plug the external ExtremeSSDDrive into another computer, it shows much more available space (>200 GB).
Just recently I snapshotted 103-edomi onto the ExtremeSSDDrive, but then the "Space Used" value on all three disks (ExtremeSSDDrive, local, local-edomi) increased.
Obviously I did something wrong when trying to make this external disk available in Proxmox. My intention was to use it for backups only...
Can I revert that?

[screenshots of the three storages attached]
 
What's the output of lsblk, pvdisplay, vgdisplay, lvdisplay and cat /etc/pve/storage.cfg?
 
Hi, thanks for your attention to this.

lsblk
Code:
root@pve:~# lsblk
NAME                                             MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                                                8:0    0 465.8G  0 disk
└─sda1                                             8:1    0 465.8G  0 part
nvme0n1                                          259:0    0 465.8G  0 disk
├─nvme0n1p1                                      259:1    0  1007K  0 part
├─nvme0n1p2                                      259:2    0   512M  0 part /boot/efi
└─nvme0n1p3                                      259:3    0 465.3G  0 part
  ├─pve-swap                                     253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                                     253:1    0    96G  0 lvm  /
  ├─pve-data_tmeta                               253:2    0   3.5G  0 lvm 
  │ └─pve-data-tpool                             253:4    0 338.4G  0 lvm 
  │   ├─pve-data                                 253:5    0 338.4G  1 lvm 
  │   ├─pve-vm--100--disk--0                     253:6    0    96G  0 lvm 
  │   ├─pve-vm--100--disk--1                     253:7    0     4M  0 lvm 
  │   └─pve-vm--100--state--Snapshot220420211232 253:8    0  24.5G  0 lvm 
  └─pve-data_tdata                               253:3    0 338.4G  0 lvm 
    └─pve-data-tpool                             253:4    0 338.4G  0 lvm 
      ├─pve-data                                 253:5    0 338.4G  1 lvm 
      ├─pve-vm--100--disk--0                     253:6    0    96G  0 lvm 
      ├─pve-vm--100--disk--1                     253:7    0     4M  0 lvm 
      └─pve-vm--100--state--Snapshot220420211232 253:8    0  24.5G  0 lvm

pvdisplay
Code:
root@pve:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/nvme0n1p3
  VG Name               pve
  PV Size               465.26 GiB / not usable <3.01 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              119106
  Free PE               4095
  Allocated PE          115011
  PV UUID               OnDIJB-Inoq-ETwL-hd14-cxYd-bzHE-hZLLDH

vgdisplay
Code:
root@pve:~# vgdisplay
  --- Volume group ---
  VG Name               pve
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  23
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                7
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <465.26 GiB
  PE Size               4.00 MiB
  Total PE              119106
  Alloc PE / Size       115011 / 449.26 GiB
  Free  PE / Size       4095 / <16.00 GiB
  VG UUID               L04ckz-8R0P-qrNs-fayW-cfKX-BwH7-8GSOrq

lvdisplay
Code:
root@pve:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                vcKCMo-BglS-lOpo-uBLn-XCZq-SEvp-9iHbRE
  LV Write Access        read/write
  LV Creation host, time proxmox, 2021-03-10 00:47:49 +0100
  LV Status              available
  # open                 2
  LV Size                8.00 GiB
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
  
  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                voftF3-aSkj-Li1w-HG3z-SgVX-646D-0NZURI
  LV Write Access        read/write
  LV Creation host, time proxmox, 2021-03-10 00:47:49 +0100
  LV Status              available
  # open                 1
  LV Size                96.00 GiB
  Current LE             24576
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
  
  --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV UUID                kxBCmC-lRPz-pzag-QqDQ-IRht-gIsH-aDrisw
  LV Write Access        read/write (activated read only)
  LV Creation host, time proxmox, 2021-03-10 00:47:49 +0100
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              available
  # open                 0
  LV Size                <338.36 GiB
  Allocated pool data    17.49%
  Allocated metadata     1.32%
  Current LE             86619
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:5
  
  --- Logical volume ---
  LV Path                /dev/pve/vm-100-disk-0
  LV Name                vm-100-disk-0
  VG Name                pve
  LV UUID                Z8LMAo-orQg-EXvM-4lcw-erBN-tAoM-uOMkP2
  LV Write Access        read/write
  LV Creation host, time pve, 2021-03-09 17:36:53 +0100
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                96.00 GiB
  Mapped size            41.68%
  Current LE             24576
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:6
  
  --- Logical volume ---
  LV Path                /dev/pve/vm-100-disk-1
  LV Name                vm-100-disk-1
  VG Name                pve
  LV UUID                vb42b1-LA6M-T5RB-851o-kUZw-e05J-OQSY4w
  LV Write Access        read/write
  LV Creation host, time pve, 2021-03-09 19:58:18 +0100
  LV Pool name           data
  LV Status              available
  # open                 0
  LV Size                4.00 MiB
  Mapped size            3.12%
  Current LE             1
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:7
  
  --- Logical volume ---
  LV Path                /dev/pve/vm-100-state-Snapshot220420211232
  LV Name                vm-100-state-Snapshot220420211232
  VG Name                pve
  LV UUID                BQexhR-8CRn-czne-qD33-gXFK-E4T9-W9P7Yl
  LV Write Access        read/write
  LV Creation host, time pve, 2021-04-22 12:33:13 +0200
  LV Pool name           data
  LV Status              available
  # open                 0
  LV Size                <24.49 GiB
  Mapped size            7.57%
  Current LE             6269
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:8
  
  --- Logical volume ---
  LV Path                /dev/pve/snap_vm-100-disk-0_Snapshot220420211232
  LV Name                snap_vm-100-disk-0_Snapshot220420211232
  VG Name                pve
  LV UUID                EqM6X3-y36B-YWPL-mEj9-8Wun-RWSZ-q0WZy0
  LV Write Access        read only
  LV Creation host, time pve, 2021-04-22 12:33:18 +0200
  LV Pool name           data
  LV Thin origin name    vm-100-disk-0
  LV Status              NOT available
  LV Size                96.00 GiB
  Current LE             24576
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

cat /etc/pve/storage.cfg
Code:
root@pve:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content rootdir,vztmpl,backup,images,snippets,iso
        shared 0

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

dir: ExtremeSSDDrive
        path /media/ExtremeSSDDrive
        content backup,images
        prune-backups keep-all=1
        shared 0

dir: local-edomi
        path /var/lib/vzedomi
        content images,snippets,iso,rootdir,vztmpl,backup
        prune-backups keep-last=1,keep-weekly=1
        shared 0
 
You "local-edomi" doesn't make much sense as it shares the root fs with "local" and both also got the same content types. So everything you are storing on "local-edomi" you could also just store on "local".

If "ExtremeSSDDrive" shows you the same size as local than thats because "/media/ExtremeSSDDrive" isn't mounted. I guess in the mountpoint "/media/ExtremeSSDDrive" your 500GB sda1 should be mounted. If its not mounted everything written to "/media/ExtremeSSDDrive" will end up on your root filesystem (so its written to the NVMe instead).

Check your /etc/fstab why sda1 isn't mounted.

And you also might want to set the "is_mountpoint" option for your directory storages: pvesm set ExtremeSSDDrive --is_mountpoint yes
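A quick way to check whether anything is actually mounted there (a sketch; device name taken from your lsblk output):
Code:
findmnt /media/ExtremeSSDDrive            # no output means nothing is mounted there
lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT /dev/sda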
 
I can agree with much of what you are saying.

In fact, I created local-edomi only because I was trying to free up space on local :) Hence, I messed up...
And when I check the file system, all files remain in "/media/ExtremeSSDDrive" even if the drive is physically removed from the system.

So the first thing now is to get "/media/ExtremeSSDDrive" mounted.

/etc/fstab
Code:
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
UUID=9266-1797 /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0

fdisk -l
Code:
...

Disk /dev/sda: 465.76 GiB, 500107837440 bytes, 976773120 sectors
Disk model: Extreme SSD     
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 1048576 bytes
Disklabel type: dos
Disk identifier: 0x0c340c60

Device     Boot Start       End   Sectors   Size Id Type
/dev/sda1        2048 976768064 976766017 465.8G  7 HPFS/NTFS/exFAT

...

I double-checked the ExtremeSSDDrive and currently it is exFAT. It seems as if some snapshots have been saved on it. Anyway, I'll make it ext4 and then try to mount it with
Code:
mount -t ntfs /dev/sda1  /media/ExtremeSSDDrive
right?

How do I make sure that the drive is mounted again after every reboot? Do I need to edit fstab?
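For reference, reformatting the partition to ext4 would look roughly like this. This wipes everything on the partition, so copy off anything you still need first; /dev/sda1 and the label are only examples based on the fdisk output above:
Code:
umount /media/ExtremeSSDDrive        # in case anything is mounted there
mkfs.ext4 -L ExtremeSSD /dev/sda1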
 
/etc/fstab
Code:
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
UUID=9266-1797 /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
You have no fstab entry for it, so the disk won't be mounted on boot.
I double-checked the ExtremeSSDDrive and currently it is exFAT. It seems as if some snapshots have been saved on it. Anyway, I'll make it ext4 and then try to mount it with
Code:
mount -t ntfs /dev/sda1  /media/ExtremeSSDDrive
right?

How do I make sure that the drive is mounted again after every reboot? Do I need to edit fstab?
No, it should be mount -t ext4 /dev/sda1 /media/ExtremeSSDDrive then. And that will only mount it temporarily. If you want the mount to persist across reboots, you need to add the disk to your fstab.

And I wouldn't mount disks by "/dev/sdX". It's better to mount them by their unique ID. Use ls -la /dev/disk/by-uuid to find the ID of your sda1 partition and then mount it with mount -t ext4 /dev/disk/by-uuid/IdOfYourSda1Partition /media/ExtremeSSDDrive.
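For example (the UUID below is only a placeholder; blkid works just as well for finding it, and nofail is optional but avoids boot problems if the USB disk happens to be unplugged):
Code:
blkid /dev/sda1        # prints the UUID of the partition
# /etc/fstab entry, all on one line:
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /media/ExtremeSSDDrive ext4 defaults,nofail 0 2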

Also back up everything that is stored in "/media/ExtremeSSDDrive" and then delete all contents inside "/media/ExtremeSSDDrive" before mounting your sda1, because otherwise you won't be able to access that data anymore but it will still consume space.
 
Thanks for the advice. I've successfully mounted the ExtremeSSDDrive now and added an appropriate fstab line.
/etc/fstab
Code:
UUID=80ca7f07-634d-e74f-a5fd-31b4b48fbe8d /media/ExtremeSSDDrive ext4 defaults 0 1

Having done that, I am now asking myself whether hitting the "Move disk" button is the way to go.
[screenshot attached]

Can I move this one now to ExtremeSSDDrive in order to delete "local-lvm" afterwards?
Because once local-lvm is not used anymore I should be able to allocate all free storage to local, right?
Is this the moment to use the commands from the video that was referenced here by genesis1?
lvremove /dev/pve/data
lvresize -l +100%FREE /dev/pve/root
resize2fs /dev/mapper/pve-root

Or is there a better way to achieve the goal of more space on "local"? Because lvremove /dev/pve/data feels a little rough on a machine that is already running VMs. And in fact there is no folder "data" under /dev/pve/.
[screenshot attached]
Proxmox shows me this:
[screenshot attached]
Any advice would be highly appreciated.

Thank you.
 
Thanks for the advice. I've successfully mounted the ExtremeSSDDrive now and added an appropriate fstab line.
/etc/fstab
Code:
UUID=80ca7f07-634d-e74f-a5fd-31b4b48fbe8d /media/ExtremeSSDDrive ext4 defaults 0 1

Having done that, I am now asking myself whether hitting the "Move disk" button is the way to go.
[screenshot attached]

Can I move this one now to ExtremeSSDDrive in order to delete "local-lvm" afterwards?
Because once local-lvm is not used anymore I should be able to allocate all free storage to local, right?
You can use the "Move disk" button to move the VM to the ExtremeSSDDrive storage.
But did you do this?
Also back up everything that is stored in "/media/ExtremeSSDDrive" and then delete all contents inside "/media/ExtremeSSDDrive" before mounting your sda1, because otherwise you won't be able to access that data anymore but it will still consume space.
Your local is probably only running out of space because you accidentally wrote data to "local" instead of "ExtremeSSDDrive" while your mount wasn't working. I guess after removing that data your "local" would be nearly empty, unless you stored a lot of backups there.
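To see what accidentally ended up on the root filesystem under that path, you could temporarily unmount the disk and look at the directory, roughly like this (a sketch):
Code:
umount /media/ExtremeSSDDrive
du -sh /media/ExtremeSSDDrive/*    # anything listed here lives on the NVMe root filesystem
# back up / delete those leftovers, then remount via the fstab entry:
mount /media/ExtremeSSDDrive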
 
Last edited:
  • Like
Reactions: Tmanok
Hi Dunuin,

Yes I did check /media/ExtremeSSDDrive before mounting.

For your reference, I listed all the large files once. It seems like the *.raw files on local are taking up the space.
Code:
root@pve:~# du -a / | sort -n -r | head -n 100
du: cannot access '/proc/580678/task/580678/fd/3': No such file or directory
du: cannot access '/proc/580678/task/580678/fdinfo/3': No such file or directory
du: cannot access '/proc/580678/fd/4': No such file or directory
du: cannot access '/proc/580678/fdinfo/4': No such file or directory
du: cannot access '/var/lib/lxcfs/cgroup': Input/output error
112743707       /
76002204        /var
75129068        /var/lib
63712152        /var/lib/vzedomi
63441260        /var/lib/vzedomi/images
63441256        /var/lib/vzedomi/images/103
63441252        /var/lib/vzedomi/images/103/vm-103-disk-0.raw
32604284        /media
32604276        /media/ExtremeSSDDrive
32604252        /media/ExtremeSSDDrive/dump
26105676        /media/ExtremeSSDDrive/dump/vzdump-qemu-100-2022_02_22-08_41_45.vma.zst
11294800        /var/lib/vz
10890896        /var/lib/vz/images
10890892        /var/lib/vz/images/102
10890888        /var/lib/vz/images/102/vm-102-disk-0.raw
4272640 /media/ExtremeSSDDrive/dump/vzdump-qemu-102-2022_02_22-08_32_33.vma.zst
3844296 /usr
2225908 /media/ExtremeSSDDrive/dump/vzdump-lxc-103-2022_02_22-08_28_23.tar.zst
2076208 /usr/lib
1348088 /usr/share
944864  /usr/lib/modules
800728  /var/cache
533764  /usr/lib/x86_64-linux-gnu
454600  /var/cache/apt
403888  /var/lib/vz/template
403876  /var/lib/vz/template/iso
403872  /var/lib/vz/template/iso/virtio-win-0.1.185_2.iso

But somehow I am expecting those VMs and containers to be taking up space here.

Am I wrong? As far as I understand, the *.raw file is essentially what the VM/container is.
 
Hi Dunuin,

Yes I did check /media/ExtremeSSDDrive before mounting.

For your reference, I listed all the large files once. It seems like the *.raw files on local are taking up the space.
Code:
root@pve:~# du -a / | sort -n -r | head -n 100
du: cannot access '/proc/580678/task/580678/fd/3': No such file or directory
du: cannot access '/proc/580678/task/580678/fdinfo/3': No such file or directory
du: cannot access '/proc/580678/fd/4': No such file or directory
du: cannot access '/proc/580678/fdinfo/4': No such file or directory
du: cannot access '/var/lib/lxcfs/cgroup': Input/output error
112743707       /
76002204        /var
75129068        /var/lib
63712152        /var/lib/vzedomi
63441260        /var/lib/vzedomi/images
63441256        /var/lib/vzedomi/images/103
63441252        /var/lib/vzedomi/images/103/vm-103-disk-0.raw
32604284        /media
32604276        /media/ExtremeSSDDrive
32604252        /media/ExtremeSSDDrive/dump
26105676        /media/ExtremeSSDDrive/dump/vzdump-qemu-100-2022_02_22-08_41_45.vma.zst
11294800        /var/lib/vz
10890896        /var/lib/vz/images
10890892        /var/lib/vz/images/102
10890888        /var/lib/vz/images/102/vm-102-disk-0.raw
4272640 /media/ExtremeSSDDrive/dump/vzdump-qemu-102-2022_02_22-08_32_33.vma.zst
3844296 /usr
2225908 /media/ExtremeSSDDrive/dump/vzdump-lxc-103-2022_02_22-08_28_23.tar.zst
2076208 /usr/lib
1348088 /usr/share
944864  /usr/lib/modules
800728  /var/cache
533764  /usr/lib/x86_64-linux-gnu
454600  /var/cache/apt
403888  /var/lib/vz/template
403876  /var/lib/vz/template/iso
403872  /var/lib/vz/template/iso/virtio-win-0.1.185_2.iso

But somehow I am expecting those VMs and containers to be taking up space here.

Am I wrong? As far as I understand, the *.raw file is essentially what the VM/container is.
Jup, move those VMs/LXCs from "local" and "local-edomi" to "ExtremeSSDDrive". VM 103 is still on "local-edomi". And you have backups on "ExtremeSSDDrive". Either use "ExtremeSSDDrive" for backups or for VMs/LXCs, but never both at the same time, if you don't want to lose your backups together with your VMs and end up with nothing in case the NVMe dies.
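If the GUI gives you trouble, the CLI equivalent for a VM would be something along these lines (VMID 102 and the disk name scsi0 are just examples here, check the actual disk name in the VM's Hardware tab; newer PVE versions also accept qm move-disk):
Code:
qm move_disk 102 scsi0 ExtremeSSDDrive --delete 1    # --delete 1 removes the old copy after the move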
 
Understood.

The main idea was to store backups on ExtremeSSDDrive. All VMs/LXCs would then run on local.

I have now managed to delete everything on "local-lvm".
How do I delete "local-lvm" itself now without removing files on "local"?

Once again, I don't think those commands will do the job, because they probably mess with data on "local":
lvremove /dev/pve/data
lvresize -l +100%FREE /dev/pve/root
resize2fs /dev/mapper/pve-root

Please give me some more precise commands to get rid of local-lvm, then resize root and allocate the space.

Unfortunately, I cannot move the LXC to ExtremeSSDDrive because it says that the mount point is "rootfs".

If I could move the LXC to ExtremeSSDDrive I would go with the 3 commands mentioned on top.

I feel like I am so close to maxing out local to the full approx. 500 GB :)
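For the container, moving the root filesystem from the CLI might work where the GUI complains. Roughly (CTID 103 and the target storage are just examples; the container may need to be shut down first, and newer PVE versions also accept pct move-volume):
Code:
pct move_volume 103 rootfs ExtremeSSDDrive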
 
Done! o_O

I manually removed the "local-lvm" storage using the GUI.

Then I used these commands:
lvremove /dev/pve/data
lvresize -l +100%FREE /dev/pve/root
resize2fs /dev/mapper/pve-root

In fact, it doesn't seem to have had any impact on the files stored in "local". Honestly, I don't 100% understand what the first command is doing. *Magic*, I suppose.
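For anyone finding this later, roughly what those three commands do, as far as I understand it now (the thin pool has no /dev/pve/data device node, but lvremove still accepts the VG/LV name):
Code:
lvremove /dev/pve/data                # deletes the LVM thin pool "data" that backed local-lvm,
                                      # freeing its space inside the volume group "pve"
lvresize -l +100%FREE /dev/pve/root   # grows the root logical volume into all free space in the VG
resize2fs /dev/mapper/pve-root        # grows the ext4 filesystem on the root LV so "local"
                                      # (/var/lib/vz) can actually use the new space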

Anyway, thank you so much for helping!

I've learned quite a bit, and it certainly feels much better knowing you have proper backups and your vital VMs/LXCs temporarily moved to external storage.

Next time I'll do all that on a fresh install of Proxmox. Now I have all my storage space available. That feels good.
:)

This thread here gave me confidence in executing those commands:
https://forum.proxmox.com/threads/need-to-delete-local-lvm-and-reuse-the-size.34087/
 
The main idea was to store backups on ExtremeSSDDrive. All VMs/LXCs would then run on local.
But what I don't understand is why you want to use the slow SATA SSD for your guests and the fast NVMe for backups, with both being the same size. It should be the opposite.
 
