[SOLVED] Out of space, not using whole disk

leroadrunner

New Member
Aug 3, 2023
Hi, I suddenly started getting I/O errors on a VM. It turns out I ran out of space. I was surprised, since I know my disk is 1TB and my VMs should be 100G max. Apparently Proxmox has not been using the whole disk since I installed it. How do I fix it?

Here is what I have

Code:
root@pve:/# fdisk -l
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: ST1000LM048-2E71
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: FDD1A884-18AF-4FFF-92FA-3E411564BB2D

Device       Start        End    Sectors  Size Type
/dev/sda1       34       2047       2014 1007K BIOS boot
/dev/sda2     2048    1050623    1048576  512M EFI System
/dev/sda3  1050624 1953525134 1952474511  931G Linux LVM

Partition 1 does not start on physical sector boundary.

Disk /dev/mapper/pve-swap: 7 GiB, 7516192768 bytes, 14680064 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/mapper/pve-root: 96 GiB, 103079215104 bytes, 201326592 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/mapper/pve-vm--101--disk--0: 32 GiB, 34359738368 bytes, 67108864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disklabel type: dos
Disk identifier: 0x3a8b4987

Device                                 Boot  Start      End  Sectors  Size Id Type
/dev/mapper/pve-vm--101--disk--0-part1 *      2048   206847   204800  100M  7 HPFS/NTFS/exFAT
/dev/mapper/pve-vm--101--disk--0-part2      206848 67106815 66899968 31.9G  7 HPFS/NTFS/exFAT

And

Code:
root@pve:/# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                  3.9G     0  3.9G   0% /dev
tmpfs                 789M   78M  711M  10% /run
/dev/mapper/pve-root   94G   94G     0 100% /
tmpfs                 3.9G   25M  3.9G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                 3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/fuse              30M   20K   30M   1% /etc/pve
tmpfs                 789M     0  789M   0% /run/user/0
root@pve:/#

So it seems only ~100G of the disk is actually being used; how do I extend the root filesystem into the rest?

Thanks
 
You can grow the root partition by 200GiB with the following command:
Code:
lvextend --resizefs -L +200G /dev/mapper/pve-root
 
Thanks, I tried that, but got:

Code:
root@pve:/var/lib/vz/template/iso# lvextend --resizefs -L +200G /dev/mapper/pve-root
  Insufficient free space: 51200 extents needed, but only 4095 available

:-(
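For context on that error: LVM allocates space in physical extents (PEs), 4 MiB each by default (an assumption here; check yours with `vgdisplay`). Converting the numbers in the message to GiB shows why the request failed:

```shell
pe_mib=4                                            # default LVM physical-extent size
echo "requested: $((51200 * pe_mib / 1024)) GiB"    # the +200G you asked for
echo "free:      $((4095 * pe_mib / 1024)) GiB"     # just under 16 GiB left in the VG
```

So the volume group only had about 16 GiB of unallocated extents left, nowhere near the 200 GiB requested.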
 
Code:
root@pve:/var/lib/vz/template/iso# lvextend -l +100%FREE /dev/mapper/pve-root
  Size of logical volume pve/root changed from 96.00 GiB (24576 extents) to <112.00 GiB (28671 extents).
  Logical volume pve/root successfully resized.

It only extended a little. How do I get it to use all the space I have? I did a plain install and thought it would use the whole disk.
 
lsblk

Code:
root@pve:/# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                            8:0    0 931.5G  0 disk
├─sda1                         8:1    0  1007K  0 part
├─sda2                         8:2    0   512M  0 part
└─sda3                         8:3    0   931G  0 part
  ├─pve-swap                 253:0    0     7G  0 lvm  [SWAP]
  ├─pve-root                 253:1    0   112G  0 lvm  /
  ├─pve-data_tmeta           253:2    0   8.1G  0 lvm
  │ └─pve-data-tpool         253:4    0 795.8G  0 lvm
  │   ├─pve-data             253:5    0 795.8G  0 lvm
  │   └─pve-vm--101--disk--0 253:6    0    32G  0 lvm
  └─pve-data_tdata           253:3    0 795.8G  0 lvm
    └─pve-data-tpool         253:4    0 795.8G  0 lvm
      ├─pve-data             253:5    0 795.8G  0 lvm
      └─pve-vm--101--disk--0 253:6    0    32G  0 lvm
root@pve:/# pvs
  PV         VG  Fmt  Attr PSize    PFree
  /dev/sda3  pve lvm2 a--  <931.01g    0
root@pve:/# vgs
  VG  #PV #LV #SN Attr   VSize    VFree
  pve   1   4   0 wz--n- <931.01g    0
root@pve:/# lvs
  LV            VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz-- <795.77g             0.54   0.26
  root          pve -wi-ao---- <112.00g
  swap          pve -wi-ao----    7.00g
  vm-101-disk-0 pve Vwi-a-tz--   32.00g data        13.33
 
As you can see, at a high level your disk is split into two parts:
- the root volume, where your OS/PVE is installed (~112G after your resize)
- the data volume, an LVM-thin pool from which disks for your VMs are sliced off (done by PVE automatically); this pool is ~796G

At this point your space is entirely allocated, so you cannot expand the root volume without removing the data pool. However, you have already allocated some VM disks from that pool, so you would need to move or delete them first.

On the other hand, why not just use the data pool to allocate your disks? Move the disks that are currently on "local" storage to "local-lvm".
"local" points to a directory on your root volume, so it competes for space with the OS.
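For reference, the move can also be done from the shell with `qm move-disk`. The IDs below are placeholders (VMID 104 and slot `scsi0` are assumptions; check the VM's Hardware pane for the real bus/slot name). A minimal guarded sketch:

```shell
# Placeholder IDs: VM 104, disk scsi0 -- adjust to your setup.
# "--delete 1" removes the source copy on "local" after the move completes.
if command -v qm >/dev/null 2>&1; then
    qm move-disk 104 scsi0 local-lvm --delete 1
    result="moved"
else
    result="skipped: qm only exists on a Proxmox VE host"
fi
echo "$result"
```

This is the CLI equivalent of the GUI "Move disk" button with "Delete source" ticked.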

https://pve.proxmox.com/wiki/Storage
https://pve.proxmox.com/wiki/Logical_Volume_Manager_(LVM)
https://pve.proxmox.com/wiki/Storage:_Directory


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Thank you for the explanation, I did not understand that separation before.
I found the "Move disk" button; hopefully it does the trick. And I'll make sure to create future VMs there.
 
OK, great, the move worked.

The old disk is still there. When I try to "remove" it, I get an error such as

Code:
Cannot remove image, a guest with VMID '104' exists!

You can delete the image from the guest's hardware pane

But the hardware pane is no longer linked to "local" but to "local-lvm".

Should I just delete it manually from the shell?

UPDATE: Never mind, found it. Needed to remove from "Unused list" on VM.

Thank you all!
 
The old disk is still there. When I try to "remove" it, I get an error such as
There is a "Delete source" option in the move wizard; since you did not use it, the disk was left behind.

But the hardware pane is no longer linked to "local" but to "local-lvm".
Is there no "unused" disk in the list? If not, run "qm disk rescan" to bring it in, then delete it.
You can also delete it from the shell, if you are sure you know what you are doing.
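A sketch of that rescan-then-delete sequence (VMID 104 is a placeholder; `unused0` is the slot name a rescan typically assigns, so verify it in the config first):

```shell
# Placeholder VMID 104 -- adjust to your setup.
if command -v qm >/dev/null 2>&1; then
    qm disk rescan --vmid 104      # picks up orphaned volumes as unusedN entries
    qm config 104 | grep unused    # confirm the slot name before deleting
    qm set 104 --delete unused0    # removes the unused entry (same as GUI "Remove")
    result="done"
else
    result="skipped: qm only exists on a Proxmox VE host"
fi
echo "$result"
```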


 
