Storage limitations - trying to understand disks, partitions, and volumes

zaddo

New Member
Apr 7, 2025
Hello,

I bought a used thin client with 8GB RAM and a 64GB SSD, which I'm using to play around with Proxmox, Home Assistant and paperless-ngx. I installed a "default" Proxmox VE (whatever that means), successfully set up paperless-ngx in a 10GB LXC, and created a VM for Home Assistant OS (HAOS) with the Proxmox helper scripts. For some reason, the HAOS VM requires 32GB of disk space, which cannot be changed.

After some days of basically just letting it run without doing much, I installed some updates in Home Assistant and afterwards encountered several "Buffer I/O error on device sda8" messages (see e.g. this similar post: https://community.home-assistant.io/t/buffer-i-o-error-on-device-sda8-proxmox-vm/709773). After fsck-ing my disk and checking SMART values over and over again, I found that my SSD is still in quite good shape, and finally understood that my local-lvm storage was simply full. After deleting the HAOS VM, I am coming to understand that my approach of "a 10GB LXC plus a 32GB VM should fit on my 64GB SSD" was probably too simple, and that's where my questions begin.

Having a look at my system as it is now (running only the 10GB paperless-ngx LXC), from the web GUI I deduce that my two storages are as follows:
  • local (pve), Type: Directory, Usage 15.75% (4.12 GB of 26.18 GB) - no backups/ISOs, just a 130MB debian CT template
  • local-lvm (pve), Type: LVM-Thin, Usage 44.10% (8.26 GB of 18.72 GB) - contains a 10.7GB CT Volume
Some more info on my drives and partitions:
Bash:
root@pve:~# pvs
  PV         VG  Fmt  Attr PSize  PFree
  /dev/sda3  pve lvm2 a--  59.12g <7.38g

root@pve:~# vgs
  VG  #PV #LV #SN Attr   VSize  VFree
  pve   1   4   0 wz--n- 59.12g <7.38g

root@pve:~# lvs
  LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz--  17.43g             44.10  1.83                          
  root          pve -wi-ao---- <24.94g                                                  
  swap          pve -wi-ao----  <7.38g                                                  
  vm-100-disk-0 pve Vwi-aotz--  10.00g data        76.89                                

root@pve:~# lsblk
NAME                         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda                            8:0    0 59.6G  0 disk
├─sda1                         8:1    0 1007K  0 part
├─sda2                         8:2    0  512M  0 part /boot/efi
└─sda3                         8:3    0 59.1G  0 part
  ├─pve-swap                 252:0    0  7.4G  0 lvm  [SWAP]
  ├─pve-root                 252:1    0 24.9G  0 lvm  /
  ├─pve-data_tmeta           252:2    0    1G  0 lvm
  │ └─pve-data-tpool         252:4    0 17.4G  0 lvm
  │   ├─pve-data             252:5    0 17.4G  1 lvm
  │   └─pve-vm--100--disk--0 252:6    0   10G  0 lvm
  └─pve-data_tdata           252:3    0 17.4G  0 lvm
    └─pve-data-tpool         252:4    0 17.4G  0 lvm
      ├─pve-data             252:5    0 17.4G  1 lvm
      └─pve-vm--100--disk--0 252:6    0   10G  0 lvm

root@pve:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                  3.8G     0  3.8G   0% /dev
tmpfs                 777M  1.5M  776M   1% /run
/dev/mapper/pve-root   25G  3.9G   20G  17% /
tmpfs                 3.8G   46M  3.8G   2% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
efivarfs              384K  106K  274K  28% /sys/firmware/efi/efivars
/dev/sda2             511M   22M  490M   5% /boot/efi
/dev/fuse             128M   20K  128M   1% /etc/pve
tmpfs                 777M     0  777M   0% /run/user/0

root@pve:~# df -h /
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/pve-root   25G  3.9G   20G  17% /

root@pve:~# df -h /dev/sda3
Filesystem      Size  Used Avail Use% Mounted on
udev            3.8G     0  3.8G   0% /dev

I quickly get confused here when it comes to the different storages, volumes, drives, etc., so I don't really understand where all my storage went. Searching and reading in this forum, combined with asking ChatGPT, did not fully resolve my questions. If I'm not completely wrong, local-lvm is used by my LXCs and VMs, while local can be used for backups, ISOs, etc. (?)
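For reference, if I understand correctly, this split is defined in /etc/pve/storage.cfg; on a default install it should look roughly like this (a sketch based on my reading of the docs, my exact file may differ):

```
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
```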

I have (among others) read and somewhat understood the following threads and information related to my questions:
The installer creates a VG called pve, and LVs root, data and swap. Roughly speaking:
swap = ~8GB
maxroot = hdsize/4 = ~64GB/4 = ~16GB --> root should not be larger than 16GB?
datasize = hdsize - rootsize - swapsize - minfree = 64GB - 16GB (?) - 8GB - 8GB (?) = ~32GB? I don't see this in the settings above, do I?
minfree = hdsize/8 = ~64GB/8 = ~8GB (?) (minfree defines the amount of free space that should be left in the LVM volume group pve) --> roughly 8GB not used?
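Trying to sanity-check where the 59 GiB actually went, the pieces from the output above do seem to add up, if I also count the hidden 1 GiB spare copy of the thin-pool metadata that LVM keeps (it only shows up with `lvs -a`, as lvol0_pmspare, if I understand correctly):

```shell
# Sum of the pieces reported by vgs/lvs above (values in GiB):
awk 'BEGIN {
    swap = 7.38; root = 24.94; data = 17.43   # pve/swap, pve/root, thin pool
    tmeta = 1.00; pmspare = 1.00              # thin-pool metadata + hidden spare
    vfree = 7.38                              # unallocated space in the VG (minfree)
    printf "%.2f GiB\n", swap + root + data + tmeta + pmspare + vfree
}'
# -> 59.13 GiB, which matches the 59.12 GiB VG size up to rounding
```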

Besides these thoughts, the following questions have arisen:
  1. With only some 18.7GB "free space available" in total (local-lvm), will it ever be possible to run LXCs and VMs with more than 19GB combined on this hardware?
  2. There appears to be some free space in the root volume (4GB of 25GB used), which I assume corresponds to the 'local' storage, right? Would it be possible to shift this free space from local to local-lvm, in order to use it for my intended setup?
  3. Considering all possible and reasonable re-configurations of storages, will this ever enable my 10GB LXC + 32GB VM setup running on this hardware (64GB SSD with 8GB RAM)?
  4. I am considering changing to a (slightly!) larger SSD of 128GB, but at this point I'm not even sure if this would be enough here: assuming a change from 64GB to 128GB in SSD size would also roughly double local-lvm from 18.7GB to ~37GB, which would still not be enough for my intended setup. Will the 128GB SSD still be too small?
I appreciate your help! I already spent hours trying to sort this out myself, but I feel at this point an experienced human being might be willing to help me out.
 
PVE can live in 12GB, but you would have to be careful with logs and updates overrunning your root partition. Not ideal, but it could be done.
You can live without swap.

Given those constraints, it's POSSIBLE to gain access to all space beyond ~13.5G for guest storage use.
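If you reinstall, the ISO installer's Advanced LVM Configuration Options let you pin this down at install time. Roughly like this (a sketch; check the admin guide for the exact semantics of each knob):

```
# Advanced LVM options in the Proxmox ISO installer (example values):
#   swapsize = 0    -> skip the swap LV entirely
#   maxroot  = 12   -> cap pve/root at 12 GiB
#   minfree  = 4    -> leave only a little unallocated slack in the VG
# whatever is left over then goes to the local-lvm thin pool (data)
```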
 
PVE can live in 12GB, but you would have to be careful with logs and updates overrunning your root partition. Not ideal, but it could be done.
You can live without swap.

Given those constraints, it's POSSIBLE to gain access to all space beyond ~13.5G for guest storage use.
Hi, thanks so much for your answer!

I played around a bit with lvreduce -L -11GB /dev/pve/root and lvresize -L +11GB pve/data, as well as lvextend -l +100%FREE /dev/pve/data. This didn't bring my local-lvm to the aimed-for 32+10GB, but at least to >32GB, enough to run HAOS in a VM. resize2fs /dev/pve/root didn't work online, so I think I left the file system in a non-optimal state (?).

Turns out I can't connect to the web GUI any longer, so I assume something went wrong and I'll have to set up my Proxmox VE from scratch, this time with a larger 128GB SSD, so that my 10GB LXC plus 32GB VM are at least somewhat functional.

I'm a bit disillusioned that it's not (easily) possible to run HAOS and paperless-ngx on a 64GB SSD, though. Or at least, Proxmox is probably not the right choice for this.
 
Yeah, you really don't want to mess around with a root file system while you are booted into it.
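For reference, shrinking an ext4 root has to be done offline and in this order: filesystem first, LV second (a sketch, run from a rescue/live system, and only with a backup, since one typo here destroys the filesystem):

```shell
# From a rescue/live environment, NOT the running system:
e2fsck -f /dev/pve/root            # the filesystem must be checked first
resize2fs /dev/pve/root 12G        # 1) shrink the ext4 filesystem
lvreduce -L 12G /dev/pve/root      # 2) then shrink the LV down to match
lvextend -l +100%FREE pve/data     # 3) hand the freed extents to the thin pool
```

Doing it the other way around (lvreduce before resize2fs) cuts off the tail of the filesystem, which is most likely what killed your install.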

I would reinstall from scratch, making sure to pre-plan the deployment according to what you intend to do. You can also use btrfs instead of LVM/ext4 and enable inline compression and dynamic subvolumes; just be careful not to let the filesystem get too full, or bad things will happen.
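For example, on a btrfs install, compression is just a mount option (a sketch; the UUID is a placeholder for your actual root filesystem):

```
# /etc/fstab entry for a btrfs root with zstd compression (sketch):
UUID=<your-root-uuid>  /  btrfs  defaults,compress=zstd  0  0
```

Already-written data is not recompressed by remounting; `btrfs filesystem defragment -czstd` can compress existing files if needed.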
 
I installed a "default" Proxmox VE (whatever this means) and set up paperless-ngx in an LXC with 10GB successfully and created a VM for Home Assistant OS (HAOS) with the proxmox helper scripts. For some reason, the HAOS VM requires 32GB disk space, which can not be changed.
I'm a bit disillusioned that it's not (easily) possible to run HAOS and paperless-ngx on a 64GB SSD though. Well, at least Proxmox is probably not the right choice for this.
Please note that the "Proxmox" helper scripts (sometimes called community scripts) are not provided or supported by Proxmox. Maybe the people that created those scripts can help you reduce the size of HAOS: https://github.com/community-scripts/ProxmoxVE/discussions ?

I bought a used Thin Client with 8GB RAM, 64GB SSD, which I'm using to play around with Proxmox,
Please be aware that this is very minimal hardware for a clustered enterprise hypervisor. It's very flexible, so you might be able to get it all to work, but I would not be surprised if you need to be very careful and precise in what you allocate to which VM (instead of running scripts from the internet and assuming that it will all fit). People tend to have more fun with bigger hardware.
 