Hello,
I bought a used thin client with 8GB RAM and a 64GB SSD, which I'm using to play around with Proxmox, Home Assistant, and paperless-ngx. I installed a "default" Proxmox VE (whatever exactly that means), successfully set up paperless-ngx in an LXC with a 10GB disk, and created a VM for Home Assistant OS (HAOS) with the Proxmox helper scripts. For some reason, the HAOS VM requires 32GB of disk space, which cannot be changed.
After some days of basically just letting it run without doing much, I installed some updates in Home Assistant and afterwards encountered several "Buffer I/O error on device sda8" messages (see e.g. this similar post: https://community.home-assistant.io/t/buffer-i-o-error-on-device-sda8-proxmox-vm/709773). After fsck-ing my disk and checking SMART values over and over again, I found that my SSD is still in quite good shape, and I finally understood that my local-lvm storage was simply full. Since deleting the HAOS VM, I have increasingly come to realize that my approach of "a 10GB LXC plus a 32GB VM should fit on my 64GB SSD" was probably too simple, and that's where my questions begin.
Looking at my system as it is now (running only the 10GB paperless-ngx LXC), from the web GUI I deduce that my two storages are as follows:
- local (pve), Type: Directory, Usage 15.75% (4.12 GB of 26.18 GB) - no backups/ISOs, just a 130MB debian CT template
- local-lvm (pve), Type: LVM-Thin, Usage 44.10% (8.26 GB of 18.72 GB) - contains a 10.7GB CT Volume
Bash:
root@pve:~# pvs
  PV         VG  Fmt  Attr PSize  PFree
  /dev/sda3  pve lvm2 a--  59.12g <7.38g
root@pve:~# vgs
  VG  #PV #LV #SN Attr   VSize  VFree
  pve   1   4   0 wz--n- 59.12g <7.38g
root@pve:~# lvs
  LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz--  17.43g             44.10  1.83
  root          pve -wi-ao---- <24.94g
  swap          pve -wi-ao----  <7.38g
  vm-100-disk-0 pve Vwi-aotz--  10.00g data        76.89
root@pve:~# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                            8:0    0  59.6G  0 disk
├─sda1                         8:1    0  1007K  0 part
├─sda2                         8:2    0   512M  0 part /boot/efi
└─sda3                         8:3    0  59.1G  0 part
  ├─pve-swap                 252:0    0   7.4G  0 lvm  [SWAP]
  ├─pve-root                 252:1    0  24.9G  0 lvm  /
  ├─pve-data_tmeta           252:2    0     1G  0 lvm
  │ └─pve-data-tpool         252:4    0  17.4G  0 lvm
  │   ├─pve-data             252:5    0  17.4G  1 lvm
  │   └─pve-vm--100--disk--0 252:6    0    10G  0 lvm
  └─pve-data_tdata           252:3    0  17.4G  0 lvm
    └─pve-data-tpool         252:4    0  17.4G  0 lvm
      ├─pve-data             252:5    0  17.4G  1 lvm
      └─pve-vm--100--disk--0 252:6    0    10G  0 lvm
root@pve:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                  3.8G     0  3.8G   0% /dev
tmpfs                 777M  1.5M  776M   1% /run
/dev/mapper/pve-root   25G  3.9G   20G  17% /
tmpfs                 3.8G   46M  3.8G   2% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
efivarfs              384K  106K  274K  28% /sys/firmware/efi/efivars
/dev/sda2             511M   22M  490M   5% /boot/efi
/dev/fuse             128M   20K  128M   1% /etc/pve
tmpfs                 777M     0  777M   0% /run/user/0
root@pve:~# df -h /
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/pve-root   25G  3.9G   20G  17% /
root@pve:~# df -h /dev/sda3
Filesystem            Size  Used Avail Use% Mounted on
udev                  3.8G     0  3.8G   0% /dev
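To check whether space had actually gone missing, I tried adding up the pieces from the output above (a rough check on my part; the "<" prefixes mean the printed values are rounded):

```shell
# Do the LVs plus the free space add up to the PV size?
# Numbers in GiB, copied from the pvs/lvs/lsblk output above;
# tmeta is the 1 GiB pve-data_tmeta metadata volume.
awk 'BEGIN {
  root = 24.94; swap = 7.38; data = 17.43; tmeta = 1.00; vfree = 7.38
  total = root + swap + data + tmeta + vfree
  printf "LVs + VFree = %.2f GiB (PV size: 59.12 GiB)\n", total
}'
```

So root + swap + thin pool + pool metadata + VG free space comes to ~58.1 GiB, which (allowing for rounding) matches the 59.12 GiB PV — nothing is actually lost, it's just split up.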
I quickly get confused by the different storages, volumes, drives, etc., so I don't really understand where all my storage went. Searching and reading in this forum, combined with asking ChatGPT, did not fully resolve my questions. If I'm not completely mistaken, local-lvm is used for my LXC and VM disks, while local can be used for backups, ISOs, etc. (?)
I have (among others) read and somewhat understood the following threads and documentation related to my questions:
- https://forum.proxmox.com/threads/resize-local-or-local-lvm-and-how.105263/
- https://forum.proxmox.com/threads/increase-local-lvm-and-vm-disk-size.121257/
- https://forum.proxmox.com/threads/reduce-local-and-increase-local-lvm-disk-size.124446/
- https://pve.proxmox.com/pve-docs/pve-admin-guide.html#advanced_lvm_options
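The advanced LVM options page mentions sizing parameters (swapsize, maxroot, minfree). Here is my attempt to check my reading of the defaults against the values pvs/lvs actually report — the formulas are my guesses from the guide, not confirmed:

```shell
# My guesses at the installer's default sizing formulas, checked against
# what pvs/lvs report. hdsize is the PSize of /dev/sda3 (59.12 GiB),
# not the nominal 64 GB of the SSD.
awk 'BEGIN {
  hdsize = 59.12
  swap = hdsize / 8; if (swap > 8) swap = 8   # guess: swap capped at 8 GiB
  printf "swap    guess: %5.2f GiB   observed swap:  7.38 GiB\n", swap
  printf "maxroot guess: %5.2f GiB   observed root: 24.94 GiB\n", hdsize / 4
  printf "minfree guess: %5.2f GiB   observed VFree: 7.38 GiB\n", hdsize / 8
}'
```

Swap and minfree line up with hdsize/8 almost exactly, but root is far larger than hdsize/4 — so either my maxroot guess is wrong or the installer applied a different rule here, which is part of my confusion below.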
If I read the admin guide correctly, the default installation creates a volume group pve, and the LVs root, data, and swap. Roughly speaking:
- swap = ~8GB max
- root = hdsize/4 = ~64GB/4 = ~16GB? --> root should not be larger than 16GB?
- datasize = hdsize - rootsize - swapsize - minfree = 64GB - 16GB (?) - 8GB - 8GB (?) = ~32GB? I don't see this in the settings above, do I?
- minfree = hdsize/8 = ~64GB/8 = ~8GB (?) (minfree defines the amount of free space that should be left in the LVM volume group pve) --> roughly 8GB not used?

Besides these thoughts, the following questions have arisen:
- With only some 18.7GB "free space available" in total (local-lvm), will it ever be possible to run LXCs and VMs with more than 19GB combined on this hardware?
- There appears to be some free space in the root volume (4GB of 25GB used), which I assume corresponds to the 'local' storage, right? Would it be possible to shift this free space from local to local-lvm, in order to use it for my intended setup?
- Considering all possible and reasonable storage re-configurations, would anything ever make my 10GB LXC + 32GB VM setup run on this hardware (64GB SSD with 8GB RAM)?
- I am considering switching to a (slightly!) larger 128GB SSD, but at this point I'm not even sure that would be enough: if doubling the SSD from 64GB to 128GB also roughly doubles local-lvm from 18.7GB to ~37GB, that would still not suffice for my intended setup. Will a 128GB SSD still be too small?
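Regarding the last question, applying the same (guessed!) formulas to a 128GB SSD (~119.2 GiB usable) suggests the thin pool might grow more than proportionally, since swap would be capped while only root and minfree scale — but this stands and falls with whether the installer really uses these formulas:

```shell
# Same guessed formulas as above, applied to a 128 GB SSD (~119.2 GiB usable).
awk 'BEGIN {
  hdsize = 119.2
  swap = hdsize / 8; if (swap > 8) swap = 8   # guess: swap capped at 8 GiB
  root = hdsize / 4                           # guess: maxroot default
  minfree = hdsize / 8                        # guess: space left free in the VG
  data = hdsize - root - swap - minfree
  printf "root ~ %.1f, swap ~ %.1f, minfree ~ %.1f -> data ~ %.1f GiB\n",
         root, swap, minfree, data
}'
```

If that guess is right, local-lvm would end up at roughly 66 GiB rather than the ~37GB from naive doubling, which would comfortably fit 10GB + 32GB — but I'd appreciate confirmation before buying anything.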