Hi all,
I'll start this off by saying that this may not be as coherent as I would like it to be - I'm currently abroad and have gone down with covid, which isn't helping my problem-solving ability. Thankfully I run WireGuard, so I have connectivity back to my PVE installation. I'm a relatively new user of PVE and my inexperience has led me to a significant issue - I'm sure you'll see many things that I am doing wrong, or at the very least inefficiently.

I have a VM running Home Assistant which currently has a status of io-error, and one of my containers won't start, which appears to be due to the local-lvm storage being maxed out. I believe I used the default settings during installation and am not currently using the full space on the SSD. My hardware is as follows:
HP EliteDesk 800 G2 Mini
8 core i7-6700T
16GB RAM
128GB SSD
256GB NVME drive
I have spent a day or so googling for the answer but I can't seem to work it out. Various posts have asked for the outputs of vgdisplay, lvdisplay, lsblk, lvs, vgs, pvs so I have pasted them below - as a new user, I don't really know what I'm looking at. I've also attached some screenshots from PVE. Would someone be able to ELI5 what I need to do to get out of this mess? If you need more information then let me know and I will provide it. I think I have unallocated space on the SSD as the local storage is 31GB and the local-lvm is 69GB - can I access / allocate the extra ~28GB?
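In case it helps anyone collecting the same information, I grabbed all of the outputs below in one go with a small loop (assuming the standard LVM tools are on the PATH - this is just a convenience, not anything clever):

```shell
# Collect the LVM/block-device diagnostics that forum posts usually ask for.
for cmd in vgdisplay lvdisplay lsblk lvs vgs pvs; do
    echo "== $cmd =="
    "$cmd" 2>&1 || true   # capture errors too, and keep going if a tool is missing
done
```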
Thank you in advance, and hopefully I've been able to put this together in a coherent manner!
vgdisplay
root@hestia:~# vgdisplay
--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 105
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 8
Open LV 6
Max PV 0
Cur PV 1
Act PV 1
VG Size <118.74 GiB
PE Size 4.00 MiB
Total PE 30397
Alloc PE / Size 26621 / <103.99 GiB
Free PE / Size 3776 / 14.75 GiB
VG UUID Or1yHp-LhY9-lXqn-UnFr-ey2z-27lJ-3LdIyK
--- Volume group ---
VG Name local-nvme
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 8
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 232.88 GiB
PE Size 4.00 MiB
Total PE 59618
Alloc PE / Size 8192 / 32.00 GiB
Free PE / Size 51426 / 200.88 GiB
VG UUID sAHxd5-8nap-JUc2-HbWT-VikT-oYzO-hqdjQ8
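As a sanity check on my own understanding, I worked through the pve VG numbers above - free extents times the 4 MiB PE size should match the reported free space (this is just arithmetic, with the numbers copied straight from the output):

```shell
# Numbers copied from the "pve" VG in the vgdisplay output above.
pe_size_mib=4     # PE Size: 4.00 MiB
free_pe=3776      # Free PE / Size: 3776 / 14.75 GiB
free_mib=$((free_pe * pe_size_mib))
echo "Free space: ${free_mib} MiB ($((free_mib / 1024)) GiB and $((free_mib % 1024)) MiB)"
# 15104 MiB = 14.75 GiB, matching what vgdisplay reports
```

So the free space in the pve VG looks like 14.75 GiB, a bit less than the ~28GB I guessed at above.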
lvdisplay
root@hestia:~# lvdisplay
--- Logical volume ---
LV Path /dev/pve/swap
LV Name swap
VG Name pve
LV UUID WcH39j-t9hB-GzPE-oXpA-Zuf2-MdOE-YqYmm0
LV Write Access read/write
LV Creation host, time proxmox, 2022-02-04 17:38:05 +0000
LV Status available
# open 2
LV Size 8.00 GiB
Current LE 2048
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
--- Logical volume ---
LV Path /dev/pve/root
LV Name root
VG Name pve
LV UUID gTauCp-i41h-CEuB-UCRF-3Tgd-vb49-85yxMc
LV Write Access read/write
LV Creation host, time proxmox, 2022-02-04 17:38:05 +0000
LV Status available
# open 1
LV Size 29.50 GiB
Current LE 7552
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
--- Logical volume ---
LV Name data
VG Name pve
LV UUID PQh1NE-Z4cL-opRb-lUmF-gwgf-SN5E-eaMTgh
LV Write Access read/write (activated read only)
LV Creation host, time proxmox, 2022-02-04 17:38:09 +0000
LV Pool metadata data_tmeta
LV Pool data data_tdata
LV Status available
# open 0
LV Size <64.49 GiB
Allocated pool data 100.00%
Allocated metadata 4.60%
Current LE 16509
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:6
--- Logical volume ---
LV Path /dev/pve/vm-101-disk-0
LV Name vm-101-disk-0
VG Name pve
LV UUID fpydMC-codS-F0OR-LRZT-VULg-jL20-GTSZ9t
LV Write Access read/write
LV Creation host, time hestia, 2022-02-06 09:26:47 +0000
LV Pool name data
LV Status available
# open 1
LV Size 4.00 GiB
Mapped size 96.44%
Current LE 1024
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:7
--- Logical volume ---
LV Path /dev/pve/vm-102-disk-0
LV Name vm-102-disk-0
VG Name pve
LV UUID JU0R9d-T58t-31jk-pOKT-b3Cq-FWUZ-cdQzu7
LV Write Access read/write
LV Creation host, time hestia, 2022-02-06 11:57:21 +0000
LV Pool name data
LV Status available
# open 1
LV Size 2.00 GiB
Mapped size 99.28%
Current LE 512
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:8
--- Logical volume ---
LV Path /dev/pve/vm-103-disk-0
LV Name vm-103-disk-0
VG Name pve
LV UUID 10rKOG-TFn2-DuPu-qISl-Tijk-HoBW-sch3mi
LV Write Access read/write
LV Creation host, time hestia, 2022-02-06 14:15:47 +0000
LV Pool name data
LV Status available
# open 0
LV Size 64.00 GiB
Mapped size 46.86%
Current LE 16384
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:9
--- Logical volume ---
LV Path /dev/pve/vm-104-disk-0
LV Name vm-104-disk-0
VG Name pve
LV UUID J74WX9-7XfE-ZUBb-TYrz-8MTm-1Y4z-HWv6AL
LV Write Access read/write
LV Creation host, time hestia, 2022-02-07 09:04:01 +0000
LV Pool name data
LV Status available
# open 1
LV Size 4.00 MiB
Mapped size 0.00%
Current LE 1
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:10
--- Logical volume ---
LV Path /dev/pve/vm-104-disk-1
LV Name vm-104-disk-1
VG Name pve
LV UUID 9Uqaoj-V9H4-k6SL-OPEL-L8on-BiMv-ypriWR
LV Write Access read/write
LV Creation host, time hestia, 2022-02-07 09:04:02 +0000
LV Pool name data
LV Status available
# open 1
LV Size 32.00 GiB
Mapped size 89.54%
Current LE 8192
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:11
--- Logical volume ---
LV Path /dev/local-nvme/vm-104-disk-0
LV Name vm-104-disk-0
VG Name local-nvme
LV UUID NcKCBB-7gCP-D3Yc-o2V3-yxcN-ChWc-h6ywju
LV Write Access read/write
LV Creation host, time hestia, 2022-02-09 10:19:30 +0000
LV Status available
# open 1
LV Size 32.00 GiB
Current LE 8192
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
lsblk
root@hestia:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 119.2G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 512M 0 part /boot/efi
└─sda3 8:3 0 118.7G 0 part
├─pve-swap 253:1 0 8G 0 lvm [SWAP]
├─pve-root 253:2 0 29.5G 0 lvm /
├─pve-data_tmeta 253:3 0 1G 0 lvm
│ └─pve-data-tpool 253:5 0 64.5G 0 lvm
│ ├─pve-data 253:6 0 64.5G 1 lvm
│ ├─pve-vm--101--disk--0 253:7 0 4G 0 lvm
│ ├─pve-vm--102--disk--0 253:8 0 2G 0 lvm
│ ├─pve-vm--103--disk--0 253:9 0 64G 0 lvm
│ ├─pve-vm--104--disk--0 253:10 0 4M 0 lvm
│ └─pve-vm--104--disk--1 253:11 0 32G 0 lvm
└─pve-data_tdata 253:4 0 64.5G 0 lvm
└─pve-data-tpool 253:5 0 64.5G 0 lvm
├─pve-data 253:6 0 64.5G 1 lvm
├─pve-vm--101--disk--0 253:7 0 4G 0 lvm
├─pve-vm--102--disk--0 253:8 0 2G 0 lvm
├─pve-vm--103--disk--0 253:9 0 64G 0 lvm
├─pve-vm--104--disk--0 253:10 0 4M 0 lvm
└─pve-vm--104--disk--1 253:11 0 32G 0 lvm
nvme0n1 259:0 0 232.9G 0 disk
└─local--nvme-vm--104--disk--0 253:0 0 32G 0 lvm
lvs
root@hestia:~# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
vm-104-disk-0 local-nvme -wi-ao---- 32.00g
data pve twi-aotzD- <64.49g 100.00 4.60
root pve -wi-ao---- 29.50g
swap pve -wi-ao---- 8.00g
vm-101-disk-0 pve Vwi-aotz-- 4.00g data 96.44
vm-102-disk-0 pve Vwi-aotz-- 2.00g data 99.28
vm-103-disk-0 pve Vwi-a-tz-- 64.00g data 46.86
vm-104-disk-0 pve Vwi-aotz-- 4.00m data 0.00
vm-104-disk-1 pve Vwi-aotz-- 32.00g data 89.54
vgs
root@hestia:~# vgs
VG #PV #LV #SN Attr VSize VFree
local-nvme 1 1 0 wz--n- 232.88g 200.88g
pve 1 8 0 wz--n- <118.74g 14.75g
pvs
root@hestia:~# pvs
PV VG Fmt Attr PSize PFree
/dev/nvme0n1 local-nvme lvm2 a-- 232.88g 200.88g
/dev/sda3 pve lvm2 a-- <118.74g 14.75g
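Edit: from my googling so far, I think the fix might be to grow the full thin pool (pve/data) into the VG's free extents. I haven't run any of this yet - lvextend and lvs are real LVM commands, but the exact invocation is my guess, so I've wrapped everything in an echo-only helper until someone can confirm it's sensible:

```shell
# UNTESTED sketch - grow the pve/data thin pool into the pve VG's free extents.
# The echo-only wrapper means nothing is executed until 'run' is redefined.
run() { echo "WOULD RUN: $*"; }

# Give the thin pool the free space in the pve VG (14.75 GiB per vgdisplay).
run lvextend -l +100%FREE pve/data
# Then check the pool is no longer at 100% before restarting the VM/CT.
run lvs pve
```

From what I've read, extending a thin pool is an online operation, and since the pool metadata is only at 4.60% I'm hoping extending the data alone is enough - but I'd really appreciate a sanity check from someone more experienced before I run it for real.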