Hi,
I can't figure out why, and I'm not a Proxmox pro, but I'm out of disk space on my host SSD.
I have all my VMs stored on a secondary SSD, so basically all the space on that boot SSD (500GB) should be available.
Code:
root@pve-server:/# df -h
Filesystem                  Size  Used Avail Use% Mounted on
udev                         16G     0   16G   0% /dev
tmpfs                       3.2G  1.2M  3.2G   1% /run
/dev/mapper/pve-root         94G   90G     0 100% /
tmpfs                        16G   60M   16G   1% /dev/shm
tmpfs                       5.0M     0  5.0M   0% /run/lock
/dev/nvme0n1p2             1022M  344K 1022M   1% /boot/efi
/dev/fuse                   128M   40K  128M   1% /etc/pve
192.168.10.254:/4TB1        3.6T  2.8T  678G  81% /mnt/pve/OMV_4TB1
192.168.10.254:/Parity1     3.7T  2.1T  1.5T  59% /mnt/pve/OMV_Parity1
192.168.10.254:/mergerfs01   76T   35T   38T  48% /mnt/pve/OMV_mergerfs01
192.168.10.254:/Parity2     3.6T  132G  3.3T   4% /mnt/pve/OMV_Parity2
tmpfs                       3.2G     0  3.2G   0% /run/user/0
So 100% usage on /dev/mapper/pve-root.
Code:
root@pve-server:/# du -hsx
7.5G .
/var/log isn't that big.
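From what I've read, a gap like this between df and du usually means either deleted-but-still-open files, or data hiding underneath an active mount point (e.g. something written into /mnt/pve/... while an NFS share was offline). Here's what I plan to run to check both; lsof may need installing first, and /mnt/root-bind is just a scratch directory name I made up:
Code:
# list files that are deleted but still held open by a running process
lsof -nP +L1
# bind-mount / elsewhere so du can see files shadowed by the NFS mounts
mkdir /mnt/root-bind
mount --bind / /mnt/root-bind
du -hsx /mnt/root-bind/mnt /mnt/root-bind/var
umount /mnt/root-bind
rmdir /mnt/root-bind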
Finally, lsblk reports a VM disk on that SSD that I'm not sure is in use... (EDIT: it looks like an EFI disk for a VM; any way to move it to the secondary SSD? See the command sketch after the lsblk output below.)
Code:
root@pve-server:/# lsblk
NAME                            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
nvme0n1                         259:0    0 465.8G  0 disk
├─nvme0n1p1                     259:1    0  1007K  0 part
├─nvme0n1p2                     259:2    0     1G  0 part /boot/efi
└─nvme0n1p3                     259:3    0 464.8G  0 part
  ├─pve-swap                    253:2    0    32G  0 lvm  [SWAP]
  ├─pve-root                    253:3    0    96G  0 lvm  /
  ├─pve-data_tmeta              253:4    0   3.2G  0 lvm
  │ └─pve-data-tpool            253:6    0 314.3G  0 lvm
  │   ├─pve-data                253:7    0 314.3G  1 lvm
  │   └─pve-vm--200--disk--2    253:8    0     4M  0 lvm
  └─pve-data_tdata              253:5    0 314.3G  0 lvm
    └─pve-data-tpool            253:6    0 314.3G  0 lvm
      ├─pve-data                253:7    0 314.3G  1 lvm
      └─pve-vm--200--disk--2    253:8    0     4M  0 lvm
nvme1n1                         259:4    0 931.5G  0 disk
├─ssd1tb-ssd1tb_tmeta           253:0    0   9.3G  0 lvm
│ └─ssd1tb-ssd1tb-tpool         253:9    0 912.8G  0 lvm
│   ├─ssd1tb-ssd1tb             253:10   0 912.8G  1 lvm
│   ├─ssd1tb-vm--201--disk--0   253:11   0   100G  0 lvm
│   ├─ssd1tb-vm--200--disk--0   253:12   0   128G  0 lvm
│   ├─ssd1tb-vm--205--disk--0   253:13   0   100G  0 lvm
│   ├─ssd1tb-vm--204--disk--0   253:14   0    20G  0 lvm
│   ├─ssd1tb-vm--203--disk--0   253:15   0    15G  0 lvm
│   ├─ssd1tb-vm--202--disk--0   253:16   0     4G  0 lvm
│   ├─ssd1tb-vm--210--disk--0   253:17   0    20G  0 lvm
│   ├─ssd1tb-vm--207--disk--0   253:18   0     3G  0 lvm
│   ├─ssd1tb-vm--208--disk--0   253:19   0    50G  0 lvm
│   └─ssd1tb-vm--206--disk--0   253:20   0    16G  0 lvm
└─ssd1tb-ssd1tb_tdata           253:1    0 912.8G  0 lvm
  └─ssd1tb-ssd1tb-tpool         253:9    0 912.8G  0 lvm
    ├─ssd1tb-ssd1tb             253:10   0 912.8G  1 lvm
    ├─ssd1tb-vm--201--disk--0   253:11   0   100G  0 lvm
    ├─ssd1tb-vm--200--disk--0   253:12   0   128G  0 lvm
    ├─ssd1tb-vm--205--disk--0   253:13   0   100G  0 lvm
    ├─ssd1tb-vm--204--disk--0   253:14   0    20G  0 lvm
    ├─ssd1tb-vm--203--disk--0   253:15   0    15G  0 lvm
    ├─ssd1tb-vm--202--disk--0   253:16   0     4G  0 lvm
    ├─ssd1tb-vm--210--disk--0   253:17   0    20G  0 lvm
    ├─ssd1tb-vm--207--disk--0   253:18   0     3G  0 lvm
    ├─ssd1tb-vm--208--disk--0   253:19   0    50G  0 lvm
    └─ssd1tb-vm--206--disk--0   253:20   0    16G  0 lvm
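If that 4M pve-vm--200--disk--2 really is VM 200's EFI disk, I'm guessing something like this would move it onto the secondary SSD. ssd1tb and efidisk0 are my assumptions here: ssd1tb for whatever the second thin pool is called as a Proxmox storage, and efidisk0 for the config key (I'd verify with qm config first):
Code:
# see which config entry references disk-2 on the pve pool
qm config 200
# move that disk to the other storage and delete the source copy
qm move-disk 200 efidisk0 ssd1tb --delete 1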
pveversion, for good measure:
Code:
root@pve-server:/# pveversion
pve-manager/8.0.4/d258a813cfa6b390 (running kernel: 6.2.16-14-pve)
So why am I running out of space when there is seemingly 94GB allocated to the root partition, while du says only 7.5GB is in use?
Is there a way to fix this without reinstalling Proxmox? I'd like to avoid as much downtime as possible.
Thanks