No Space Left on Device

aceonce2007

New Member
Feb 2, 2022
Please help... I'm not able to log into the web UI and can't start my LXC container due to the error "no space left on device".

Output

root@pve:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                   24G     0   24G   0% /dev
tmpfs                 4.7G  429M  4.3G   9% /run
/dev/mapper/pve-root  9.8G  9.8G     0 100% /
tmpfs                  24G   37M   24G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
/dev/sda2             511M  312K  511M   1% /boot/efi
/dev/fuse             128M   36K  128M   1% /etc/pve
tmpfs                 4.7G     0  4.7G   0% /run/user/0

root@pve:~# lsblk -f
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINT
sda
├─sda1
├─sda2 vfat FAT32 AC2A-96C2 510.7M 0% /boot/efi
└─sda3 LVM2_member LVM2 001 X8bpUR-smRN-IZSV-qzwd-lCoz-3W8t-Ohi8Xm
├─pve-swap swap 1 f789f919-1bbe-4948-ba04-1aaf2ce15c8a [SWAP]
├─pve-root ext4 1.0 95552b33-d6da-483d-992b-9f1ed8a23540 0 100% /
├─pve-data_tmeta
│ └─pve-data-tpool
│ ├─pve-data
│ ├─pve-vm--101--disk--0
│ ├─pve-vm--101--disk--1
│ ├─pve-vm--103--disk--0
│ ├─pve-vm--103--disk--1
│ ├─pve-vm--102--disk--0
│ ├─pve-vm--105--disk--0
│ ├─pve-vm--105--disk--1
│ └─pve-vm--100--disk--1 ext4 1.0 611979e5-4a82-4c5f-a0b4-c76ac6cb0a94
└─pve-data_tdata
└─pve-data-tpool
├─pve-data
├─pve-vm--101--disk--0
├─pve-vm--101--disk--1
├─pve-vm--103--disk--0
├─pve-vm--103--disk--1
├─pve-vm--102--disk--0
├─pve-vm--105--disk--0
├─pve-vm--105--disk--1
└─pve-vm--100--disk--1 ext4 1.0 611979e5-4a82-4c5f-a0b4-c76ac6cb0a94
sdb
└─sdb1 ext4 1.0 storageprox c1435746-5b50-4f18-b6fd-e9cbbabde8c5

root@pve:~# du -sh /
du: cannot access '/var/lib/lxcfs/cgroup': Input/output error
du: cannot access '/proc/136117/task/136117/fd/3': No such file or directory
du: cannot access '/proc/136117/task/136117/fdinfo/3': No such file or directory
du: cannot access '/proc/136117/fd/4': No such file or directory
du: cannot access '/proc/136117/fdinfo/4': No such file or directory
11G /
 
Here's the output. The problem is the root filesystem itself, 9.8G used out of 9.8G, but I don't know what to free up. Help, please.

root@pve:~# du -h -d1 -x /
280M /boot
8.0K /16TB_SSD
24K /tmp
4.0K /srv
8.0K /media
2.1G /var
4.0K /opt
27M /root
5.3M /etc
3.9G /mnt
4.0K /home
16K /lost+found
3.6G /usr
9.8G /
 
root@pve:~# du -h -d1 -x /mnt
3.9G /mnt/data
4.0K /mnt/hostrun
4.0K /mnt/vzsnap0
3.9G /mnt

Looks like most of the space is occupied by an LXC backup.

root@pve:/mnt/data/backup/dump# ls -l
total 4014600
-rw-r--r-- 1 root root        855 Jul 31 03:04 vzdump-lxc-100-2022_07_31-03_00_02.log
-rw-r--r-- 1 root root 4110919897 Jul 31 03:04 vzdump-lxc-100-2022_07_31-03_00_02.tar.zst
-rw-r--r-- 1 root root       1297 Jul 31 03:05 vzdump-qemu-103-2022_07_31-03_04_23.log
-rw-r--r-- 1 root root       1525 Jul 31 03:05 vzdump-qemu-104-2022_07_31-03_05_07.log

What should I do to free up space?
 
I can't seem to find where my backups are, since I can't log into the web UI. I spent the last couple of days trying to clear up space without much success, and I accidentally deleted everything under /var/lib/vz.

I don't want to start from scratch, obviously :) All I need now is to get to all my VM/LXC backups, then reinstall Proxmox fresh. How can I go about doing so?

root@pve:/mnt/data/backup/dump# qm list
      VMID NAME          STATUS     MEM(MB)    BOOTDISK(GB) PID
       101 TrueNAS       running    25600             20.00 1188
       102 MacBigSur     stopped    10240            320.00 0
       103 Windows-11    stopped    8192             500.00 0
       105 haos8.5       running    8192              96.00 1432

root@pve:/mnt/data/backup/dump# pct list
vm 100 - unable to parse value of 'privileged' - unknown setting 'privileged'
VMID Status Lock Name
100 stopped Docker
 
I've moved that file off the server and space freed up, but I still cannot log into the web UI. I tried to start the Docker LXC and got an input/output error.

root@pve:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                   24G     0   24G   0% /dev
tmpfs                 4.7G  469M  4.3G  10% /run
/dev/mapper/pve-root  9.8G  6.0G  3.4G  65% /
tmpfs                  24G   46M   24G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
/dev/sda2             511M  312K  511M   1% /boot/efi
/dev/fuse             128M   36K  128M   1% /etc/pve
tmpfs                 4.7G     0  4.7G   0% /run/user/0


root@pve:~# pct start 100
vm 100 - unable to parse value of 'privileged' - unknown setting 'privileged'
explicitly configured lxc.apparmor.profile overrides the following settings: features:nesting
root@pve:~# pct list
vm 100 - unable to parse value of 'privileged' - unknown setting 'privileged'
VMID Status Lock Name
100 running Docker
root@pve:~# cd /etc/pve/lxc
root@pve:/etc/pve/lxc# nano 100.conf
root@pve:/etc/pve/lxc#

If I remember correctly, there are three storages configured in my system:

local (directory)
local-lvm
lvmthin

How do I get to those volumes to retrieve backups?
 
I've moved that file off the server and space freed up, but I still cannot log into the web UI
Did you try to reboot?

Your data would be easier to read if you used CODE tags.
I tried to start the Docker LXC and got an input/output error
I don't see an I/O error in your output.
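If the input/output error shows up when the container starts, a foreground debug run usually surfaces the real cause (standard LXC debugging; the log path below is just an example):

lxc-start -n 100 -F -l DEBUG -o /tmp/lxc-100.log   # run in foreground with debug logging
grep -i error /tmp/lxc-100.log                     # then look for the step that fails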

vm 100 - unable to parse value of 'privileged' - unknown setting 'privileged'
This indicates that you have an incorrect setting that is preventing the container from starting. You can examine your config via "pct config 100" or directly in /etc/pve/lxc/100*.
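For example, a minimal sketch of cleaning out the bogus key. This assumes the offending line literally starts with "privileged"; keep a backup copy outside /etc/pve first, and editing with nano works just as well if sed is finicky on the fuse mount:

cp /etc/pve/lxc/100.conf /root/100.conf.bak    # backup outside /etc/pve first
sed -i '/^privileged/d' /etc/pve/lxc/100.conf  # 'privileged' is not a valid key
# the supported key is 'unprivileged: 0' (privileged container) or 'unprivileged: 1'
pct config 100                                 # should now parse without the warning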

local (directory)
It's wherever it points to; you can check /etc/pve/storage.cfg or search for familiar files. It's probably pointing to the data you already accidentally deleted.
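For reference, a stock /etc/pve/storage.cfg looks roughly like this (names and paths are the installer defaults; yours may differ):

dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images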
local-lvm
lvmthin
In a standard installation, usually only one of these is present.
├─pve-vm--101--disk--0
├─pve-vm--101--disk--1
├─pve-vm--103--disk--0
├─pve-vm--103--disk--1
├─pve-vm--102--disk--0
├─pve-vm--105--disk--0
├─pve-vm--105--disk--1
└─pve-vm--100--disk--1 ext4 1.0 611979e5-4a82-4c5f-a0b4-c76ac6cb0a94
These are your LVM disks, in raw format. You can try to dd them, mount them and copy the data off, etc. There are many guides on the net on how to access LVM volumes.
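A minimal sketch, assuming the volume group is "pve" and an empty /mnt/rescue exists. A container volume holds a filesystem directly, while a VM disk carries a partition table and needs kpartx first; the exact /dev/mapper names may differ on your system:

lvchange -ay pve/vm-100-disk-1              # activate the thin volume
mount /dev/pve/vm-100-disk-1 /mnt/rescue    # LXC volume: plain ext4, mounts directly
# for a VM disk, map its partitions first:
#   kpartx -av /dev/pve/vm-101-disk-0
#   mount /dev/mapper/pve-vm--101--disk--0p1 /mnt/rescue
cp -a /mnt/rescue/. /path/to/safe/storage/  # then copy the data off (target is up to you)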

Good luck and good night.


 
Hi, just wanted to add something that I experienced (actually, I created a login just for this post), in case someone has the same problem I had:

TL;DR: Check whether you can reclaim disk space by removing LXC container data stores after unmounting and ZFS destroying them!

What happened:
- I had issues with creating an LXC container by script (OneDev; doesn't matter, but still nice to capture for posterity)
- I kept deleting and re-creating them - trying to get it to work
- Then after some time, I got these 100% full error messages, and nothing I did worked (removing logs, templates, etc.) -> / and individual LXC roots stayed full

A couple of things that I was probably doing wrong:
- I stored the LXC container data on the root Proxmox drive (on /rpool) -> should have been my separate ZFS /tank pool
- I don't know what mistakes I made deleting the LXC containers that didn't work -> I remember pressing Ctrl-C during creation; apparently that doesn't delete them, although you no longer see the containers in the UI

In any case, I found a solution by doing the following after some investigation:
- For some reason, deleting the files in /rpool/data/subvol-blabla-10x-disk-y of those LXC containers I had created and thought I had deleted didn't do anything to reclaim the disk space
- However, when I deleted these filesystems (they're ZFS filesystems) including their snapshotted children, _then_ I got the disk space back! (See the sketch after this list.)
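A minimal sketch of what worked for me, with a hypothetical dataset name; double-check the zfs list output before destroying anything, since -r also removes all snapshotted children:

zfs list -t all -r rpool/data                  # find the orphaned subvols and their snapshots
zfs destroy -r rpool/data/subvol-123-disk-0    # recursively destroy the dataset + snapshots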

Maybe that's ridiculously logical for a seasoned sysadmin. For me, however, it took some time - perhaps this experience can benefit someone else as well.
 
OK. Yeah. No. The disk is full again... Sorry, not a real solution.

EDIT: Well, I did the same as above, but this time for a VM I had created (instead of the LXC containers). For now it works again (enough free space). Let's see for how long...
 