Full Storage - Unable to access WebUI.

glasderg
Jul 31, 2022
First off, I am relatively new to proxmox. I am learning as I go.

I'm having an issue with my Proxmox server.

It crashed, and I was unable to access the web UI.
I ssh'd into the server and noticed my storage was full:
E: Write error - write (28: No space left on device)
I cleared the logs and could then reach the web UI again, but now I am unable to log in with the root password.

I can see that rpool/ROOT/pve-1 is 100% full. Is there any way I can expand the storage to resolve this issue?
Just wondering if anyone can provide me with a little insight with some tips/tricks.
Please remember that I am relatively new, so be gentle/patient with me! :D

I'd really appreciate any help/tips.

Code:
root@pve:/dev# df -h
Filesystem        Size  Used Avail Use% Mounted on
udev               16G     0   16G   0% /dev
tmpfs             3.2G  322M  2.9G  11% /run
rpool/ROOT/pve-1  1.3G  1.3G     0 100% /
tmpfs              16G   40M   16G   1% /dev/shm
tmpfs             5.0M     0  5.0M   0% /run/lock
rpool             128K  128K     0 100% /rpool
rpool/data        128K  128K     0 100% /rpool/data
rpool/ROOT        128K  128K     0 100% /rpool/ROOT
zfs1/ZFS1          16T  2.8G   16T   1% /mnt/ZFS1
zfs1               16T  256K   16T   1% /zfs1
/dev/fuse         128M   20K  128M   1% /etc/pve
tmpfs             3.2G     0  3.2G   0% /run/user/0

root@pve:/dev# zfs list
NAME                       USED  AVAIL     REFER  MOUNTPOINT
rpool                      225G     0B      104K  /rpool
rpool/ROOT                1.23G     0B       96K  /rpool/ROOT
rpool/ROOT/pve-1          1.23G     0B     1.23G  /
rpool/data                 224G     0B       96K  /rpool/data
rpool/data/vm-100-disk-0  11.2G     0B     11.2G  -
rpool/data/vm-101-disk-0   201G     0B      201G  -
rpool/data/vm-101-disk-1  3.34G     0B     3.34G  -
rpool/data/vm-102-disk-0  8.39G     0B     8.39G  -
zfs1                      2.74G  15.4T      205K  /zfs1
zfs1/ZFS1                 2.73G  15.4T     2.73G  /mnt/ZFS1

root@pve:/dev# zfs list -o space
NAME                      AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
rpool                        0B   225G        0B    104K             0B       225G
rpool/ROOT                   0B  1.23G        0B     96K             0B      1.23G
rpool/ROOT/pve-1             0B  1.23G        0B   1.23G             0B         0B
rpool/data                   0B   224G        0B     96K             0B       224G
rpool/data/vm-100-disk-0     0B  11.2G        0B   11.2G             0B         0B
rpool/data/vm-101-disk-0     0B   201G        0B    201G             0B         0B
rpool/data/vm-101-disk-1     0B  3.34G        0B   3.34G             0B         0B
rpool/data/vm-102-disk-0     0B  8.39G        0B   8.39G             0B         0B
zfs1                      15.4T  2.74G        0B    205K             0B      2.74G
zfs1/ZFS1                 15.4T  2.73G        0B   2.73G             0B         0B
 
First, please use CODE tags. Then your output is readable, especially when posting tables.
Second, always monitor your servers so storages won't get full in the first place.
Third, your root filesystem shares the storage with everything else on that pool, including your guests. A good idea would be to set quotas, so that guests can't grow so much that your root filesystem runs out of space.
Fourth, a ZFS pool should NEVER EVER be filled up to 100%. And it's not only your root filesystem that is 100% full, it's the entire pool! The pool then becomes read-only and you won't even be able to delete stuff, because ZFS is a copy-on-write filesystem that needs to write in order to delete something, which of course won't work when it's already full. The only option then might be to buy new, bigger drives and clone the smaller disks to the bigger ones. And a ZFS pool shouldn't be filled more than 80%, or it will become slow and fragment faster.
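The quota and capacity advice above can be sketched with standard ZFS commands. Run as root on the Proxmox host; the dataset names match the layout shown earlier, but the size values (200G, 2G) are illustrative only, not recommendations for your exact pool:

```shell
# Cap the guest dataset so VM disks can never consume the whole pool
# and starve the root filesystem (200G is an example value for a 225G pool):
zfs set quota=200G rpool/data

# Optionally reserve space for the root filesystem so it always has
# headroom to write (2G is an example value):
zfs set reservation=2G rpool/ROOT/pve-1

# Check overall pool capacity and fragmentation; keep CAP below ~80%:
zpool list -o name,size,alloc,free,cap,frag
```

`quota` limits a dataset and all its children, so one command on rpool/data covers every VM disk under it.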

First I would look if there are snapshots that could be deleted, but it doesn't look like there are any.
Then I would try to back up a dataset or zvol using "zfs send" and destroy that zvol or dataset to get some free space.
If that doesn't work, you would have to buy new, bigger disks (keep in mind that consumer SSDs aren't recommended with ZFS), clone everything at block level (for example using Clonezilla) from the full disks to the new, bigger disks, then extend the ZFS partition (for example using GParted) and make sure the pool has "autoexpand" set.
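The steps above map roughly to these standard ZFS commands (run as root; vm-101-disk-0 and zfs1 are taken from the output posted earlier, while /dev/sdX3 is a placeholder you must replace with your actual partition). Note that on a completely full pool even creating the snapshot may fail, for the copy-on-write reason explained above:

```shell
# 1) Check for snapshots that could be destroyed to reclaim space:
zfs list -t snapshot -o name,used -s used

# 2) Back up a zvol to the other pool (zfs1 has plenty of free space),
#    verify the copy, then destroy the original to free space on rpool:
zfs snapshot rpool/data/vm-101-disk-0@backup
zfs send rpool/data/vm-101-disk-0@backup | zfs receive zfs1/vm-101-disk-0
zfs destroy -r rpool/data/vm-101-disk-0

# 3) After cloning to bigger disks and growing the ZFS partition,
#    let the pool take up the new space:
zpool set autoexpand=on rpool
zpool online -e rpool /dev/sdX3   # replace sdX3 with your real device/partition
```

Only run the `zfs destroy` once the received copy on zfs1 has been verified, since it is irreversible.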
 
All good tips. Thank you. I will try and figure out how to implement some of the things you've mentioned.
 