Help: Pool is full

ixproxmox

Renowned Member
Nov 25, 2015
In different places it says my two pools have lots of space, but see here: the system is full!

I have a ZFS pool and a RAID pool. Both show status OK, and the GUI shows several TB of free space. But all my VMs on the ZFS volume have a yellow mark and I can't do anything with them.

I have no idea what is going on... I can't move the disk on the ZFS pool elsewhere, because it is too full. And it seems I can't delete anything either.

Filesystem            Size  Used Avail Use% Mounted on
udev                   95G     0   95G   0% /dev
tmpfs                  19G  2.4M   19G   1% /run
rpool/ROOT/pve-1      2.0G  2.0G     0 100% /
tmpfs                  95G   52M   95G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
rpool                 128K  128K     0 100% /rpool
rpool/ROOT            128K  128K     0 100% /rpool/ROOT
rpool/data            128K  128K     0 100% /rpool/data
/dev/fuse             128M   40K  128M   1% /etc/pve
vm                    3.5T  256K  3.5T   1% /vm
vm/subvol-120-disk-0  600G  415G  186G  70% /vm/subvol-120-disk-0
vm/subvol-505-disk-0  8.0G  897M  7.2G  11% /vm/subvol-505-disk-0
vm/subvol-500-disk-0  300G  1.1G  299G   1% /vm/subvol-500-disk-0
vm/subvol-503-disk-0   10G 1018M  9.1G  10% /vm/subvol-503-disk-0


root@p1:/var/log# zfs list
NAME                                   USED  AVAIL  REFER  MOUNTPOINT
rpool                                  861G     0B   104K  /rpool
rpool/ROOT                            1.98G     0B    96K  /rpool/ROOT
rpool/ROOT/pve-1                      1.98G     0B  1.98G  /
rpool/data                             859G     0B    96K  /rpool/data
rpool/data/vm-125-disk-0               172G     0B   172G  -
rpool/data/vm-130-disk-0              89.9G     0B  89.9G  -
rpool/data/vm-131-disk-0              13.0G     0B  13.0G  -
rpool/data/vm-132-disk-0              21.7G     0B  21.7G  -
rpool/data/vm-136-disk-0              1.46G     0B  1.46G  -
rpool/data/vm-304-disk-0               496G     0B   495G  -
rpool/data/vm-304-state-BeforeCpanel   624M     0B   624M  -
rpool/data/vm-510-disk-0              55.2G     0B  55.2G  -
rpool/data/vm-515-disk-0              8.05G     0B  6.81G  -
rpool/data/vm-515-state-StartInstall  1.01G     0B  1.01G  -
vm                                    1.46T  3.49T   174K  /vm
vm/subvol-120-disk-0                   415G   185G   415G  /vm/subvol-120-disk-0
vm/subvol-500-disk-0                  1.01G   299G  1.01G  /vm/subvol-500-disk-0
vm/subvol-503-disk-0                  1017M  9.01G  1017M  /vm/subvol-503-disk-0
vm/subvol-505-disk-0                   896M  7.12G   896M  /vm/subvol-505-disk-0
vm/vm-121-disk-0                      86.9G  3.54T  39.3G  -
vm/vm-121-state-BeforeUpdate          15.6G  3.51T  1.39G  -
vm/vm-122-disk-0                      47.5G  3.53T  6.92G  -
vm/vm-126-disk-0                      32.7G  3.50T  22.5G  -
vm/vm-135-disk-0                      14.9G  3.50T  2.92G  -
vm/vm-300-disk-0                      14.9G  3.50T  2.39G  -
vm/vm-301-disk-0                       327G  3.79T  26.5G  -
vm/vm-302-disk-0                      89.2G  3.51T  67.1G  -
vm/vm-303-disk-0                       446G  3.68T   258G  -
 

Attachments

  • pools.png
According to zfs list your "rpool" is full, but your "vm" pool is not.

For more info please post the output of zpool list -v and zfs list -o space in CODE-tags.
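
For reference, something like this (the rpool name is taken from your zfs list output above; in the second output the USEDSNAP and USEDREFRESERV columns usually show where the space went):

Code:
# pool capacity, fragmentation and health per vdev
zpool list -v

# per-dataset space accounting (data vs. snapshots vs. refreservation)
zfs list -o space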
 
Code:
root@p1:/etc/pve# zpool list -v
NAME                                                      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool                                                     888G   861G  27.4G        -         -    57%    96%  1.00x    ONLINE  -
  mirror-0                                                888G   861G  27.4G        -         -    57%  96.9%      -    ONLINE
    ata-SAMSUNG_MZ7LH960HAJR-00005_S45NNE0N203351-part3   893G      -      -        -         -      -      -      -    ONLINE
    ata-INTEL_SSDSC2KB960G8_PHYF00730096960CGN-part3      893G      -      -        -         -      -      -      -    ONLINE
vm                                                       6.98T  1.14T  5.85T        -         -     5%    16%  1.00x    ONLINE  -
  raidz1-0                                               6.98T  1.14T  5.85T        -         -     5%  16.3%      -    ONLINE
    nvme-SAMSUNG_MZ1LB1T9HALS-00007_S436NC0NB09074       1.75T      -      -        -         -      -      -      -    ONLINE
    nvme-SAMSUNG_MZ1LB1T9HALS-00007_S436NA0MA01415_1     1.75T      -      -        -         -      -      -      -    ONLINE
    nvme-SAMSUNG_MZ1LB1T9HALS-00007_S436NA0N400682_1     1.75T      -      -        -         -      -      -      -    ONLINE
    nvme-SAMSUNG_MZ1LB1T9HALS-00007_S436NA0N400681       1.75T      -      -        -         -      -      -      -    ONLINE


Code:
root@p1:/etc/pve# zfs list -o space
NAME                                  AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
rpool                                    0B   861G        0B    104K             0B       861G
rpool/ROOT                               0B  1.90G        0B     96K             0B      1.90G
rpool/ROOT/pve-1                         0B  1.90G        0B   1.90G             0B         0B
rpool/data                               0B   859G        0B     96K             0B       859G
rpool/data/vm-125-disk-0                 0B   172G        0B    172G             0B         0B
rpool/data/vm-130-disk-0                 0B  89.9G        0B   89.9G             0B         0B
rpool/data/vm-131-disk-0                 0B  13.0G        0B   13.0G             0B         0B
rpool/data/vm-132-disk-0                 0B  21.7G        0B   21.7G             0B         0B
rpool/data/vm-136-disk-0                 0B  1.46G        0B   1.46G             0B         0B
rpool/data/vm-304-disk-0                 0B   496G      932M    495G             0B         0B
rpool/data/vm-304-state-BeforeCpanel     0B   624M        0B    624M             0B         0B
rpool/data/vm-510-disk-0                 0B  55.2G        0B   55.2G             0B         0B
rpool/data/vm-515-disk-0                 0B  8.05G     1.24G   6.81G             0B         0B
rpool/data/vm-515-state-StartInstall     0B  1.01G        0B   1.01G             0B         0B
vm                                    3.49T  1.46T        0B    174K             0B      1.46T
vm/subvol-120-disk-0                   185G   415G        0B    415G             0B         0B
vm/subvol-500-disk-0                   299G  1.01G        0B   1.01G             0B         0B
vm/subvol-503-disk-0                  9.01G  1017M        0B   1017M             0B         0B
vm/subvol-505-disk-0                  7.12G   896M        0B    896M             0B         0B
vm/vm-121-disk-0                      3.54T  86.9G      331M   39.3G          47.3G         0B
vm/vm-121-state-BeforeUpdate          3.51T  15.6G        0B   1.39G          14.2G         0B
vm/vm-122-disk-0                      3.53T  47.5G        0B   6.92G          40.6G         0B
vm/vm-126-disk-0                      3.50T  32.7G        0B   22.5G          10.2G         0B
vm/vm-135-disk-0                      3.50T  14.9G        0B   2.92G          11.9G         0B
vm/vm-300-disk-0                      3.50T  14.9G        0B   2.39G          12.5G         0B
vm/vm-301-disk-0                      3.79T   327G        0B   26.5G           300G         0B
vm/vm-302-disk-0                      3.51T  89.2G        0B   67.1G          22.1G         0B
vm/vm-303-disk-0                      3.68T   446G        0B    258G           188G         0B
root@p1:/etc/pve#
 
I have VMs in both pools; it's my ZFS VMs that are in trouble...

I would even delete "Virtual Machine 125", which is in that pool with 300 GB of disks, but I can't even remove its protection due to the lack of storage space, so I can't delete anything either. I couldn't find any big files in the "/" root file system, only a few log files, which wouldn't free up enough. No ISOs, no backups. I tried to delete the snapshot, but I can't do that either because of an error message in the GUI about not enough space.

Before this, I grew a plain XFS partition inside a KVM disk, and everything looked OK...
 
Last edited:
You could destroy those two snapshots on the rpool to get 2 GB back and then hope it's enough to move those disks to your VM pool.
 
You could destroy those two snapshots on the rpool to get 2 GB back and then hope it's enough to move those disks to your VM pool.
But how :)

The GUI doesn't let me and rm -Rf rpool/data/vm-304-state-BeforeCpanel doesn't delete them either.

Code:
root@p1:/etc/pve#  qm delsnapshot 304 BeforeCpanel

unable to create output file '/var/log/pve/tasks/0/UPID:p1:001ACB8D:0E364084:65687DE0:qmdelsnapshot:304:root@pam:' - No space left on device
 
Last edited:
When working with ZFS, use the "zfs" and "zpool" commands, e.g. "zfs destroy pool/zvol@snapshot".
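
Applied to this case, a minimal sketch could look like the following; the exact snapshot names are an assumption (guessed from the state volumes BeforeCpanel and StartInstall) and should be confirmed with the list command first:

Code:
# list all snapshots on the full pool with the space they hold
zfs list -t snapshot -o name,used -r rpool

# destroy them by full name (dataset@snapshot) -- names assumed here
zfs destroy rpool/data/vm-304-disk-0@BeforeCpanel
zfs destroy rpool/data/vm-515-disk-0@StartInstall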
 
When working with ZFS, use the "zfs" and "zpool" commands, e.g. "zfs destroy pool/zvol@snapshot".
Code:
root@p1:/etc/pve# zfs destroy -f rpool/data/vm-125-disk-0
cannot destroy 'rpool/data/vm-125-disk-0': dataset is busy

root@p1:/etc/pve# qm destroy 125
can't remove VM 125 - protection mode enabled
 
Looks like I managed to free up a bit, but I still wonder why this happened... I'll need to analyze that later.

It still shows yellow on my VMs on ZFS in the GUI for some reason...

Code:
root@p1:/etc/pve# kill -9397049
-bash: kill: 9397049: invalid signal specification
root@p1:/etc/pve# kill -9 397049
root@p1:/etc/pve# fuser -am /dev/rpool/data/vm-125-disk-0
/dev/zd224:
root@p1:/etc/pve# qm destroy 125
can't remove VM 125 - protection mode enabled
root@p1:/etc/pve# zfs destroy -f rpool/data/vm-125-disk-0
root@p1:/etc/pve# zfs destroy -f rpool/data/vm-125-disk-0
cannot open 'rpool/data/vm-125-disk-0': dataset does not exist
root@p1:/etc/pve# qm destroy 125
can't remove VM 125 - protection mode enabled

root@p1:/etc/pve# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                   95G     0   95G   0% /dev
tmpfs                  19G   27M   19G   1% /run
rpool/ROOT/pve-1      175G  1.9G  173G   2% /
tmpfs                  95G   46M   95G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
rpool                 173G  128K  173G   1% /rpool
rpool/ROOT            173G  128K  173G   1% /rpool/ROOT
rpool/data            173G  128K  173G   1% /rpool/data
/dev/fuse             128M   40K  128M   1% /etc/pve
vm                    3.5T  256K  3.5T   1% /vm
vm/subvol-120-disk-0  600G  415G  186G  70% /vm/subvol-120-disk-0
vm/subvol-505-disk-0  8.0G  897M  7.2G  11% /vm/subvol-505-disk-0
vm/subvol-500-disk-0  300G  1.1G  299G   1% /vm/subvol-500-disk-0
vm/subvol-503-disk-0   10G 1018M  9.1G  10% /vm/subvol-503-disk-0
//77.40.236.80/home    33T   12T   21T  36% /mnt/pve/mars-iso
tmpfs                  19G     0   19G   0% /run/user/0
root@p1:/etc/pve# fuser -am /dev/rpool/data/vm-125-disk-0


This is what I get when I try to remove the checkmark for protected mode on the VM where I deleted the disk:
unable to open file '/etc/pve/nodes/p1/qemu-server/125.conf.tmp.1708607' - Input/output error (500)
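
Presumably, once the node can write to its filesystems again, the protection flag could also be cleared from the CLI rather than the GUI; a sketch, not verified here:

Code:
# clear the protection flag on VM 125, then remove the (now disk-less) VM
qm set 125 --protection 0
qm destroy 125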
 
Last edited:
Code:
root@p1:/etc/pve# zfs destroy -f rpool/data/vm-125-disk-0
cannot destroy 'rpool/data/vm-125-disk-0': dataset is busy

root@p1:/etc/pve# qm destroy 125
can't remove VM 125 - protection mode enabled
Make sure to destroy the snapshot and not the dataset, unless you have backups.
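
The @ in the name is what separates the two; roughly:

Code:
# removes only the snapshot; the dataset and its current data stay
zfs destroy pool/zvol@snapshot

# removes the entire dataset and all data on it -- not recoverable
zfs destroy pool/zvol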
 
