ZFS snapshot fails - out of space

yena

Hello,
on my server I can't create a new snapshot:

-----------------------------------------------------------------------
root@nodo5:~# zfs snapshot rpool/KVM/vm-105-disk-1@snap1
cannot create snapshot 'rpool/KVM/vm-105-disk-1@snap1': out of space
root@nodo5:~#
root@nodo5:~# zfs list -t all
NAME                               USED  AVAIL  REFER  MOUNTPOINT
rpool                             1.63T   398G    73K  /rpool
rpool/BACKUP                      57.5K   398G  57.5K  /rpool/BACKUP
rpool/KVM                         1.62T   398G  57.5K  /rpool/KVM
rpool/KVM/vm-105-disk-1           1.62T  1.55T   470G  -
rpool/KVM/vm-105-disk-1@__base__  4.01G      -  7.86G  -
rpool/LXC                         57.5K   398G  57.5K  /rpool/LXC
rpool/ROOT                        2.51G   398G    96K  /rpool/ROOT
rpool/ROOT/pve-1                  2.51G   398G  2.51G  /
rpool/data                          96K   398G    96K  /rpool/data
rpool/swap                        8.50G   406G  1.35G  -

-----------------------------------------------------------------------

But it seems I have 1.55T available...
Any idea?

Thanks!
 
Your pool is full:

rpool      1.63T  398G    73K  /rpool
rpool/KVM  1.62T  398G  57.5K  /rpool/KVM

One hint: never fill up a COW filesystem like ZFS. 100% full is really bad; at that point even deleting files may no longer be possible. For good performance, better keep it under 70-80%.
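
One way to see where the space is actually going is the `-o space` shorthand of zfs list (a sketch using the pool name from this thread):

Code:
# Space accounting per dataset: USEDSNAP (snapshots), USEDDS (live data),
# USEDREFRESERV (reservations) and USEDCHILD are broken out separately.
zfs list -o space -r rpool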
 

yena

> Your pool is full
I see 398G free, where do you see that it is full?
Thanks
 

yena

> Please also post (inside CODE tags for better readability):
> zpool list
root@nodo5:~# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  2.09T   480G  1.62T         -    34%    22%  1.00x  ONLINE  -


FREE: 1.62T
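
A note on the numbers: `zpool list` FREE is raw pool-level free space and does not subtract dataset reservations, while the AVAIL column of `zfs list` does. That is why the pool shows 1.62T free while the datasets only have 398G available. The reservations can be listed directly (a sketch using this thread's pool name):

Code:
# Reservations do not show up in `zpool list` FREE, but they do reduce
# the AVAIL that `zfs list` reports for every other dataset.
zfs get -r -t volume refreservation,usedbyrefreservation rpool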
 

LnxBil

Thank you for NOT using CODE tags - without them the output is rendered in a proportional font and the column alignment with spaces falls apart, as you just demonstrated.

Now, please also post

Code:
zfs get all rpool/KVM/vm-105-disk-1
 

yena

> Now, please also post:
> zfs get all rpool/KVM/vm-105-disk-1

root@nodo5:~# zfs get all rpool/KVM/vm-105-disk-1
NAME                     PROPERTY              VALUE                  SOURCE
rpool/KVM/vm-105-disk-1  type                  volume                 -
rpool/KVM/vm-105-disk-1  creation              Thu Aug  3 15:16 2017  -
rpool/KVM/vm-105-disk-1  used                  1.62T                  -
rpool/KVM/vm-105-disk-1  available             1.55T                  -
rpool/KVM/vm-105-disk-1  referenced            470G                   -
rpool/KVM/vm-105-disk-1  compressratio         1.41x                  -
rpool/KVM/vm-105-disk-1  reservation           none                   default
rpool/KVM/vm-105-disk-1  volsize               1.56T                  local
rpool/KVM/vm-105-disk-1  volblocksize          8K                     default
rpool/KVM/vm-105-disk-1  checksum              on                     default
rpool/KVM/vm-105-disk-1  compression           on                     inherited from rpool
rpool/KVM/vm-105-disk-1  readonly              off                    default
rpool/KVM/vm-105-disk-1  createtxg             887                    -
rpool/KVM/vm-105-disk-1  copies                1                      default
rpool/KVM/vm-105-disk-1  refreservation        1.61T                  local
rpool/KVM/vm-105-disk-1  guid                  4327597119344679116    -
rpool/KVM/vm-105-disk-1  primarycache          all                    default
rpool/KVM/vm-105-disk-1  secondarycache        all                    default
rpool/KVM/vm-105-disk-1  usedbysnapshots       4.01G                  -
rpool/KVM/vm-105-disk-1  usedbydataset         470G                   -
rpool/KVM/vm-105-disk-1  usedbychildren        0B                     -
rpool/KVM/vm-105-disk-1  usedbyrefreservation  1.16T                  -
rpool/KVM/vm-105-disk-1  logbias               latency                default
rpool/KVM/vm-105-disk-1  dedup                 off                    default
rpool/KVM/vm-105-disk-1  mlslabel              none                   default
rpool/KVM/vm-105-disk-1  sync                  standard               inherited from rpool
rpool/KVM/vm-105-disk-1  refcompressratio      1.41x                  -
rpool/KVM/vm-105-disk-1  written               466G                   -
rpool/KVM/vm-105-disk-1  logicalused           658G                   -
rpool/KVM/vm-105-disk-1  logicalreferenced     650G                   -
rpool/KVM/vm-105-disk-1  volmode               default                default
rpool/KVM/vm-105-disk-1  snapshot_limit        none                   default
rpool/KVM/vm-105-disk-1  snapshot_count        none                   default
rpool/KVM/vm-105-disk-1  snapdev               hidden                 default
rpool/KVM/vm-105-disk-1  context               none                   default
rpool/KVM/vm-105-disk-1  fscontext             none                   default
rpool/KVM/vm-105-disk-1  defcontext            none                   default
rpool/KVM/vm-105-disk-1  rootcontext           none                   default
rpool/KVM/vm-105-disk-1  redundant_metadata    all                    default

Thanks!
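
The key lines in the output above are refreservation (1.61T) and referenced (470G). Snapshotting a zvol that has a refreservation requires enough free space for every currently referenced block to be overwritten later while still honoring the reservation, so this snapshot needs roughly the referenced 470G free; the pool only has 398G available, hence "out of space". The relevant properties can be checked in one go:

Code:
# The three numbers that decide whether the snapshot fits:
# refreservation (guaranteed space), referenced (data the snapshot would pin)
# and available (free space the dataset can draw on).
zfs get refreservation,referenced,available rpool/KVM/vm-105-disk-1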
 

LnxBil

> Many thanks, I'll make a full backup and then try!

Very good, but you do not need a full backup just to unset the refreservation; you are only editing ZFS metadata.
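
For reference, unsetting the refreservation is a single property change (a sketch using the zvol from this thread; note that without the reservation the zvol is no longer guaranteed its space and can fail writes if the pool fills up):

Code:
# Drop the space guarantee; this is a metadata-only change and immediately
# releases the 1.16T currently held as usedbyrefreservation.
zfs set refreservation=none rpool/KVM/vm-105-disk-1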

Do you have the "thin provisioning" check box activated in your PVE storage configuration?
 

yena

> Do you have the "thin provisioning" check box activated in your PVE storage configuration?
No, it is not flagged, I only noticed it now! I think I have to flag it, right?
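
For anyone hitting the same issue: the "thin provisioning" checkbox corresponds to the sparse option of a zfspool storage in /etc/pve/storage.cfg (the storage name below is hypothetical; the pool matches this thread). It only affects disks created after the change; existing zvols keep their refreservation until it is unset by hand.

Code:
# /etc/pve/storage.cfg (hypothetical entry) - "sparse" makes PVE create
# new zvols without a refreservation; existing disks are not touched.
zfspool: kvm-zfs
        pool rpool/KVM
        content images
        sparse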
 
