Hi guys,
I wanted to take snapshots of one of my VMs. However, even though my zpool still has enough free space left (or so I thought), I cannot take any snapshots. Proxmox aborts with the error "cannot snapshot: out of space". I know that other users have had similar issues, but I don't understand it in my case.
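For reference, reproducing this by hand on the shell should amount to something like the following (the snapshot name "test" is just an example; the dataset names are taken from the zfs list output below):
Code:
# hypothetical manual snapshot attempt for the two disks of VM 104
zfs snapshot tank/vm-104-disk-0@test
zfs snapshot tank/vm-104-disk-1@test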
Info about my zpool:
Code:
root@pve1:~# zfs list -t all
NAME                                     USED  AVAIL  REFER  MOUNTPOINT
fass                                    6.71T  2.07T    96K  /fass
fass/vm-104-disk-0                      6.71T  6.20T  2.58T  -
fass/vm-104-disk-0@now                     0B      -  2.58T  -
fass/vm-113-disk-1                      3.10M  2.07T  2.58T  -
tank                                    13.6T  12.9G  33.6G  /tank
tank/subvol-101-disk-0                  1.91G  2.09G  1.91G  /tank/subvol-101-disk-0
tank/subvol-101-disk-2                  65.5G  12.9G  65.5G  /tank/subvol-101-disk-2
tank/subvol-105-disk-0                  1.01G  1.04G   984M  /tank/subvol-105-disk-0
tank/subvol-105-disk-0@Inital           50.1M      -   734M  -
tank/subvol-107-disk-0                   993M  7.03G   993M  /tank/subvol-107-disk-0
tank/subvol-108-disk-0                  5.58G  10.4G  5.58G  /tank/subvol-108-disk-0
tank/subvol-110-disk-0                  14.8G  12.9G  11.8G  /tank/subvol-110-disk-0
tank/subvol-110-disk-0@before_upgrade   54.1M      -  10.1G  -
tank/subvol-110-disk-0@gitea_1_9         785M      -  10.9G  -
tank/subvol-110-disk-0@gitea_1_9_0      5.31M      -  9.94G  -
tank/subvol-111-disk-0                   402M  12.9G   402M  /tank/subvol-111-disk-0
tank/subvol-113-disk-0                  51.6M   972M  51.6M  /tank/subvol-113-disk-0
tank/vm-100-disk-0                       967G   630G   348G  -
tank/vm-100-disk-0@snap01               1.75G      -   348G  -
tank/vm-102-disk-0                      51.6G  28.8G  35.6G  -
tank/vm-104-disk-0                       957G   497G   339G  -
tank/vm-104-disk-0@Clone                9.40G      -  35.3G  -
tank/vm-104-disk-0@repair02             89.5G      -   341G  -
tank/vm-104-disk-0@manual               1.02G      -   341G  -
tank/vm-104-disk-1                      11.5T  6.06T  5.42T  -
tank/vm-104-disk-1@Clone                81.4K      -  81.4K  -
tank/vm-104-disk-1@repair02             9.35G      -  5.42T  -
tank/vm-104-disk-1@manual               81.3M      -  5.42T  -
tank/vm-106-disk-0                      68.7G  14.2G  60.8G  -
tank/vm-106-disk-0@setup                6.55G      -  11.9G  -
and free space:
Code:
root@pve1:~# zpool list
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
fass  18.1T  5.16T  13.0T        -         -    4%  28%  1.00x  ONLINE  -
tank    29T  13.3T  15.7T        -         -    6%  45%  1.00x  ONLINE  -
Code:
root@pve1:~# df -hT
Filesystem              Type      Size  Used  Avail  Use%  Mounted on
udev                    devtmpfs   32G     0    32G    0%  /dev
tmpfs                   tmpfs     6.3G  1.6M   6.3G    1%  /run
/dev/sde2               ext4       85G  4.4G    77G    6%  /
tmpfs                   tmpfs      32G   60M    32G    1%  /dev/shm
tmpfs                   tmpfs     5.0M     0   5.0M    0%  /run/lock
tmpfs                   tmpfs      32G     0    32G    0%  /sys/fs/cgroup
/dev/sde1               vfat      511M  132K   511M    1%  /boot/efi
tank                    zfs        47G   34G    13G   73%  /tank
tank/subvol-108-disk-0  zfs        16G  5.6G    11G   35%  /tank/subvol-108-disk-0
tank/subvol-101-disk-0  zfs       4.0G  2.0G   2.1G   48%  /tank/subvol-101-disk-0
tank/subvol-107-disk-0  zfs       8.0G  994M   7.1G   13%  /tank/subvol-107-disk-0
tank/subvol-111-disk-0  zfs        14G  403M    13G    3%  /tank/subvol-111-disk-0
tank/subvol-105-disk-0  zfs       2.0G  984M   1.1G   49%  /tank/subvol-105-disk-0
tank/subvol-101-disk-2  zfs        79G   66G    13G   84%  /tank/subvol-101-disk-2
tank/subvol-110-disk-0  zfs        25G   12G    13G   48%  /tank/subvol-110-disk-0
fass                    zfs       2.1T  128K   2.1T    1%  /fass
tank/subvol-113-disk-0  zfs       1.0G   52M   973M    6%  /tank/subvol-113-disk-0
/dev/zd144              ext4      4.0T  2.7T   1.2T   71%  /mnt/atmosphere
/dev/fuse               fuse       30M   32K    30M    1%  /etc/pve
tmpfs                   tmpfs     6.3G     0   6.3G    0%  /run/user/0
The VM I want to snapshot is VM 104, whose image is on "tank". I don't understand why my tank is already almost full. I always thought snapshots don't consume space; for example, my NAS, which also runs ZFS, has 2 TB used out of 6 TB, and I can take hundreds of snapshots of that data. What is the difference?
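In case it helps, these are the commands I would run next to see where the space on the two VM 104 zvols actually goes (standard zfs list -o space and zfs get invocations from the zfs(8) man page; output not pasted here):
Code:
# break usage down into snapshots, data, and reservations
zfs list -o space tank/vm-104-disk-0 tank/vm-104-disk-1

# check the reservation-related properties of the zvols
zfs get volsize,refreservation,usedbysnapshots tank/vm-104-disk-0 tank/vm-104-disk-1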