ZFS-ERROR Live-Migration: cannot create snapshot: out of space

Bluemeus

Hi all,

I've got a problem migrating a VM in a 2-node cluster from pve02 to pve03. The VM is a PBS 4 with 2 disks (OS 30 GB & data 500 GB). Both disks live on a local 2 TB NVMe ZFS pool; the storage is called `nvme-2tb` on both nodes. The VM is configured for HA with a replication job every 30 minutes.

While upgrading both nodes to PVE 9 and the PBS to PBS 4, I noticed that the replication jobs have been failing for weeks. The log says the replication fails because it runs out of space.

Code:
2025-12-11 23:12:42 use dedicated network address for sending migration traffic (172.16.10.3)
2025-12-11 23:12:42 starting migration of VM 130 to node 'pve03' (172.16.10.3)
2025-12-11 23:12:42 found local, replicated disk 'nvme-2tb:vm-130-disk-0' (attached)
2025-12-11 23:12:42 found local, replicated disk 'nvme-2tb:vm-130-disk-1' (attached)
2025-12-11 23:12:42 replicating disk images
2025-12-11 23:12:42 start replication job
2025-12-11 23:12:42 guest => VM 130, running => 0
2025-12-11 23:12:42 volumes => nvme-2tb:vm-130-disk-0,nvme-2tb:vm-130-disk-1
2025-12-11 23:12:44 create snapshot '__replicate_130-0_1765491162__' on nvme-2tb:vm-130-disk-0
2025-12-11 23:12:44 create snapshot '__replicate_130-0_1765491162__' on nvme-2tb:vm-130-disk-1
2025-12-11 23:12:44 delete previous replication snapshot '__replicate_130-0_1765491162__' on nvme-2tb:vm-130-disk-0
2025-12-11 23:12:44 end replication job with error: zfs error: cannot create snapshot 'nvme-2tb/vm-130-disk-1@__replicate_130-0_1765491162__': out of space
2025-12-11 23:12:44 ERROR: zfs error: cannot create snapshot 'nvme-2tb/vm-130-disk-1@__replicate_130-0_1765491162__': out of space
2025-12-11 23:12:44 aborting phase 1 - cleanup resources
2025-12-11 23:12:44 ERROR: migration aborted (duration 00:00:02): zfs error: cannot create snapshot 'nvme-2tb/vm-130-disk-1@__replicate_130-0_1765491162__': out of space
TASK ERROR: migration aborted
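
For reference, the state and last error of the replication jobs can also be checked on the CLI. A minimal sketch, assuming the standard pvesr tool that ships with PVE:

Bash:
# list all replication jobs defined in the cluster
pvesr list
# per-job status on this node: last sync, duration, failure count, state
pvesr status
# restrict the output to a single guest, e.g. VM 130
pvesr status --guest 130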

So I checked and found out that the ZFS storage was about 90% full. As far as I know, ZFS should not be filled beyond roughly 80%, but since the second node is only used for maintenance, this had not been seen as a problem. So I began to clean up the ZFS pool on node 2 (pve03), and I also removed the replicated copies of the disks of the PBS VM (130). Right now the free/unused space is about 1.3 TB, more than enough for 30 + 500 GB of data.

Code:
root@pve03:~# zfs list -r -o name,used,avail,refer,usedbysnapshots,usedbychildren,usedbyrefreservation nvme-2tb
NAME                                   USED  AVAIL  REFER  USEDSNAP  USEDCHILD  USEDREFRESERV
nvme-2tb                               391G  1.37T    96K        0B       391G             0B
nvme-2tb/subvol-110-disk-0            2.57G  23.1G  1.92G      666M         0B             0B
nvme-2tb/vm-101-disk-0                 117G  1.43T  44.6G     11.8G         0B          60.9G
nvme-2tb/vm-101-state-vor_Update      18.6G  1.39T  1.87G        0B         0B          16.7G
nvme-2tb/vm-111-disk-0                49.5G  1.41T  17.0G        0B         0B          32.5G
nvme-2tb/vm-211-disk-0                19.3G  1.39T  7.14G        0B         0B          12.2G
nvme-2tb/vm-211-disk-1                1.02G  1.38T    56K        0B         0B          1.02G
nvme-2tb/vm-212-disk-0                11.4G  1.38T  1.29G        0B         0B          10.2G
nvme-2tb/vm-221-disk-0                17.6G  1.39T  5.44G        0B         0B          12.2G
nvme-2tb/vm-221-disk-1                3.79G  1.38T  1.75G        0B         0B          2.03G
nvme-2tb/vm-221-disk-10               3.05G  1.38T    56K        0B         0B          3.05G
nvme-2tb/vm-221-disk-11               4.06G  1.38T    56K        0B         0B          4.06G
nvme-2tb/vm-221-disk-12               4.06G  1.38T    56K        0B         0B          4.06G
nvme-2tb/vm-221-disk-2                3.79G  1.38T  1.75G        0B         0B          2.03G
nvme-2tb/vm-221-disk-3                3.79G  1.38T  1.75G        0B         0B          2.03G
nvme-2tb/vm-221-disk-4                3.79G  1.38T  1.75G        0B         0B          2.03G
nvme-2tb/vm-221-disk-5                3.78G  1.38T  1.75G        0B         0B          2.03G
nvme-2tb/vm-221-disk-6                3.78G  1.38T  1.75G        0B         0B          2.03G
nvme-2tb/vm-221-disk-7                3.78G  1.38T  1.75G        0B         0B          2.03G
nvme-2tb/vm-221-disk-8                3.78G  1.38T  1.75G        0B         0B          2.03G
nvme-2tb/vm-221-disk-9                3.05G  1.38T    56K        0B         0B          3.05G
nvme-2tb/vm-222-disk-0                15.9G  1.39T  3.70G        0B         0B          12.2G
nvme-2tb/vm-223-disk-0                15.9G  1.39T  3.71G        0B         0B          12.2G
nvme-2tb/vm-301-disk-0                36.1G  1.41T  2.78G      837M         0B          32.5G
nvme-2tb/vm-301-state-preUpdate_25_1  8.62G  1.38T  1.07G        0B         0B          7.55G
nvme-2tb/vm-302-disk-0                32.5G  1.40T  2.80G        0B         0B          29.7G
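
To keep an eye on the 80% rule at pool level, something like this can be used (a sketch using standard zpool/zfs properties):

Bash:
# overall pool size, allocation, free space and capacity in percent
zpool list -o name,size,allocated,free,capacity,fragmentation nvme-2tb
# per-dataset breakdown of snapshot, child and reservation usage
zfs list -o space -r nvme-2tb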

Unfortunately I am still not able to migrate the VM from node 1 (pve02) to node 2 (pve03). The error is still present.

I have run a `zpool trim nvme-2tb` and also a `zpool scrub nvme-2tb`, and I rebooted both nodes more than twice, without any change in the error.
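
Trim and scrub progress can be double-checked as well; a sketch (the -t flag of zpool status additionally prints the per-vdev TRIM state):

Bash:
# scrub progress and result are part of the normal status output
zpool status nvme-2tb
# -t adds the TRIM status of each vdev
zpool status -t nvme-2tb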

So for now I am out of ideas, which is why I am starting my first thread here in the Proxmox forums. So please be nice. ;-)


Does anyone have an idea?
Thanks in advance.
 
Please share this from both nodes
Bash:
cat /etc/pve/storage.cfg
zfs list -t all -ospace,refreservation
zpool list
 
Node1 (pve02):
Bash:
root@pve02:~# cat /etc/pve/storage.cfg && zfs list -t all -ospace,refreservation && zpool list

dir: local
        path /var/lib/vz
        content vztmpl,iso,backup
        shared 0

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

zfspool: nvme-2tb
        pool nvme-2tb
        content rootdir,images
        mountpoint /nvme-2tb
        nodes pve03,pve02
        sparse 0

nfs: homeserver-nfs
        export /export/Proxmox
        path /mnt/pve/homeserver-nfs
        server 172.16.20.20
        content rootdir,iso,backup,vztmpl,images,snippets
        prune-backups keep-all=1

pbs: pve-backup
        datastore pbs-local
        server 192.168.172.155
        content backup
        fingerprint *******
        prune-backups keep-all=1
        username root@pam

pbs: pve-backup-nfs
        disable
        datastore homeserver-nfs
        server 192.168.172.155
        content backup
        fingerprint *******
        namespace BlueHive
        prune-backups keep-all=1
        username root@pam


NAME                                                             AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD  REFRESERV
nvme-2tb                                                          312G  1.45T        0B     96K             0B      1.45T       none
nvme-2tb/subvol-110-disk-0                                       23.1G  2.57G      671M   1.91G             0B         0B       none
nvme-2tb/subvol-110-disk-0@__migration__                             -   666M         -       -              -          -          -
nvme-2tb/subvol-110-disk-0@__replicate_110-0_1765532704__            -  5.64M         -       -              -          -          -
nvme-2tb/vm-101-disk-0                                            373G   118G     11.9G   45.5G          60.9G         0B      60.9G
nvme-2tb/vm-101-disk-0@vor_Update                                    -  11.9G         -       -              -          -          -
nvme-2tb/vm-101-disk-0@__replicate_101-0_1765533304__                -  18.6M         -       -              -          -          -
nvme-2tb/vm-101-state-vor_Update                                  329G  18.6G        0B   1.87G          16.7G         0B      16.7G
nvme-2tb/vm-101-state-vor_Update@__replicate_101-0_1765533304__      -     0B         -       -              -          -          -
nvme-2tb/vm-111-disk-0                                            344G  49.3G      187M   16.8G          32.3G         0B      32.5G
nvme-2tb/vm-111-disk-0@__replicate_111-0_1765532708__                -   187M         -       -              -          -          -
nvme-2tb/vm-112-disk-0                                            344G  36.9G        0B   4.43G          32.5G         0B      32.5G
nvme-2tb/vm-112-disk-0@__replicate_112-0_1747345501__                -     0B         -       -              -          -          -
nvme-2tb/vm-113-disk-0                                            324G  13.1G        0B    902M          12.2G         0B      12.2G
nvme-2tb/vm-113-disk-0@__replicate_113-0_1748026800__                -     0B         -       -              -          -          -
nvme-2tb/vm-130-disk-0                                            328G  44.5G     9.65G   18.2G          16.7G         0B      32.5G
nvme-2tb/vm-130-disk-0@__replicate_130-0_1752879600__                -  9.65G         -       -              -          -          -
nvme-2tb/vm-130-disk-1                                            499G   800G      178G    435G           187G         0B       508G
nvme-2tb/vm-130-disk-1@__replicate_130-0_1752879600__                -   178G         -       -              -          -          -
nvme-2tb/vm-141-disk-0                                            342G  32.5G        0B   2.09G          30.4G         0B      32.5G
nvme-2tb/vm-203-disk-0                                            332G  31.2G        0B   10.8G          20.3G         0B      20.3G
nvme-2tb/vm-203-disk-0@__replicate_203-0_1765461617__                -     0B         -       -              -          -          -
nvme-2tb/vm-204-disk-0                                            332G  28.9G        0B   8.55G          20.3G         0B      20.3G
nvme-2tb/vm-204-disk-0@__replicate_204-0_1765461620__                -     0B         -       -              -          -          -
nvme-2tb/vm-205-disk-0                                            322G  15.6G        0B   5.46G          10.2G         0B      10.2G
nvme-2tb/vm-205-disk-0@__replicate_205-0_1765461624__                -     0B         -       -              -          -          -
nvme-2tb/vm-211-disk-0                                            324G  19.3G        0B   7.14G          12.2G         0B      12.2G
nvme-2tb/vm-211-disk-0@__replicate_211-0_1765465209__                -     0B         -       -              -          -          -
nvme-2tb/vm-211-disk-1                                            313G  1.02G        0B     56K          1.02G         0B      1.02G
nvme-2tb/vm-211-disk-1@__replicate_211-0_1765465209__                -     0B         -       -              -          -          -
nvme-2tb/vm-212-disk-0                                            322G  11.4G        0B   1.29G          10.2G         0B      10.2G
nvme-2tb/vm-212-disk-0@__replicate_212-0_1765465214__                -     0B         -       -              -          -          -
nvme-2tb/vm-221-disk-0                                            324G  17.6G        0B   5.44G          12.2G         0B      12.2G
nvme-2tb/vm-221-disk-0@__replicate_221-0_1765465218__                -     0B         -       -              -          -          -
nvme-2tb/vm-221-disk-1                                            314G  3.79G        0B   1.75G          2.03G         0B      2.03G
nvme-2tb/vm-221-disk-1@__replicate_221-0_1765465218__                -     0B         -       -              -          -          -
nvme-2tb/vm-221-disk-10                                           315G  3.05G        0B     56K          3.05G         0B      3.05G
nvme-2tb/vm-221-disk-10@__replicate_221-0_1765465218__               -     0B         -       -              -          -          -
nvme-2tb/vm-221-disk-11                                           316G  4.06G        0B     56K          4.06G         0B      4.06G
nvme-2tb/vm-221-disk-11@__replicate_221-0_1765465218__               -     0B         -       -              -          -          -
nvme-2tb/vm-221-disk-12                                           316G  4.06G        0B     56K          4.06G         0B      4.06G
nvme-2tb/vm-221-disk-12@__replicate_221-0_1765465218__               -     0B         -       -              -          -          -
nvme-2tb/vm-221-disk-2                                            314G  3.79G        0B   1.75G          2.03G         0B      2.03G
nvme-2tb/vm-221-disk-2@__replicate_221-0_1765465218__                -     0B         -       -              -          -          -
nvme-2tb/vm-221-disk-3                                            314G  3.79G        0B   1.75G          2.03G         0B      2.03G
nvme-2tb/vm-221-disk-3@__replicate_221-0_1765465218__                -     0B         -       -              -          -          -
nvme-2tb/vm-221-disk-4                                            314G  3.79G        0B   1.75G          2.03G         0B      2.03G
nvme-2tb/vm-221-disk-4@__replicate_221-0_1765465218__                -     0B         -       -              -          -          -
nvme-2tb/vm-221-disk-5                                            314G  3.78G        0B   1.75G          2.03G         0B      2.03G
nvme-2tb/vm-221-disk-5@__replicate_221-0_1765465218__                -     0B         -       -              -          -          -
nvme-2tb/vm-221-disk-6                                            314G  3.78G        0B   1.75G          2.03G         0B      2.03G
nvme-2tb/vm-221-disk-6@__replicate_221-0_1765465218__                -     0B         -       -              -          -          -
nvme-2tb/vm-221-disk-7                                            314G  3.78G        0B   1.75G          2.03G         0B      2.03G
nvme-2tb/vm-221-disk-7@__replicate_221-0_1765465218__                -     0B         -       -              -          -          -
nvme-2tb/vm-221-disk-8                                            314G  3.78G        0B   1.75G          2.03G         0B      2.03G
nvme-2tb/vm-221-disk-8@__replicate_221-0_1765465218__                -     0B         -       -              -          -          -
nvme-2tb/vm-221-disk-9                                            315G  3.05G        0B     56K          3.05G         0B      3.05G
nvme-2tb/vm-221-disk-9@__replicate_221-0_1765465218__                -     0B         -       -              -          -          -
nvme-2tb/vm-222-disk-0                                            324G  15.9G        0B   3.70G          12.2G         0B      12.2G
nvme-2tb/vm-222-disk-0@__replicate_222-0_1765465236__                -     0B         -       -              -          -          -
nvme-2tb/vm-223-disk-0                                            324G  15.9G        0B   3.71G          12.2G         0B      12.2G
nvme-2tb/vm-223-disk-0@__replicate_223-0_1765465241__                -     0B         -       -              -          -          -
nvme-2tb/vm-224-disk-0                                            344G  44.9G        0B   12.4G          32.5G         0B      32.5G
nvme-2tb/vm-224-disk-0@__replicate_224-0_1765462546__                -     0B         -       -              -          -          -
nvme-2tb/vm-225-disk-0                                            403G   124G     8.19G   24.2G          91.4G         0B      91.4G
nvme-2tb/vm-225-disk-0@before-LR6                                    -  8.19G         -       -              -          -          -
nvme-2tb/vm-225-disk-0@__replicate_225-0_1765462550__                -     0B         -       -              -          -          -


NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
nvme-2tb  1.81T   833G  1023G        -         -    34%    44%  1.00x    ONLINE  -
 
Node3 (pve03)

Bash:
root@pve03:~# cat /etc/pve/storage.cfg && zfs list -t all -ospace,refreservation && zpool list
dir: local
        path /var/lib/vz
        content vztmpl,iso,backup
        shared 0

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

zfspool: nvme-2tb
        pool nvme-2tb
        content rootdir,images
        mountpoint /nvme-2tb
        nodes pve03,pve02
        sparse 0

nfs: homeserver-nfs
        export /export/Proxmox
        path /mnt/pve/homeserver-nfs
        server 172.16.20.20
        content rootdir,iso,backup,vztmpl,images,snippets
        prune-backups keep-all=1

pbs: pve-backup
        datastore pbs-local
        server 192.168.172.155
        content backup
        fingerprint *******
        prune-backups keep-all=1
        username root@pam

pbs: pve-backup-nfs
        disable
        datastore homeserver-nfs
        server 192.168.172.155
        content backup
        fingerprint *******
        namespace BlueHive
        prune-backups keep-all=1
        username root@pam



NAME                                                             AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD  REFRESERV
nvme-2tb                                                         1.37T   392G        0B     96K             0B       392G       none
nvme-2tb/subvol-110-disk-0                                       23.1G  2.56G      666M   1.91G             0B         0B       none
nvme-2tb/subvol-110-disk-0@__migration__                             -   666M         -       -              -          -          -
nvme-2tb/subvol-110-disk-0@__replicate_110-0_1765533604__            -     0B         -       -              -          -          -
nvme-2tb/vm-101-disk-0                                           1.43T   118G     11.9G   45.5G          60.9G         0B      60.9G
nvme-2tb/vm-101-disk-0@vor_Update                                    -  11.9G         -       -              -          -          -
nvme-2tb/vm-101-disk-0@__replicate_101-0_1765533615__                -     0B         -       -              -          -          -
nvme-2tb/vm-101-state-vor_Update                                 1.39T  18.6G        0B   1.87G          16.7G         0B      16.7G
nvme-2tb/vm-101-state-vor_Update@__replicate_101-0_1765533615__      -     0B         -       -              -          -          -
nvme-2tb/vm-111-disk-0                                           1.41T  49.3G        0B   16.8G          32.5G         0B      32.5G
nvme-2tb/vm-111-disk-0@__replicate_111-0_1765533608__                -     0B         -       -              -          -          -
nvme-2tb/vm-211-disk-0                                           1.39T  19.3G        0B   7.14G          12.2G         0B      12.2G
nvme-2tb/vm-211-disk-0@__replicate_211-0_1765465209__                -     0B         -       -              -          -          -
nvme-2tb/vm-211-disk-1                                           1.37T  1.02G        0B     56K          1.02G         0B      1.02G
nvme-2tb/vm-211-disk-1@__replicate_211-0_1765465209__                -     0B         -       -              -          -          -
nvme-2tb/vm-212-disk-0                                           1.38T  11.4G        0B   1.29G          10.2G         0B      10.2G
nvme-2tb/vm-212-disk-0@__replicate_212-0_1765465214__                -     0B         -       -              -          -          -
nvme-2tb/vm-221-disk-0                                           1.39T  17.6G        0B   5.44G          12.2G         0B      12.2G
nvme-2tb/vm-221-disk-0@__replicate_221-0_1765465218__                -     0B         -       -              -          -          -
nvme-2tb/vm-221-disk-1                                           1.38T  3.79G        0B   1.75G          2.03G         0B      2.03G
nvme-2tb/vm-221-disk-1@__replicate_221-0_1765465218__                -     0B         -       -              -          -          -
nvme-2tb/vm-221-disk-10                                          1.38T  3.05G        0B     56K          3.05G         0B      3.05G
nvme-2tb/vm-221-disk-10@__replicate_221-0_1765465218__               -     0B         -       -              -          -          -
nvme-2tb/vm-221-disk-11                                          1.38T  4.06G        0B     56K          4.06G         0B      4.06G
nvme-2tb/vm-221-disk-11@__replicate_221-0_1765465218__               -     0B         -       -              -          -          -
nvme-2tb/vm-221-disk-12                                          1.38T  4.06G        0B     56K          4.06G         0B      4.06G
nvme-2tb/vm-221-disk-12@__replicate_221-0_1765465218__               -     0B         -       -              -          -          -
nvme-2tb/vm-221-disk-2                                           1.38T  3.79G        0B   1.75G          2.03G         0B      2.03G
nvme-2tb/vm-221-disk-2@__replicate_221-0_1765465218__                -     0B         -       -              -          -          -
nvme-2tb/vm-221-disk-3                                           1.38T  3.79G        0B   1.75G          2.03G         0B      2.03G
nvme-2tb/vm-221-disk-3@__replicate_221-0_1765465218__                -     0B         -       -              -          -          -
nvme-2tb/vm-221-disk-4                                           1.38T  3.79G        0B   1.75G          2.03G         0B      2.03G
nvme-2tb/vm-221-disk-4@__replicate_221-0_1765465218__                -     0B         -       -              -          -          -
nvme-2tb/vm-221-disk-5                                           1.38T  3.78G        0B   1.75G          2.03G         0B      2.03G
nvme-2tb/vm-221-disk-5@__replicate_221-0_1765465218__                -     0B         -       -              -          -          -
nvme-2tb/vm-221-disk-6                                           1.38T  3.78G        0B   1.75G          2.03G         0B      2.03G
nvme-2tb/vm-221-disk-6@__replicate_221-0_1765465218__                -     0B         -       -              -          -          -
nvme-2tb/vm-221-disk-7                                           1.38T  3.78G        0B   1.75G          2.03G         0B      2.03G
nvme-2tb/vm-221-disk-7@__replicate_221-0_1765465218__                -     0B         -       -              -          -          -
nvme-2tb/vm-221-disk-8                                           1.38T  3.78G        0B   1.75G          2.03G         0B      2.03G
nvme-2tb/vm-221-disk-8@__replicate_221-0_1765465218__                -     0B         -       -              -          -          -
nvme-2tb/vm-221-disk-9                                           1.38T  3.05G        0B     56K          3.05G         0B      3.05G
nvme-2tb/vm-221-disk-9@__replicate_221-0_1765465218__                -     0B         -       -              -          -          -
nvme-2tb/vm-222-disk-0                                           1.39T  15.9G        0B   3.70G          12.2G         0B      12.2G
nvme-2tb/vm-222-disk-0@__replicate_222-0_1765465236__                -     0B         -       -              -          -          -
nvme-2tb/vm-223-disk-0                                           1.39T  15.9G        0B   3.71G          12.2G         0B      12.2G
nvme-2tb/vm-223-disk-0@__replicate_223-0_1765465241__                -     0B         -       -              -          -          -
nvme-2tb/vm-301-disk-0                                           1.41T  36.1G      837M   2.78G          32.5G         0B      32.5G
nvme-2tb/vm-301-disk-0@preUpdate_25_1                                -   826M         -       -              -          -          -
nvme-2tb/vm-301-disk-0@Test_RESET                                    -  10.6M         -       -              -          -          -
nvme-2tb/vm-301-state-preUpdate_25_1                             1.38T  8.62G        0B   1.07G          7.55G         0B      8.62G
nvme-2tb/vm-302-disk-0                                           1.40T  32.5G        0B   2.80G          29.7G         0B      32.5G

NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
nvme-2tb  1.81T   122G  1.69T        -         -     5%     6%  1.00x    ONLINE  -
 
Today I tried again: after freeing the space and rebooting everything, I started a migration via PDM, but without success. The error is still the same and I still have no idea why it is not possible to migrate the storage.

Code:
2025-12-12 19:35:21 conntrack state migration not supported or disabled, active connections might get dropped
2025-12-12 19:35:21 use dedicated network address for sending migration traffic (172.16.10.3)
2025-12-12 19:35:21 starting migration of VM 130 to node 'pve03' (172.16.10.3)
2025-12-12 19:35:21 found local, replicated disk 'nvme-2tb:vm-130-disk-0' (attached)
2025-12-12 19:35:21 found local, replicated disk 'nvme-2tb:vm-130-disk-1' (attached)
2025-12-12 19:35:21 scsi0: start tracking writes using block-dirty-bitmap 'repl_scsi0'
2025-12-12 19:35:21 scsi1: start tracking writes using block-dirty-bitmap 'repl_scsi1'
2025-12-12 19:35:21 replicating disk images
2025-12-12 19:35:21 start replication job
2025-12-12 19:35:21 guest => VM 130, running => 40210
2025-12-12 19:35:21 volumes => nvme-2tb:vm-130-disk-0,nvme-2tb:vm-130-disk-1
2025-12-12 19:35:23 freeze guest filesystem
2025-12-12 19:35:23 create snapshot '__replicate_130-0_1765564521__' on nvme-2tb:vm-130-disk-0
2025-12-12 19:35:23 create snapshot '__replicate_130-0_1765564521__' on nvme-2tb:vm-130-disk-1
2025-12-12 19:35:23 thaw guest filesystem
2025-12-12 19:35:23 delete previous replication snapshot '__replicate_130-0_1765564521__' on nvme-2tb:vm-130-disk-0
2025-12-12 19:35:23 end replication job with error: zfs error: cannot create snapshot 'nvme-2tb/vm-130-disk-1@__replicate_130-0_1765564521__': out of space
2025-12-12 19:35:23 ERROR: zfs error: cannot create snapshot 'nvme-2tb/vm-130-disk-1@__replicate_130-0_1765564521__': out of space
2025-12-12 19:35:23 aborting phase 1 - cleanup resources
2025-12-12 19:35:23 scsi1: removing block-dirty-bitmap 'repl_scsi1'
2025-12-12 19:35:23 scsi0: removing block-dirty-bitmap 'repl_scsi0'
2025-12-12 19:35:23 ERROR: migration aborted (duration 00:00:02): zfs error: cannot create snapshot 'nvme-2tb/vm-130-disk-1@__replicate_130-0_1765564521__': out of space
TASK ERROR: migration aborted

At the moment I am preparing to migrate a productive VMware 7.0+ environment to a setup built exactly like the one described above. But without a comprehensible solution, I cannot go ahead with a productive migration. :(

Hopefully someone has an idea how to solve this...

Best regards.
 
Hello, I would clean up / delete the ZFS snapshots of both disks:
* 'nvme-2tb:vm-130-disk-0'
* 'nvme-2tb:vm-130-disk-1'

Code:
zfs list -t snapshot -r nvme-2tb/vm-130-disk-0
zfs list -t snapshot -r nvme-2tb/vm-130-disk-1
zfs destroy -r nvme-2tb/vm-130-disk-0@<snapshot-name> # double-check the snapshot name first!
zfs destroy -r nvme-2tb/vm-130-disk-1@<snapshot-name> # double-check the snapshot name first!
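
If several stale replication snapshots have piled up, a dry run helps before actually destroying anything. A sketch, using one of the snapshot names visible in the zfs output earlier in the thread (-n only simulates the destroy, -v prints how much space would be reclaimed):

Bash:
# script-friendly list of all snapshots below the two volumes
zfs list -H -t snapshot -o name -r nvme-2tb/vm-130-disk-0 nvme-2tb/vm-130-disk-1
# dry run: show what destroying the stale snapshot would free
zfs destroy -nv nvme-2tb/vm-130-disk-1@__replicate_130-0_1752879600__
# rerun without -n to actually delete it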
 
Good evening!

Bash:
root@pve02:~# zfs list -t snapshot -r nvme-2tb/vm-130-disk-0
NAME                                                    USED  AVAIL  REFER  MOUNTPOINT
nvme-2tb/vm-130-disk-0@__replicate_130-0_1752879600__  9.66G      -  12.0G  -
Bash:
root@pve02:~# zfs list -t snapshot -r nvme-2tb/vm-130-disk-1
NAME                                                    USED  AVAIL  REFER  MOUNTPOINT
nvme-2tb/vm-130-disk-1@__replicate_130-0_1752879600__   179G      -   292G  -

@news: That worked! I am absolutely surprised but very happy. :)

Just for my understanding: there was a (local) snapshot of the 500 GB disk, and because of this the 500 GB disk plus the snapshot of about 292 GB were too much space? (That shouldn't be a problem with 1.7 TB of free space.) Unfortunately there was no way to see this from the GUI, and I am a little bit "frustrated" about that, because I don't know when and how these snapshots were created and how long this situation had been going on. Perhaps it has something to do with replication, but I don't know why.

With `zfs list -t snapshot` on pve02 I can see that there are snapshots for every disk on this node, but I don't know why. The only two snapshots I am aware of are the ones with a real title that I named when taking a snapshot in PVE.
Code:
root@pve02:~# zfs list -t snapshot
NAME                                                              USED  AVAIL  REFER  MOUNTPOINT
nvme-2tb/subvol-110-disk-0@__migration__                          666M      -  1.14G  -
nvme-2tb/subvol-110-disk-0@__replicate_110-0_1765657805__        4.71M      -  1.92G  -
nvme-2tb/vm-101-disk-0@vor_Update                                12.9G      -  46.2G  -
nvme-2tb/vm-101-disk-0@__replicate_101-0_1765658405__            5.46M      -  45.6G  -
nvme-2tb/vm-101-state-vor_Update@__replicate_101-0_1765658405__     0B      -  1.87G  -
nvme-2tb/vm-111-disk-0@__replicate_111-0_1765657809__             251M      -  16.7G  -
nvme-2tb/vm-112-disk-0@__replicate_112-0_1747345501__               0B      -  4.43G  -
nvme-2tb/vm-113-disk-0@__replicate_113-0_1748026800__               0B      -   902M  -
nvme-2tb/vm-130-disk-0@__replicate_130-0_1752879600__            9.66G      -  12.0G  -
nvme-2tb/vm-203-disk-0@__replicate_203-0_1765461617__               0B      -  10.8G  -
nvme-2tb/vm-204-disk-0@__replicate_204-0_1765461620__               0B      -  8.55G  -
nvme-2tb/vm-205-disk-0@__replicate_205-0_1765461624__               0B      -  5.46G  -
nvme-2tb/vm-211-disk-0@__replicate_211-0_1765465209__               0B      -  7.14G  -
nvme-2tb/vm-211-disk-1@__replicate_211-0_1765465209__               0B      -    56K  -
nvme-2tb/vm-212-disk-0@__replicate_212-0_1765465214__               0B      -  1.29G  -
nvme-2tb/vm-221-disk-0@__replicate_221-0_1765465218__               0B      -  5.44G  -
nvme-2tb/vm-221-disk-1@__replicate_221-0_1765465218__               0B      -  1.75G  -
nvme-2tb/vm-221-disk-10@__replicate_221-0_1765465218__              0B      -    56K  -
nvme-2tb/vm-221-disk-11@__replicate_221-0_1765465218__              0B      -    56K  -
nvme-2tb/vm-221-disk-12@__replicate_221-0_1765465218__              0B      -    56K  -
nvme-2tb/vm-221-disk-2@__replicate_221-0_1765465218__               0B      -  1.75G  -
nvme-2tb/vm-221-disk-3@__replicate_221-0_1765465218__               0B      -  1.75G  -
nvme-2tb/vm-221-disk-4@__replicate_221-0_1765465218__               0B      -  1.75G  -
nvme-2tb/vm-221-disk-5@__replicate_221-0_1765465218__               0B      -  1.75G  -
nvme-2tb/vm-221-disk-6@__replicate_221-0_1765465218__               0B      -  1.75G  -
nvme-2tb/vm-221-disk-7@__replicate_221-0_1765465218__               0B      -  1.75G  -
nvme-2tb/vm-221-disk-8@__replicate_221-0_1765465218__               0B      -  1.75G  -
nvme-2tb/vm-221-disk-9@__replicate_221-0_1765465218__               0B      -    56K  -
nvme-2tb/vm-222-disk-0@__replicate_222-0_1765465236__               0B      -  3.70G  -
nvme-2tb/vm-223-disk-0@__replicate_223-0_1765465241__               0B      -  3.71G  -
nvme-2tb/vm-224-disk-0@__replicate_224-0_1765462546__               0B      -  12.4G  -
nvme-2tb/vm-225-disk-0@before-LR6                                8.19G      -  10.3G  -
nvme-2tb/vm-225-disk-0@__replicate_225-0_1765462550__               0B      -  24.2G  -
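
The number at the end of the __replicate_<vmid>-<jobid>_<epoch>__ names is a Unix timestamp, so the age of a stale replication snapshot can be checked; a sketch (ZFS also records the creation time directly):

Bash:
# decode the epoch embedded in the snapshot name
date -d @1752879600
# or ask ZFS when the snapshot was actually created
zfs get creation nvme-2tb/vm-130-disk-0@__replicate_130-0_1752879600__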

So thanks a lot for the quick help and the solution!

If it's not too much to ask, I would be grateful for an explanation. :)



Best regards.
 
Please have a look at the German tool check-zfs-replication:

# https://github.com/bashclub/check-zfs-replication

Then you can run checkzfs --sourceonly.
OK, thanks for this nice tip. :-)

But I still don't understand why replicating 500 GB plus a roughly 292 GB snapshot fails with an "out of space" error when the target storage has about 1.7 TB of free space.

What could be the cause of this?

Edit: I had a look at plain `zfs list` and there I can see about 944 GB of disk usage. No idea where this comes from, but it is still below the 1.7 TB of free space.

Code:
root@pve02:~# zfs list
NAME                               USED  AVAIL  REFER  MOUNTPOINT
nvme-2tb                          1.61T   151G    96K  /nvme-2tb
nvme-2tb/subvol-110-disk-0        2.58G  23.1G  1.92G  /nvme-2tb/subvol-110-disk-0
nvme-2tb/vm-101-disk-0             119G   212G  45.6G  -
nvme-2tb/vm-101-state-vor_Update  18.6G   168G  1.87G  -
nvme-2tb/vm-111-disk-0            49.2G   184G  16.7G  -
nvme-2tb/vm-112-disk-0            36.9G   184G  4.43G  -
nvme-2tb/vm-113-disk-0            13.1G   163G   902M  -
nvme-2tb/vm-130-disk-0            60.4G   184G  18.2G  -
nvme-2tb/vm-130-disk-1             944G   659G   436G  -
nvme-2tb/vm-141-disk-0            32.5G   182G  2.09G  -
nvme-2tb/vm-203-disk-0            31.2G   171G  10.8G  -
nvme-2tb/vm-204-disk-0            28.9G   171G  8.55G  -
nvme-2tb/vm-205-disk-0            15.6G   161G  5.46G  -
nvme-2tb/vm-211-disk-0            19.3G   163G  7.14G  -
nvme-2tb/vm-211-disk-1            1.02G   152G    56K  -
nvme-2tb/vm-212-disk-0            11.4G   161G  1.29G  -
nvme-2tb/vm-221-disk-0            17.6G   163G  5.44G  -
nvme-2tb/vm-221-disk-1            3.79G   153G  1.75G  -
nvme-2tb/vm-221-disk-10           3.05G   154G    56K  -
nvme-2tb/vm-221-disk-11           4.06G   155G    56K  -
nvme-2tb/vm-221-disk-12           4.06G   155G    56K  -
nvme-2tb/vm-221-disk-2            3.79G   153G  1.75G  -
nvme-2tb/vm-221-disk-3            3.79G   153G  1.75G  -
nvme-2tb/vm-221-disk-4            3.79G   153G  1.75G  -
nvme-2tb/vm-221-disk-5            3.78G   153G  1.75G  -
nvme-2tb/vm-221-disk-6            3.78G   153G  1.75G  -
nvme-2tb/vm-221-disk-7            3.78G   153G  1.75G  -
nvme-2tb/vm-221-disk-8            3.78G   153G  1.75G  -
nvme-2tb/vm-221-disk-9            3.05G   154G    56K  -
nvme-2tb/vm-222-disk-0            15.9G   163G  3.70G  -
nvme-2tb/vm-223-disk-0            15.9G   163G  3.71G  -
nvme-2tb/vm-224-disk-0            44.9G   184G  12.4G  -
nvme-2tb/vm-225-disk-0             124G   243G  24.2G  -

Here I can see nvme-2tb/vm-130-disk-1 944G 659G 436G -

Inside PBS the disk is only filled with about 283 GB, so I cannot make sense of it. :confused:
 
Please read:
Code:
root@pve02:~# zfs list
NAME                               USED  AVAIL  REFER  MOUNTPOINT
nvme-2tb/vm-130-disk-1             944G   659G   436G  -

Your ZFS dataset vm-130-disk-1 references 436G but uses 944G = 436G + X, where X is held by snapshots (and the reservation),
so you can delete some of the snapshots.

The same applies to the other ZFS datasets.
Code:
nvme-2tb/vm-101-state-vor_Update  18.6G   168G  1.87G  -
18.6G = 1.87G + Y; Y ~ snapshot and/or refreservation overhead
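
To see exactly where the difference between USED and REFER comes from, the per-dataset space breakdown can be queried; a sketch using standard ZFS properties (usedbysnapshots vs. usedbyrefreservation shows whether snapshots or the zvol's reservation hold the space):

Bash:
# full breakdown for the problematic volume
zfs get used,referenced,usedbysnapshots,usedbyrefreservation,refreservation,volsize nvme-2tb/vm-130-disk-1
# list just the snapshots of that volume with their individual usage
zfs list -t snapshot -r nvme-2tb/vm-130-disk-1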