[SOLVED] Replication fails with "out of space" despite empty target storage

corin.corvus

Member
Apr 8, 2020
Hi,

I have a problem.

I want to replicate a server with multiple HDDs.
[screenshot: the VM's disks]

If I enable replication to another node, I get an "out of space" error.
[screenshot: the replication error]
Code:
2023-07-21 14:04:06 132-0: start replication job
2023-07-21 14:04:07 132-0: guest => VM 132, running => 1906
2023-07-21 14:04:07 132-0: volumes => ZFS-01:vm-132-disk-0,ZFS-01:vm-132-disk-1,ZFS-01:vm-132-disk-2,ZFS-01:vm-132-disk-3,ZFS-01:vm-132-disk-4
2023-07-21 14:04:08 132-0: freeze guest filesystem
2023-07-21 14:04:09 132-0: create snapshot '__replicate_132-0_1689941046__' on ZFS-01:vm-132-disk-0
2023-07-21 14:04:09 132-0: create snapshot '__replicate_132-0_1689941046__' on ZFS-01:vm-132-disk-1
2023-07-21 14:04:09 132-0: create snapshot '__replicate_132-0_1689941046__' on ZFS-01:vm-132-disk-2
2023-07-21 14:04:09 132-0: create snapshot '__replicate_132-0_1689941046__' on ZFS-01:vm-132-disk-3
2023-07-21 14:04:09 132-0: thaw guest filesystem
2023-07-21 14:04:09 132-0: delete previous replication snapshot '__replicate_132-0_1689941046__' on ZFS-01:vm-132-disk-0
2023-07-21 14:04:09 132-0: delete previous replication snapshot '__replicate_132-0_1689941046__' on ZFS-01:vm-132-disk-1
2023-07-21 14:04:09 132-0: delete previous replication snapshot '__replicate_132-0_1689941046__' on ZFS-01:vm-132-disk-2
2023-07-21 14:04:09 132-0: end replication job with error: zfs error: cannot create snapshot 'ZFS-01/vm-132-disk-3@__replicate_132-0_1689941046__': out of space

On the target node, space is free:
[screenshot: target node storage usage]

Why do I get this error? Other VMs work fine.

Other VMs:
[screenshot: replication status of other VMs]

Thanks
 
Hi, what is the output of zfs list -t all and zpool status? What about the replication target?
 
> Hi, what is the output of zfs list -t all and zpool status? What about the replication target?


Target N-1 at the moment:
Code:
root@N-1:~# zfs list -t all
NAME                                                  USED  AVAIL     REFER  MOUNTPOINT
ZFS-01                                                323G   576G       96K  /ZFS-01
ZFS-01/vm-131-disk-0                                 13.1G   587G     2.78G  -
ZFS-01/vm-131-disk-0@__replicate_131-0_1689940806__  3.72M      -     2.78G  -
ZFS-01/vm-131-disk-0@__replicate_131-1_1689944401__   436K      -     2.78G  -
ZFS-01/vm-131-disk-1                                 27.0G   602G     1.16G  -
ZFS-01/vm-131-disk-1@__replicate_131-0_1689940806__  62.3M      -     1.16G  -
ZFS-01/vm-131-disk-1@__replicate_131-1_1689944401__  7.88M      -     1.16G  -
ZFS-01/vm-131-disk-2                                 39.1G   602G     13.3G  -
ZFS-01/vm-131-disk-2@__replicate_131-0_1689940806__  26.7M      -     13.3G  -
ZFS-01/vm-131-disk-2@__replicate_131-1_1689944401__  1.85M      -     13.3G  -
ZFS-01/vm-133-disk-0                                 16.2G   587G     5.84G  -
ZFS-01/vm-133-disk-0@__replicate_133-0_1689944414__  2.15M      -     5.84G  -
ZFS-01/vm-133-disk-0@__replicate_133-1_1689944428__  1.93M      -     5.84G  -
ZFS-01/vm-133-disk-1                                 38.4G   602G     12.6G  -
ZFS-01/vm-133-disk-1@__replicate_133-0_1689944414__   744K      -     12.6G  -
ZFS-01/vm-133-disk-1@__replicate_133-1_1689944428__   728K      -     12.6G  -
ZFS-01/vm-133-disk-2                                 32.8G   602G     7.00G  -
ZFS-01/vm-133-disk-2@__replicate_133-0_1689944414__  1.89M      -     7.00G  -
ZFS-01/vm-133-disk-2@__replicate_133-1_1689944428__  1.57M      -     7.00G  -
ZFS-01/vm-139-disk-0                                 16.6G   587G     6.30G  -
ZFS-01/vm-139-disk-0@__replicate_139-0_1689942492__  71.5M      -     6.30G  -
ZFS-01/vm-139-disk-1                                 68.2G   628G     16.6G  -
ZFS-01/vm-139-disk-1@__replicate_139-0_1689942492__  95.2M      -     16.6G  -
ZFS-01/vm-139-disk-2                                 37.2G   602G     11.4G  -
ZFS-01/vm-139-disk-2@__replicate_139-0_1689942492__  39.3M      -     11.4G  -
ZFS-01/vm-201-disk-0                                 17.0G   587G     6.65G  -
ZFS-01/vm-201-disk-0@__replicate_201-0_1689944441__  2.30M      -     6.65G  -
ZFS-01/vm-201-disk-0@__replicate_201-1_1689944448__  2.14M      -     6.65G  -
ZFS-01/vm-301-disk-0                                 17.3G   587G     6.93G  -
ZFS-01/vm-301-disk-0@__replicate_301-1_1689943514__  15.5M      -     6.93G  -
ZFS-01/vm-301-disk-0@__replicate_301-0_1689944407__     0B      -     6.93G  -
root@N-1:~# zpool status
  pool: ZFS-01
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        ZFS-01      ONLINE       0     0     0
          sdb       ONLINE       0     0     0

Source N-3 at the moment:
Code:
root@N-3:~# zfs list -t all
NAME                                                  USED  AVAIL     REFER  MOUNTPOINT
ZFS-01                                                793G   107G       96K  /ZFS-01
ZFS-01/vm-131-disk-0                                 13.1G   117G     2.78G  -
ZFS-01/vm-131-disk-0@__replicate_131-0_1689940806__  3.73M      -     2.78G  -
ZFS-01/vm-131-disk-0@__replicate_131-1_1689944401__     0B      -     2.78G  -
ZFS-01/vm-131-disk-1                                 27.0G   132G     1.16G  -
ZFS-01/vm-131-disk-1@__replicate_131-0_1689940806__  62.4M      -     1.16G  -
ZFS-01/vm-131-disk-1@__replicate_131-1_1689944401__     0B      -     1.16G  -
ZFS-01/vm-131-disk-2                                 39.1G   132G     13.3G  -
ZFS-01/vm-131-disk-2@__replicate_131-0_1689940806__  26.7M      -     13.3G  -
ZFS-01/vm-131-disk-2@__replicate_131-1_1689944401__     0B      -     13.3G  -
ZFS-01/vm-132-disk-0                                 10.3G   109G     8.37G  -
ZFS-01/vm-132-disk-1                                 25.8G   131G     1.10G  -
ZFS-01/vm-132-disk-2                                 25.8G   120G     12.4G  -
ZFS-01/vm-132-disk-3                                  258G   119G      245G  -
ZFS-01/vm-132-disk-4                                  155G   116G      146G  -
ZFS-01/vm-133-disk-0                                 16.2G   117G     5.84G  -
ZFS-01/vm-133-disk-0@__replicate_133-1_1689943534__  12.5M      -     5.84G  -
ZFS-01/vm-133-disk-0@__replicate_133-0_1689944414__     0B      -     5.84G  -
ZFS-01/vm-133-disk-1                                 38.4G   132G     12.6G  -
ZFS-01/vm-133-disk-1@__replicate_133-1_1689943534__  2.17M      -     12.6G  -
ZFS-01/vm-133-disk-1@__replicate_133-0_1689944414__     0B      -     12.6G  -
ZFS-01/vm-133-disk-2                                 32.8G   132G     7.00G  -
ZFS-01/vm-133-disk-2@__replicate_133-1_1689943534__  7.68M      -     7.00G  -
ZFS-01/vm-133-disk-2@__replicate_133-0_1689944414__     0B      -     7.00G  -
ZFS-01/vm-201-disk-0                                 17.0G   117G     6.65G  -
ZFS-01/vm-201-disk-0@__replicate_201-1_1689943501__  13.8M      -     6.65G  -
ZFS-01/vm-201-disk-0@__replicate_201-0_1689944441__     0B      -     6.65G  -
ZFS-01/vm-202-disk-0                                 25.3G   127G     4.71G  -
ZFS-01/vm-202-disk-0@__replicate_202-0_1689879605__  1.37M      -     4.71G  -
ZFS-01/vm-202-disk-0@__replicate_202-1_1689879626__     0B      -     4.71G  -
ZFS-01/vm-301-disk-0                                 17.3G   117G     6.93G  -
ZFS-01/vm-301-disk-0@__replicate_301-0_1689944407__  2.36M      -     6.93G  -
ZFS-01/vm-301-disk-0@__replicate_301-1_1689944414__     0B      -     6.93G  -
ZFS-01/vm-302-disk-0                                 91.9G   158G     40.4G  -
ZFS-01/vm-302-disk-0@__replicate_302-1_1689944407__  9.66M      -     40.4G  -
root@N-3:~# zpool status
  pool: ZFS-01
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        ZFS-01      ONLINE       0     0     0
          sdb       ONLINE       0     0     0

At the moment I tested, N-1 was completely empty and no replication job was active except for VM 132.

UPDATE:
Offline migration fails with the same error on disk-3.
 
Does disk-3 have refreservation set? See also https://forum.proxmox.com/threads/zfs-error-cannot-create-snapshot-out-of-space.110530/post-476334

Please post the output of zfs get all ZFS-01/vm-132-disk-3
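As a quicker check than the full zfs get all, the reservation-related properties can be queried directly (a sketch using the dataset name from this thread):

```shell
# Query only the properties relevant to the "out of space" question:
# refreservation (guaranteed overwrite space), how much of it is counted
# as used, the currently referenced data, and the nominal volume size.
zfs get refreservation,usedbyrefreservation,referenced,volsize \
    ZFS-01/vm-132-disk-3
```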
Here are the settings. All disks have the same settings.
[screenshot: disk settings]

Code:
NAME                  PROPERTY              VALUE                  SOURCE
ZFS-01/vm-132-disk-3  type                  volume                 -
ZFS-01/vm-132-disk-3  creation              Tue Jul 11 22:54 2023  -
ZFS-01/vm-132-disk-3  used                  258G                   -
ZFS-01/vm-132-disk-3  available             119G                   -
ZFS-01/vm-132-disk-3  referenced            245G                   -
ZFS-01/vm-132-disk-3  compressratio         1.01x                  -
ZFS-01/vm-132-disk-3  reservation           none                   default
ZFS-01/vm-132-disk-3  volsize               250G                   local
ZFS-01/vm-132-disk-3  volblocksize          8K                     default
ZFS-01/vm-132-disk-3  checksum              on                     default
ZFS-01/vm-132-disk-3  compression           on                     inherited from ZFS-01
ZFS-01/vm-132-disk-3  readonly              off                    default
ZFS-01/vm-132-disk-3  createtxg             33                     -
ZFS-01/vm-132-disk-3  copies                1                      default
ZFS-01/vm-132-disk-3  refreservation        258G                   local
ZFS-01/vm-132-disk-3  guid                  7785597640215044502    -
ZFS-01/vm-132-disk-3  primarycache          all                    default
ZFS-01/vm-132-disk-3  secondarycache        all                    default
ZFS-01/vm-132-disk-3  usedbysnapshots       0B                     -
ZFS-01/vm-132-disk-3  usedbydataset         245G                   -
ZFS-01/vm-132-disk-3  usedbychildren        0B                     -
ZFS-01/vm-132-disk-3  usedbyrefreservation  12.6G                  -
ZFS-01/vm-132-disk-3  logbias               latency                default
ZFS-01/vm-132-disk-3  objsetid              68                     -
ZFS-01/vm-132-disk-3  dedup                 off                    default
ZFS-01/vm-132-disk-3  mlslabel              none                   default
ZFS-01/vm-132-disk-3  sync                  standard               default
ZFS-01/vm-132-disk-3  refcompressratio      1.01x                  -
ZFS-01/vm-132-disk-3  written               245G                   -
ZFS-01/vm-132-disk-3  logicalused           247G                   -
ZFS-01/vm-132-disk-3  logicalreferenced     247G                   -
ZFS-01/vm-132-disk-3  volmode               default                default
ZFS-01/vm-132-disk-3  snapshot_limit        none                   default
ZFS-01/vm-132-disk-3  snapshot_count        none                   default
ZFS-01/vm-132-disk-3  snapdev               hidden                 default
ZFS-01/vm-132-disk-3  context               none                   default
ZFS-01/vm-132-disk-3  fscontext             none                   default
ZFS-01/vm-132-disk-3  defcontext            none                   default
ZFS-01/vm-132-disk-3  rootcontext           none                   default
ZFS-01/vm-132-disk-3  redundant_metadata    all                    default
ZFS-01/vm-132-disk-3  encryption            off                    default
ZFS-01/vm-132-disk-3  keylocation           none                   default
ZFS-01/vm-132-disk-3  keyformat             none                   default
ZFS-01/vm-132-disk-3  pbkdf2iters           0                      default
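
The numbers in this output explain the failure; here is a hedged back-of-the-envelope sketch (GiB values copied from the thread, and the "~referenced" rule is a simplification of ZFS's actual reservation accounting):

```shell
# Why the snapshot fails even though the pool shows free space.
refreservation=258   # refreservation on ZFS-01/vm-132-disk-3
referenced=245       # data currently referenced by the zvol
pool_avail=119       # AVAIL reported for the dataset on the source

# A snapshot pins the currently referenced blocks, but the refreservation
# must still guarantee a full overwrite of the volume afterwards -- so ZFS
# needs roughly another `referenced` worth of reservable space.
needed=$referenced
if [ "$pool_avail" -ge "$needed" ]; then
    echo "snapshot possible"
else
    echo "out of space: need ~${needed}G extra, only ${pool_avail}G available"
fi
```

Note that the failing snapshot is created on the source node N-3 (see the replication log), which is why the empty target pool doesn't help: the source pool can't cover the extra ~245G of reservation.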

Okay, refreservation is set.

I removed it and added the info to the VM notes.

Thank you, I think that's the solution!
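For anyone landing here: the fix described above can presumably be applied like this (a sketch with the dataset name from this thread; note that dropping the reservation makes the zvol effectively thin-provisioned, so pool free space must then be monitored):

```shell
# Drop the refreservation so snapshots no longer require extra space for
# a guaranteed full overwrite of the volume.
zfs set refreservation=none ZFS-01/vm-132-disk-3

# Verify the change took effect (should now report "none").
zfs get refreservation ZFS-01/vm-132-disk-3
```

In Proxmox VE, enabling "Thin provision" on the ZFS storage definition should avoid a refreservation being set on newly created disks in the first place; it does not change existing zvols.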
 