Replication error: out of space

tomas12343

New Member
Jun 6, 2020
Hello,
I am trying to replicate a VM with 3 virtual hard disks (0, 1, 2): hdd0 is 50GB, hdd1 is 1TB and hdd2 is 1TB, about 2TB in total. I have a ZFS disk mounted on node2 and I have successfully replicated two VMs to it. The disk has around 3TB free.

When I try to replicate the 2TB VM to the 3TB disk, I get this error:
2020-06-07 07:03:00 208-0: start replication job
2020-06-07 07:03:00 208-0: guest => VM 208, running => 2920
2020-06-07 07:03:00 208-0: volumes => Disk5:vm-208-disk-0,Disk5:vm-208-disk-1,Disk5:vm-208-disk-2
2020-06-07 07:03:01 208-0: create snapshot '__replicate_208-0_1591502580__' on Disk5:vm-208-disk-0
2020-06-07 07:03:02 208-0: create snapshot '__replicate_208-0_1591502580__' on Disk5:vm-208-disk-1
2020-06-07 07:03:03 208-0: create snapshot '__replicate_208-0_1591502580__' on Disk5:vm-208-disk-2
2020-06-07 07:03:03 208-0: delete previous replication snapshot '__replicate_208-0_1591502580__' on Disk5:vm-208-disk-0
2020-06-07 07:03:04 208-0: delete previous replication snapshot '__replicate_208-0_1591502580__' on Disk5:vm-208-disk-1
2020-06-07 07:03:04 208-0: end replication job with error: zfs error: cannot create snapshot 'Disk5/vm-208-disk-2@__replicate_208-0_1591502580__': out of space
 
Do you have some reservations on the destination? Although you already said that the destination has 3 TB free...
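For reference, a quick way to check that (assuming the pool name Disk5 from the log above; adjust it to your actual pool/dataset):

zfs get reservation,refreservation -r Disk5
zfs list -o name,used,avail,refer,refreservation -r Disk5

If the zvols are thick-provisioned (refreservation set), creating a snapshot can require roughly the volume's referenced size to be free again, which can produce an out-of-space error even when the pool itself does not look full.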
 
The disk on the destination is new. I have already replicated two VMs to it (one 50GB and the other 250GB). I tried disabling replication of disk 2 of the VM, and with disks 0 and 1 (about 1TB in total) the replication succeeds. When I add disk 2 back, it fails with the out-of-space message. What is going on?
 
Another question: do you have VM snapshots (or any other ZFS snapshots that would increase the used space)? You should post some zfs list output for the source and destination storages.
 
I think you are right. The source disk had 1.3TB free and the destination disk 3.6TB free. When I replicate disks 0 and 1, the free space on the source disk drops to almost 0. So it must be a problem with the source disk, but why is that?
 
You can "lose" significant space if the source and destination use different RAID types, different ashift, a different number of disks, etc. (and sometimes "significant" is really significant).
But I will bet (blindly :-P) on the snapshots at the source; the disk may be 1TB, but what exactly is the used space on that zvol? Only 1TB, or more? (Like I've said, snapshots count as used space, and when you replicate, the snapshots are usually transferred.)

Try a zfs list -r /path/to/machine-zvol (and maybe zfs list -t snapshot -r /path/to/machine-zvol) to see further info.
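A space-oriented listing can also help break the usage down into its components (pool name Disk5 taken from the log above):

zfs list -o space -r Disk5

The USEDSNAP and USEDREFRESERV columns show how much of the used space is taken by snapshots versus reservations.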
 
Both are single 4TB HDDs, no RAID configuration. They are even from the same vendor (Western Digital).
 
zfs list -r /path/to/machine-zvol output:
NAME                 USED  AVAIL  REFER  MOUNTPOINT
Disk5/vm-208-disk-0  960G  1.12T  838G   -

zfs list -t snapshot -r /path/to/machine-zvol
gave no results because I had replication off.
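If it helps to see where the gap between USED (960G) and REFER (838G) comes from, something like this should break it down (dataset name taken from the output above):

zfs get volsize,refreservation,usedbydataset,usedbysnapshots,usedbyrefreservation Disk5/vm-208-disk-0

A non-zero usedbyrefreservation would point at thick provisioning rather than snapshots as the space eater.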
 
Noob question: if I want to add another disk to the ZFS pool (to increase the size of the pool and move on with the replication), how do I do it?
Edit: found it - zpool add <pool> <disk>
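For example (pool name Disk5 from this thread, device path is only a placeholder - double-check it before running):

zpool add Disk5 /dev/disk/by-id/ata-WDC_EXAMPLE_DISK

Note that this adds the disk as a new top-level vdev, so data gets striped across both disks with no redundancy, and it is not easily undone.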
 