zfs replication error - code 255

rafafell

Well-Known Member
Hi,

I'm replicating two virtual machines (vm01 and vm02) from node A to node B. vm01 replicates properly, but vm02 doesn't complete the replication.

vm02 has three hard disks. Apparently disk-0 replicates without problems, disk-1 seems to repeat the same part of the replication several times, and disk-2 never even enters the replication process.

What could be happening? Any pointers?

best

Code:
2021-07-01 17:05:01 110-1: start replication job
2021-07-01 17:05:04 110-1: guest => VM 110, running => 30296
2021-07-01 17:05:04 110-1: volumes => zfs-rpl01-vms:vm-110-disk-0,zfs-rpl01-vms:vm-110-disk-1,zfs-rpl01-vms:vm-110-disk-2
2021-07-01 17:05:04 110-1: create snapshot '__replicate_110-1_1625169901__' on zfs-rpl01-vms:vm-110-disk-0
2021-07-01 17:05:05 110-1: create snapshot '__replicate_110-1_1625169901__' on zfs-rpl01-vms:vm-110-disk-1
2021-07-01 17:05:05 110-1: create snapshot '__replicate_110-1_1625169901__' on zfs-rpl01-vms:vm-110-disk-2
2021-07-01 17:05:06 110-1: using secure transmission, rate limit: 30 MByte/s
2021-07-01 17:05:06 110-1: full sync 'zfs-rpl01-vms:vm-110-disk-0' (__replicate_110-1_1625169901__)
2021-07-01 17:05:06 110-1: using a bandwidth limit of 30000000 bps for transferring 'zfs-rpl01-vms:vm-110-disk-0'
2021-07-01 17:05:07 110-1: volume 'zfs-rpl01/vms/vm-110-disk-0' already exists
2021-07-01 17:05:07 110-1: 0 B 0.0 B 0.91 s 0 B/s 0.00 B/s
2021-07-01 17:05:07 110-1: write: Broken pipe
2021-07-01 17:05:07 110-1: full send of zfs-rpl01/vms/vm-110-disk-0@rep_lantrivm03_2020-05-12_18:45:01 estimated size is 28.2G
2021-07-01 17:05:07 110-1: send from @rep_lantrivm03_2020-05-12_18:45:01 to zfs-rpl01/vms/vm-110-disk-0@__replicate_110-0_1625106720__ estimated size is 3.72G
2021-07-01 17:05:07 110-1: send from @__replicate_110-0_1625106720__ to zfs-rpl01/vms/vm-110-disk-0@__replicate_110-1_1625169901__ estimated size is 348K
2021-07-01 17:05:07 110-1: total estimated size is 32.0G
2021-07-01 17:05:08 110-1: warning: cannot send 'zfs-rpl01/vms/vm-110-disk-0@rep_lantrivm03_2020-05-12_18:45:01': Broken pipe
2021-07-01 17:05:08 110-1: warning: cannot send 'zfs-rpl01/vms/vm-110-disk-0@__replicate_110-0_1625106720__': Broken pipe
2021-07-01 17:05:08 110-1: warning: cannot send 'zfs-rpl01/vms/vm-110-disk-0@__replicate_110-1_1625169901__': Broken pipe
2021-07-01 17:05:08 110-1: cannot send 'zfs-rpl01/vms/vm-110-disk-0': I/O error
2021-07-01 17:05:08 110-1: command 'zfs send -Rpv -- zfs-rpl01/vms/vm-110-disk-0@__replicate_110-1_1625169901__' failed: exit code 1
2021-07-01 17:05:08 110-1: delete previous replication snapshot '__replicate_110-1_1625169901__' on zfs-rpl01-vms:vm-110-disk-0
2021-07-01 17:05:08 110-1: delete previous replication snapshot '__replicate_110-1_1625169901__' on zfs-rpl01-vms:vm-110-disk-1
2021-07-01 17:05:09 110-1: delete previous replication snapshot '__replicate_110-1_1625169901__' on zfs-rpl01-vms:vm-110-disk-2


end replication job with error: command 'set -o pipefail && pvesm export zfs-rpl01-vms:vm-110-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_110-1_1625169901__ | /usr/bin/cstream -t 30000000 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pve02' root@200.145.122.118 -- pvesm import zfs-rpl01-vms:vm-110-disk-0 zfs - -with-snapshots 1 -allow-rename 0' failed: exit code 255
 
Hi,
Code:
2021-07-01 17:05:07 110-1: volume 'zfs-rpl01/vms/vm-110-disk-0' already exists
2021-07-01 17:05:07 110-1: 0 B 0.0 B 0.91 s 0 B/s 0.00 B/s
2021-07-01 17:05:07 110-1: write: Broken pipe
seems like a volume with the same name already exists on the target.
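To verify, something like this on the target node should show whether the volume and any leftover snapshots are already present (the dataset path is taken from the log above, adjust to your pool):
Code:
# on the target node: does the volume already exist?
zfs list -r zfs-rpl01/vms
# are there leftover snapshots on it?
zfs list -t snapshot -r zfs-rpl01/vms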
 
What's the solution, though? I have the same or a very similar problem, but when I go to where it says the volume exists, there's only an empty directory.

Or maybe I'm looking in the wrong place or on the wrong node? I am assuming this means the replication target file already exists... Where would this actually be by default?
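I suppose one way to check is to list the ZFS volumes directly on the target node, since (if I understand it right) the replicated disks end up as zvols rather than regular files, something like:
Code:
# on the target node: list all zvols
zfs list -t volume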
 
In my particular case 'rpool/data/vm-101-disk-0' isn't actually on the target node.

On the target node I do see that filename in:

/dev/rpool/data

and

/dev/zvol/rpool/data

I'm guessing the second one, since the storage is referred to as a zvol in the GUI?


And after removing the "101" files from both locations on the replication destination node, I still get the same failure.
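I suppose I should also check what ZFS itself reports rather than just the /dev entries, maybe something like this (paths as in my case above):
Code:
# the /dev entries should just be symlinks to /dev/zd* device nodes
ls -l /dev/zvol/rpool/data/
# what ZFS itself thinks exists
zfs list rpool/data/vm-101-disk-0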
 
Ok... so in case anyone stumbles here looking for a solution after my stumbles above: the fix was to use the GUI on the destination node and, under my local-zfs | VM Disks area, REMOVE what it still claimed was there. I had already deleted the files from the destination node, but I'm guessing either the configuration doesn't automatically recognize this, or perhaps a link to the removed files still existed.

Once I removed it from the destination node's VM Disks area, I was able to replicate from the other node to the destination node.



I take it back... it was the other way around. The "physical files" I deleted were the links... the actual data (of course) lived in the ZFS volume.
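In case anyone prefers the command line over the GUI, I believe the equivalent of that removal would be something along these lines (the volume ID is my guess from the storage and disk names above, so double-check with the list command first):
Code:
# see what the storage on the destination node still thinks it has
pvesm list local-zfs
# then free the stale volume (this deletes it for good)
pvesm free local-zfs:vm-101-disk-0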
 
Glad you were able to solve the issue. Yes, ZFS volumes will not show up as traditional files, but are managed via the zfs command line tool, e.g.
Code:
zfs list
zfs destroy <dataset>
See man zfs for more.
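If leftover replication snapshots (the __replicate_* ones from the log) are also still around, they can be listed and removed the same way, e.g.
Code:
zfs list -t snapshot
zfs destroy <dataset>@<snapshot>
Note that zfs destroy is irreversible, so double-check the name before running it.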
 
