Proxmox VE 5.1: HA, replication and migration... problem on ZFS local storage?

Dubard

Hi everybody,

I want to migrate a container from one server (node-1) to another (node-2); replication for this CT is active only towards a third server (node-3). I first created an HA group in which the third server (node-3) has the highest priority, and then assigned the CT's HA resource to that group.
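
For reference, the equivalent CLI commands look roughly like this (the group name is just an example of mine, the node names are the placeholders used above; the same can be done from the GUI):

Code:
# HA group in which node-3 gets the highest priority
ha-manager groupadd prefer-node3 --nodes "node-3:2,node-1:1,node-2:1"
# put the container under HA and bind it to that group
ha-manager add ct:101 --group prefer-node3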
When I shut down node-1, the CT was stopped and then moved to node-2. Now, when I try to migrate the CT back to node-1, I run into a problem:

Code:
task started by HA resource agent
2018-01-30 12:26:35 starting migration of CT 101 to node 'monserveur' (10.12.1.5)
2018-01-30 12:26:35 found local volume 'zfs-cadstorage-CT:subvol-101-disk-1' (in current VM config)
full send of cadzfs/CT/subvol-101-disk-1@__replicate_101-0_1517310001__ estimated size is 3.57G
send from @__replicate_101-0_1517310001__ to cadzfs/CT/subvol-101-disk-1@__migration__ estimated size is 24.8M
total estimated size is 3.60G
TIME SENT SNAPSHOT
cadzfs/CT/subvol-101-disk-1 name cadzfs/CT/subvol-101-disk-1 -
volume 'cadzfs/CT/subvol-101-disk-1' already exists
command 'zfs send -Rpv -- cadzfs/CT/subvol-101-disk-1@__migration__' failed: got signal 13
send/receive failed, cleaning up snapshot(s)..
2018-01-30 12:26:36 ERROR: command 'set -o pipefail && pvesm export zfs-cadstorage-CT:subvol-101-disk-1 zfs - -with-snapshots 0 -snapshot __migration__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=monserveur' root@10.12.1.5 -- pvesm import zfs-cadstorage-CT:subvol-101-disk-1 zfs - -with-snapshots 0 -delete-snapshot __migration__' failed: exit code 255
2018-01-30 12:26:36 aborting phase 1 - cleanup resources
2018-01-30 12:26:36 ERROR: found stale volume copy 'zfs-cadstorage-CT:subvol-101-disk-1' on node 'monserveur'
2018-01-30 12:26:36 start final cleanup
2018-01-30 12:26:36 ERROR: migration aborted (duration 00:00:01): command 'set -o pipefail && pvesm export zfs-cadstorage-CT:subvol-101-disk-1 zfs - -with-snapshots 0 -snapshot __migration__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=monserveur' root@10.12.1.5 -- pvesm import zfs-cadstorage-CT:subvol-101-disk-1 zfs - -with-snapshots 0 -delete-snapshot __migration__' failed: exit code 255
TASK ERROR: migration aborted

Here are the Proxmox package versions on my nodes:

Code:
root@monserveur:~# pveversion -v
proxmox-ve: 5.1-38 (running kernel: 4.13.13-5-pve)
pve-manager: 5.1-43 (running version: 5.1-43/bdb08029)
pve-kernel-4.13.4-1-pve: 4.13.4-26
pve-kernel-4.13.13-5-pve: 4.13.13-38
libpve-http-server-perl: 2.0-8
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-19
qemu-server: 5.0-20
pve-firmware: 2.0-3
libpve-common-perl: 5.0-25
libpve-guest-common-perl: 2.0-14
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-17
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-3
pve-docs: 5.1-16
pve-qemu-kvm: 2.9.1-6
pve-container: 2.0-18
pve-firewall: 3.0-5
pve-ha-manager: 2.0-4
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.1-2
lxcfs: 2.0.8-1
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.4-pve2~bpo9
ceph: 12.2.2-pve1
root@monserveur:~#

...In order to solve the problem, I had to do this on node-1:

Code:
root@monserveur:~# zfs destroy cadzfs/CT/subvol-101-disk-1
cannot destroy 'cadzfs/CT/subvol-101-disk-1': filesystem has children
use '-r' to destroy the following datasets:
cadzfs/CT/subvol-101-disk-1@__replicate_101-0_1517310001__

root@monserveur:~# zfs destroy -r cadzfs/CT/subvol-101-disk-1

...then, from the GUI, I was able to run the migration, which went through without any problem.
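
(For anyone who hits the same thing: before destroying anything, it is worth checking what is actually left behind. Just a sketch, using the dataset path from my setup:)

Code:
# show the leftover dataset and the replication snapshot hanging off it
zfs list -r -t all cadzfs/CT/subvol-101-disk-1
# that __replicate_* snapshot is the "child" that makes the -r flag necessary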

Has anyone ever encountered this problem?

Thanks
 
Thanks @aychprox for your reply.

I didn't want to keep this configuration for a production environment; I was just running some tests on my cluster to check how some of the options behave.


I also noticed the following: with replication active for all VMs from node-1 to node-2, if I migrate the VMs to node-3, replication of those VMs towards node-2 fails with this kind of message:

Code:
2018-01-31 11:46:01 108-0: start replication job
2018-01-31 11:46:01 108-0: guest => VM 108, running => 28632
2018-01-31 11:46:01 108-0: volumes => zfs-cadstorage-VM:vm-108-disk-2
2018-01-31 11:46:02 108-0: create snapshot '__replicate_108-0_1517395561__' on zfs-cadstorage-VM:vm-108-disk-2
2018-01-31 11:46:02 108-0: full sync 'zfs-cadstorage-VM:vm-108-disk-2' (__replicate_108-0_1517395561__)
2018-01-31 11:46:03 108-0: full send of cadzfs/VM/vm-108-disk-2@__replicate_108-0_1517395561__ estimated size is 4.02G
2018-01-31 11:46:03 108-0: total estimated size is 4.02G
2018-01-31 11:46:03 108-0: TIME SENT SNAPSHOT
2018-01-31 11:46:03 108-0: cadzfs/VM/vm-108-disk-2 name cadzfs/VM/vm-108-disk-2 -
2018-01-31 11:46:03 108-0: volume 'cadzfs/VM/vm-108-disk-2' already exists
2018-01-31 11:46:03 108-0: warning: cannot send 'cadzfs/VM/vm-108-disk-2@__replicate_108-0_1517395561__': signal received
2018-01-31 11:46:03 108-0: cannot send 'cadzfs/VM/vm-108-disk-2': I/O error
2018-01-31 11:46:03 108-0: command 'zfs send -Rpv -- cadzfs/VM/vm-108-disk-2@__replicate_108-0_1517395561__' failed: exit code 1
2018-01-31 11:46:03 108-0: delete previous replication snapshot '__replicate_108-0_1517395561__' on zfs-cadstorage-VM:vm-108-disk-2
2018-01-31 11:46:04 108-0: end replication job with error: command 'set -o pipefail && pvesm export zfs-cadstorage-VM:vm-108-disk-2 zfs - -with-snapshots 1 -snapshot __replicate_108-0_1517395561__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=node-2' root@10.12.1.6 -- pvesm import zfs-cadstorage-VM:vm-108-disk-2 zfs - -with-snapshots 1' failed: exit code 255

I am forced to remove all the replication jobs towards node-2 and recreate them towards node-1 to get replication working again!
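
For completeness, on the CLI that looks roughly like this (the job id comes from the log above, the schedule shown is just the default, and the target is the node that now holds the up-to-date copy; the same can be done from the GUI in the guest's Replication panel):

Code:
# see the configured replication jobs and their last status
pvesr list
pvesr status
# drop the broken job towards node-2 (VM 108, job 0 in the log above)...
pvesr delete 108-0
# ...and recreate it towards node-1
pvesr create-local-job 108-0 node-1 --schedule '*/15'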

Does anybody else have this problem?

Thanks
 
Hello Everybody,

Today I ran into the error again, with another VM, after shutting it down for maintenance.
When I restarted it, replication failed with the error below:

Code:
2018-02-07 13:38:01 107-0: start replication job
2018-02-07 13:38:01 107-0: guest => VM 107, running => 26269
2018-02-07 13:38:01 107-0: volumes => zfs-cadstorage-VM:vm-107-disk-1
2018-02-07 13:38:01 107-0: create snapshot '__replicate_107-0_1518007081__' on zfs-cadstorage-VM:vm-107-disk-1
2018-02-07 13:38:02 107-0: full sync 'zfs-cadstorage-VM:vm-107-disk-1' (__replicate_107-0_1518007081__)
2018-02-07 13:38:02 107-0: full send of cadzfs/VM/vm-107-disk-1@__replicate_107-0_1518007081__ estimated size is 49.1G
2018-02-07 13:38:02 107-0: total estimated size is 49.1G
2018-02-07 13:38:02 107-0: TIME SENT SNAPSHOT
2018-02-07 13:38:02 107-0: cadzfs/VM/vm-107-disk-1 name cadzfs/VM/vm-107-disk-1 -
2018-02-07 13:38:02 107-0: volume 'cadzfs/VM/vm-107-disk-1' already exists
2018-02-07 13:38:02 107-0: warning: cannot send 'cadzfs/VM/vm-107-disk-1@__replicate_107-0_1518007081__': signal received
2018-02-07 13:38:03 107-0: cannot send 'cadzfs/VM/vm-107-disk-1': I/O error
2018-02-07 13:38:03 107-0: command 'zfs send -Rpv -- cadzfs/VM/vm-107-disk-1@__replicate_107-0_1518007081__' failed: exit code 1
2018-02-07 13:38:03 107-0: delete previous replication snapshot '__replicate_107-0_1518007081__' on zfs-cadstorage-VM:vm-107-disk-1
2018-02-07 13:38:03 107-0: end replication job with error: command 'set -o pipefail && pvesm export zfs-cadstorage-VM:vm-107-disk-1 zfs - -with-snapshots 1 -snapshot __replicate_107-0_1518007081__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=node-2' root@10.12.1.6 -- pvesm import zfs-cadstorage-VM:vm-107-disk-1 zfs - -with-snapshots 1' failed: exit code 255

Does anyone have the same problem I mentioned above?

Thanks
 
Hello to all,
I'm experiencing the same issue/error. It would be very, very good (and I'll be very, very glad :)) to see this scenario become stable, because I think a big part of the future of virtualization redundancy lies here; more than this I really do not need. Much more often the customer has (just) "two servers" rather than "two servers with one good shared storage", and syncing two servers in such an elegant way (ZFS storage replication) is really promising.

I'm still testing this feature in my lab, which consists of 3 hosts with a zfspool on each one. Every host runs one ZFS-backed VM that replicates to the other two hosts. When everything is OK, everything is perfect :) but when I simulate a disaster (sudden host hard reset, sudden reboot, split-brain(s), etc.), one of these 6 replication jobs ends up in the error state described above and I could not recover from it. This is not a big issue when the VM is 10G, but when we are talking about terabytes it becomes a pretty big issue if replication has to start again as a FULL one.

So the question is: is there any way to preserve the volume that already exists on the target (the slave copy), so that only a delta replication occurs? If not, how can we overcome this issue? I recreated the replication job, but the error happens again. Maybe we should delete this volume?
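
As far as I understand it, an incremental run is only possible while the last common __replicate_* snapshot still exists on both sides, so a quick check on the source and on the target node (just a sketch, adapt to your own pool layout) would be:

Code:
# compare the output on source and target; if the last common
# __replicate_* snapshot is missing on either side, only a full sync is left
zfs list -t snapshot -o name,creation | grep __replicate_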

I really count on this feature :)

Many thanks in advance
Best regards
Tonci
 
OK, I can confirm the following: after deleting the VM's volume on the target zfspool, a full sync starts automatically, without recreating the sync job. Unfortunately, for large volumes this is not really an option. If you, the Proxmox team, make this stable and reliable, it will be something that no other virtualization brand has :)
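
In case it helps someone, the whole sequence boils down to something like this (dataset name and job id are placeholders for your own):

Code:
# on the target node: remove the stale copy of the replicated disk
zfs destroy -r <pool>/vm-<vmid>-disk-1
# back on the source node: the next scheduled run then starts over as a full sync
pvesr status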
Thank you in advance!!!
BR
Tonci
 
