ZFS Replication + HA Failover

harvie

Well-Known Member
Apr 5, 2017
I have a testing cluster with 3 nodes running PVE 5.0, and I've managed to set up ZFS replication and HA.

1.) HA failover was not working until I created an HA group and put my CT in it (originally I thought that any running node would be used when no group is assigned to the resource).

2.) When I manually migrated the CT from node1 to node2, the replication changed direction from node1->node2 to node2->node1. This is cool. But when HA failover finally took place after a node1 failure and started my replica on node2, the direction was not changed. Now it's not even possible to migrate the CT back to the node where it was before the HA failover was executed.
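For reference, this is how the direction of the replication jobs can be checked; just a minimal sketch, assuming the pvesr CLI that ships with PVE 5.0:

pvesr list     # configured replication jobs: job id, guest, target node, schedule
pvesr status   # state of each job: last sync, duration, next run, failures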


This is the output:


2017-10-11 09:38:07 shutdown CT 100
2017-10-11 09:38:07 # lxc-stop -n 100 --timeout 180
2017-10-11 09:38:12 # lxc-wait -n 100 -t 5 -s STOPPED
2017-10-11 09:38:12 starting migration of CT 100 to node 'virt1' (10.11.56.141)
2017-10-11 09:38:12 found local volume 'vps:subvol-100-disk-1' (in current VM config)
send from @ to tank/vps/subvol-100-disk-1@__replicate_100-0_1507705201__ estimated size is 437M
send from @__replicate_100-0_1507705201__ to tank/vps/subvol-100-disk-1@__migration__ estimated size is 1.27M
total estimated size is 438M
tank/vps/subvol-100-disk-1 name tank/vps/subvol-100-disk-1 -
volume 'tank/vps/subvol-100-disk-1' already exists
TIME SENT SNAPSHOT
command 'zfs send -Rpv -- tank/vps/subvol-100-disk-1@__migration__' failed: got signal 13
send/receive failed, cleaning up snapshot(s)..
2017-10-11 09:38:14 ERROR: command 'set -o pipefail && pvesm export vps:subvol-100-disk-1 zfs - -with-snapshots 0 -snapshot __migration__ | /usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=virt1' root@10.11.56.141 -- pvesm import vps:subvol-100-disk-1 zfs - -with-snapshots 0 -delete-snapshot __migration__' failed: exit code 255
2017-10-11 09:38:14 aborting phase 1 - cleanup resources
2017-10-11 09:38:14 ERROR: found stale volume copy 'vps:subvol-100-disk-1' on node 'virt1'
2017-10-11 09:38:14 start final cleanup
2017-10-11 09:38:14 start container on target node
2017-10-11 09:38:14 # /usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=virt1' root@10.11.56.141 pct start 100
2017-10-11 09:38:15 Configuration file 'nodes/virt1/lxc/100.conf' does not exist
2017-10-11 09:38:15 ERROR: command '/usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=virt1' root@10.11.56.141 pct start 100' failed: exit code 255
2017-10-11 09:38:15 ERROR: migration aborted (duration 00:00:09): command 'set -o pipefail && pvesm export vps:subvol-100-disk-1 zfs - -with-snapshots 0 -snapshot __migration__ | /usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=virt1' root@10.11.56.141 -- pvesm import vps:subvol-100-disk-1 zfs - -with-snapshots 0 -delete-snapshot __migration__' failed: exit code 255
TASK ERROR: migration aborted
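I guess a manual workaround for the "volume ... already exists" part would be to drop the stale copy on virt1 so the next migration can do a full send again? An untested sketch, and obviously destructive, so only after confirming that the copy on virt1 really is the stale one and the CT is running from the other node's dataset:

zfs list -r -t all tank/vps                # on virt1: check which datasets and snapshots are left over
zfs destroy -r tank/vps/subvol-100-disk-1  # on virt1: remove the stale subvol including its __replicate_*/__migration__ snapshots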
 
Hi,

1.)
It can only work if you create an HA group whose members are the nodes the storage replica is on (see the sketch below).
It would be too costly to scan all nodes to check whether the disks are available.

2.)
HA and replication work together, but a failover will break the replication.
The auto-recover is not implemented yet.
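For reference, a minimal CLI sketch of such a setup, assuming CT 100 from this thread and that virt1/virt2 are the nodes holding the replica (the group name is just an example):

ha-manager groupadd replicated --nodes virt1,virt2   # group limited to the nodes that have the replicated disk
ha-manager add ct:100 --group replicated             # make CT 100 an HA resource restricted to that group
ha-manager status                                    # verify the resource and the node it is placed on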
 
1.) Thanks for the clarification. So why is it possible to add a CT to HA without adding it to an HA group? Are there any other benefits besides failover that I can get from HA?

2.) I look forward to this auto-recover being implemented. It would be super useful. Especially if it were possible to enable automatic replication to all nodes in the HA group without having to set it up manually (the manual setup I mean is sketched below). That would greatly simplify failover management.
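The manual setup is one replication job per target node; a rough sketch, assuming CT 100 and that virt2/virt3 are the other nodes in the group:

pvesr create-local-job 100-0 virt2 --schedule "*/15"   # replicate CT 100's disks to virt2 every 15 minutes
pvesr create-local-job 100-1 virt3 --schedule "*/15"   # second job for the other group member
pvesr list                                             # confirm both jobs are configured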
 
1.) It is not possible with a CT out of the box.
You were just lucky that the CT moved to the correct node, the one the data was replicated to.
 
That's not really an issue, as I was aware that I need to replicate the CT to all nodes (or at least to all nodes in the HA group).
 
I assume the auto-recover will involve selecting the guest's replicated image on the recovery node and selecting recover, similar to your backup solution. Looking forward to that as well.
 
+1, also looking forward to this option. I'm aware that we might lose some data written since the last replication occurred, but we could live with that if the config is moved to the other node and then restarted using the replicated disk.
 
