HA cluster on 2 servers with ZFS

asdrojd

New Member
Nov 10, 2020
Proxmox Virtual Environment 6.3-2
I have 2 identical servers,
3 HDDs in each: /dev/sda /dev/sdb /dev/sdc
/dev/sda for the Proxmox system
/dev/sdb and /dev/sdc for ZFS

A ZFS mirror was configured on each:
root@host2:/etc/network# zpool status -v
  pool: zfs
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zfs         ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0

errors: No known data errors
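For reference, a mirror like the one shown above can be created with a single zpool command. This is a sketch using the device names from the post; on a real system, /dev/disk/by-id paths are usually safer than sdX names, which can change between boots:

```shell
# Create a mirrored pool named "zfs" from the two data disks
# (device names taken from the post above; adjust to your system)
zpool create zfs mirror /dev/sdb /dev/sdc

# Verify the layout
zpool status zfs
```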

A ZFS storage (zfs_pool) was created for replication between the servers.
An HA cluster was created, along with an HA group containing both servers (restricted: yes, nofailback: no).
A virtual server was created on node1 (its disk is on the ZFS pool on node1).
The virtual server was migrated manually from node1 to node2 - it works OK.
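The same HA group and resource can also be set up from the CLI with ha-manager. A sketch assuming VMID 100 and node names host1/host2 as in the post (the group name "twonode" is made up):

```shell
# Create a restricted HA group containing both nodes
# ("twonode" is an example name)
ha-manager groupadd twonode --nodes host1,host2 --restricted 1 --nofailback 0

# Put the VM under HA management in that group
ha-manager add vm:100 --group twonode

# Check the HA state of the resource
ha-manager status
```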
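A manual migration like this can be done from the GUI or from the CLI; a sketch, assuming VMID 100 as in the rest of the post:

```shell
# Live-migrate VM 100 from the current node to host2.
# With ZFS replication in place, only the delta since the last
# replication run needs to be transferred.
qm migrate 100 host2 --online
```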

Next I wanted to test failover:
node2 was switched off.
Question: how do I start the virtual server on node1, automatically or manually?
Replication last ran several minutes ago, and I understand that the freshest data will be lost...

How do I start the virtual server on node1 now? (node2 is down, but I have the replicated disk on node1)
I can't find this anywhere.
On node1 I have a (copy of the) config of the virtual server in /etc/pve/nodes/host2/qemu-server/100.conf,
but I can't copy it to /etc/pve/nodes/host1/qemu-server.
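The copy fails because /etc/pve is not a normal directory: it is the clustered pmxcfs filesystem, and without quorum it is mounted read-only. Once the node has quorum again, the config can be relocated; a sketch (note that for a VM managed by HA, the HA manager normally does this itself):

```shell
# /etc/pve is read-only while the node has no quorum, so this only
# works on a quorate node. Use mv, not cp, so the VM is never
# defined on two nodes at once.
mv /etc/pve/nodes/host2/qemu-server/100.conf /etc/pve/nodes/host1/qemu-server/
```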

what to do next?
 
Ask yourself: do you have quorum in a 2-node cluster when one node dies?
The question is: what to do next?
OK, automatic failover is not working - how do I start it manually? (As I said before, I understand about losing the data written after the last replication.) But I need to start this virtual server.
 
After 2 days of searching, the clear answer is:
on the live node, run the command

pvecm expected 1

and after several minutes the virtual server will automatically be moved to the live node and started
(if this virtual server was previously configured in HA -> Resources).
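The recovery on the surviving node can be sketched as the following sequence (VMID 100 assumed, as above):

```shell
# Tell corosync that a single vote is enough for quorum.
# WARNING: only do this when the other node is really down and will
# stay down, otherwise you risk a split-brain situation.
pvecm expected 1

# Watch the HA manager recover the VM onto this node
ha-manager status

# Confirm the VM is now defined and running here
qm list
```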
 
pvecm expected 1
Instead of playing around with those settings I highly suggest taking a look at the qdevice mechanism which can provide a third vote to the cluster.

See the documentation: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_corosync_external_vote_support

TL;DR: install the corosync-qnetd service on some machine that is not in the cluster and configure it in the cluster. It will give you a third vote and if one node is down you still hold 2 out of 3 and thus have the majority.
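The setup described in the documentation linked above boils down to a few commands; a sketch (the qnetd host's address is a placeholder you must replace):

```shell
# On the external machine (NOT a cluster member):
apt install corosync-qnetd

# On the cluster nodes:
apt install corosync-qdevice

# On one cluster node, register the external vote
# (replace <IP-of-qnetd-host> with the real address):
pvecm qdevice setup <IP-of-qnetd-host>

# Verify: the output should now show "Expected votes: 3"
pvecm status
```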
 
