Hello everyone,
I'd like to build a PVE HA cluster out of 2 PVE nodes and 1 QDev to get quorum.
To get a nice and stable Corosync link, I have a dedicated 1G NIC on each of the 2 PVE nodes, directly connected with a crossover cable.
The QDev VM is an externally hosted system and can't be connected via LAN.
The Plan:
* Use Link0 (primary) for Corosync between the 2 PVE nodes and the QDev over WAN (WireGuard VPN mesh)
* Use Link1 (backup) for Corosync between the 2 PVE nodes over the 1G LAN (directly connected)
That way I'd have a 3-node Corosync cluster, and if the WAN interface accidentally gets too busy (bandwidth limits are in place, but DDoS, etc.), Corosync will temporarily switch to the backup link.
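For the Link0 mesh, the WireGuard config on PV1 could look roughly like this (just a sketch; the 10.10.10.0/24 tunnel subnet, the port and all keys are placeholders I picked for illustration):

```ini
# /etc/wireguard/wg0.conf on PV1 (sketch, placeholder addresses/keys)
[Interface]
Address = 10.10.10.1/24        # PV1's address inside the mesh
ListenPort = 51820
PrivateKey = <PV1-private-key>

[Peer]
# PV2, reached over WAN
PublicKey = <PV2-public-key>
Endpoint = <PV2-WAN-IP>:51820
AllowedIPs = 10.10.10.2/32
PersistentKeepalive = 25

[Peer]
# QDev, reached over WAN
PublicKey = <QDev-public-key>
Endpoint = <QDev-WAN-IP>:51820
AllowedIPs = 10.10.10.3/32
PersistentKeepalive = 25
```

PV2 and the QDev would get the mirrored configs with 10.10.10.2 and 10.10.10.3 as their tunnel addresses.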
This is the corosync.conf sample I've come up with:
Code:
totem {
    version: 2
    secauth: on
    cluster_name: cluster-prod
    transport: udpu
    # Primary link (QDev, PV1, PV2)
    interface {
        ringnumber: 0
        bindnetaddr: <primary-link-network>
        mcastport: 5405
        ttl: 1
    }
    # Backup link (PV1 and PV2 only)
    interface {
        ringnumber: 1
        bindnetaddr: <backup-link-network>
        mcastport: 5405
        ttl: 1
    }
}

nodelist {
    node {
        ring0_addr: <PV1-primary-link-IP>
        ring1_addr: <PV1-backup-link-IP>
        nodeid: 1
    }
    node {
        ring0_addr: <PV2-primary-link-IP>
        ring1_addr: <PV2-backup-link-IP>
        nodeid: 2
    }
    node {
        ring0_addr: <QDev-primary-link-IP>
        nodeid: 3
    }
}

quorum {
    provider: corosync_votequorum
    device {
        model: net
        votes: 1
        net {
            host: <QDev-primary-link-IP>
            algorithm: ffsplit
        }
    }
}
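Since PVE 6 ships corosync 3, the Link0/Link1 naming actually comes from the knet transport; there the link preference is steered with knet_link_priority rather than ring order. A totem block matching my plan could look roughly like this (a sketch, and the priority values are arbitrary; in knet's default passive mode the highest-priority available link is the one used):

```
totem {
    version: 2
    cluster_name: cluster-prod
    transport: knet
    interface {
        linknumber: 0
        knet_link_priority: 10   # WAN / WireGuard mesh (primary)
    }
    interface {
        linknumber: 1
        knet_link_priority: 5    # direct 1G LAN (backup)
    }
}
```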
The questions:
* Is this setup (Link0 with 3 nodes, Link1 with only 2 nodes) supported by Corosync?
* Alternatively, I could span an additional WireGuard mesh VPN between all three nodes, but this time I'd use the 1G NIC IPs as endpoints for the WireGuard connection between the 2 PVE nodes. This way I could get all three nodes into the same subnet.
* Would you recommend using the 1G NIC as Link1 or as Link0 in that case?
I hope it's clear what I'm trying to accomplish.
Thanks in advance!
Kind regards,
fr1000