Just to make sure I understand this correctly:
If I remove all the HA-configured LXC/KVM settings (I have DNS servers, video recorders, etc.) and make them standalone, no-failover configs, it won't fence if Corosync gets unhappy? (That doesn't seem to ring true to me in a shared-storage world.)
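(For anyone following along, this is roughly how I'd strip the HA config to test that theory; just a sketch, and the vm/ct IDs below are placeholders for my actual guests.)

# list what HA currently manages and what state the CRM/LRM report
ha-manager status

# remove guests from HA management so they become plain standalone configs
# (IDs here are placeholders)
ha-manager remove vm:101
ha-manager remove ct:202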
Thanks, gave it some thought, and changed the priorities a bit - we'll see if it does better than it has in the past. (It also has me thinking about things that could lower the latency between nodes, like MTU on the ring interfaces.)
It would be nice to gather raw data on keepalives across all...
Sorry to necro this thread, but it's one of *many* that come up with this title, and it goes directly to the core issue.
Proxmox needs a configurable option for fencing behavior. Rebooting an entire cluster upon the loss of a networking element is the sledgehammer; we need the scalpel.
Thank you, fantastic information, already used it to clean things up a bit.
Not if PMX thinks we need to reboot. So far, none of the failures have taken down CEPH; it's pmx/HA that gets offended. (Ironic, because corosync/totem has (4) rings while CEPH sits on a single VLAN, but I digress.)
That makes sense, thank you. I didn't understand the corosync/totem/cluster-manager interop. (Is this written up anywhere I can digest?)
I'll drop the timeouts back to default values. Since I know how to cause the meltdown, it will be easy to test the results of the change.
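(For my own notes, and hedged since I'm going from memory of the man pages: after editing the totem section in /etc/pve/corosync.conf and bumping config_version, I'd rather confirm the effective timeout at runtime than trust the file.)

# check the token timeout corosync is actually using right now
corosync-cmapctl -g runtime.config.totem.token

# compare against whatever the config file requests, if anything
grep -A10 '^totem' /etc/pve/corosync.conf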
How would you...
RE: pmx2 - Good catch - no, that wasn't intentional, and I'm fixing it already. From a network standpoint, 198.18.50-53.xxx can all ping each other, so yes, the network pieces were all operational. Based on the config, however, it looks like pmx2 wasn't on ring2 correctly. That in and of itself shouldn't...
Here's what the same event looked like from pmx4 (node 3)
Oct 03 23:17:58 pmx4 corosync: [TOTEM ] Token has not been received in 4687 ms
Oct 03 23:17:58 pmx4 corosync: [KNET ] link: host: 6 link: 0 is down
Oct 03 23:17:58 pmx4 corosync: [KNET ] link: host: 6 link: 1 is...
For reference, from a topology standpoint, pmx1/2/3/4/5 (nodes 6,5,4,3,2) sit in the same rack, whereas pmx6/7 (nodes 1,7) sit in another room, connected to different switches with shared infrastructure in between.
pve-manager/7.2-11/b76d3178 (running kernel: 5.15.39-3-pve)
Here are logs from a (7) node cluster, taken from node 1. I notice that there's nothing in the logs that explicitly says "hey, we've failed, I'm rebooting", so I hope this makes sense to you @fabian.
I read this as "lost 0, lost 1, 2 is fine, we shuffle a bit to make 2 happy, then pull the...
Will dig them out tonight, thanks.
From an operational standpoint, is there any way to tweak the behavior of fencing?
For example, this cluster has CEPH, and as long as CEPH is happy, I'm fine with all the VMs being shut down, but by all means, don't ()*@#$ reboot!!! It's easily 20 minutes...
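(For what it's worth, here's a sketch of the checks I run during one of these events to convince myself Ceph really is still happy while corosync melts down; nothing Proxmox-specific, just the standard Ceph CLI.)

# overall cluster health and whether any PGs are degraded
ceph status
ceph health detail

# confirm the monitors still have quorum on their own network
ceph quorum_status --format json-pretty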
Can someone please explain to me why the loss of a single ring should force the entire cluster (9 hosts) to reboot?
Topology (a minimal corosync.conf sketch follows the list) - aren't 4 rings enough??
ring0_addr: 10.4.5.0/24 -- eth0/bond0 - switch1 (1ge)
ring1_addr: 198.18.50.0/24 -- eth1/bond1 - switch2 (1ge)
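For completeness, a minimal sketch of how those rings map to a node entry in corosync.conf (the host addresses are made up; only the subnets match what's above):

node {
  name: pmx1
  nodeid: 6
  ring0_addr: 10.4.5.11
  ring1_addr: 198.18.50.11
}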
This solved my problem as well - thank you.
Somewhere there should be a short "Admin HowTo" list, because this would be part of a document labeled: "How to replace a ZFS boot disk in a mirror set for Proxmox"
(To be fair - this: https://pve.proxmox.com/pve-docs/chapter-sysadmin.html - is a...
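Something like the following is what I'd expect that HowTo to boil down to (a sketch based on my reading of the sysadmin chapter; device names and partition numbers are assumptions, so double-check them on your own boxes):

# clone the partition table from the healthy mirror member to the new disk,
# then give the new disk fresh GUIDs
sgdisk /dev/sda -R /dev/sdb
sgdisk -G /dev/sdb

# resilver the ZFS partition (commonly partition 3 on a PVE install)
zpool replace -f rpool <old-disk>-part3 /dev/sdb3
zpool status rpool

# make the new disk bootable (the ESP is commonly partition 2)
proxmox-boot-tool format /dev/sdb2
proxmox-boot-tool init /dev/sdb2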