Hi there. Sorry for my English.
I have a cluster with a redundant corosync network.
Maybe I don't understand something.
When I do ifdown vmbr0 on node 3 (0x00000003 1 192.168.1.226), corosync keeps working fine, but I can't access that node's web GUI; it says "no route to host".
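For clarity, here is roughly what I do (just a sketch; I'm assuming the GUI is on its default port 8006):

Code:
root@prox3:~# ifdown vmbr0     # take down the primary (ring0) interface on node 3
root@prox1:~# pvecm status     # cluster stays quorate -- corosync fails over to ring1
root@prox1:~# curl -k https://192.168.1.226:8006/   # fails with "no route to host"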

Code:
root@prox1:~# pvecm status
Cluster information
-------------------
Name:             dant
Config Version:   10
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Sat Jan 29 01:52:38 2022
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1.27d
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 192.168.1.225 (local)
0x00000002          1 192.168.1.19
0x00000003          1 192.168.1.226
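While link 0 is down on node 3, corosync itself reports that traffic failed over to link 1. This is how I check it (just a sketch; the exact output depends on the corosync version):

Code:
root@prox1:~# corosync-cfgtool -s   # prints knet link status per node
# link 0 shows as down towards node 3, link 1 shows as connected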
	
Code:
root@prox1:~# cat /etc/corosync/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: prox1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.1.225
    ring1_addr: 192.168.42.11
  }
  node {
    name: prox2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 192.168.1.19
    ring1_addr: 192.168.42.12
  }
  node {
    name: prox3
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 192.168.1.226
    ring1_addr: 192.168.42.14
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: dant
  config_version: 10
  interface {
    bindnetaddr: 192.168.1.225
    linknumber: 0
    knet_link_priority: 10
  }
  interface {
    bindnetaddr: 192.168.42.11
    linknumber: 1
    knet_link_priority: 20
  }
  ip_version: ipv4-6
  rrp_mode: passive
  secauth: on
  version: 2
}
When I connect to the cluster via the secondary subnet 192.168.42.0 and try to open node 3, I get the same error.
Only when I connect directly to node 3 can I reach it.
I understand that the cluster is trying to connect via the primary IP, but that doesn't seem logical, does it?
Is there a way to fix this?
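For what it's worth, here is how I checked which address the GUI uses when proxying to node 3 (a sketch; I'm assuming the node names resolve via /etc/hosts, as is the default on PVE):

Code:
root@prox1:~# getent hosts prox3   # which address node 1 forwards GUI requests to
192.168.1.226   prox3              # assumed /etc/hosts entry, matches ring0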
			