Hello,
I just replaced the NICs on all three of my nodes and upgraded to 10 Gbit SFP+.
All nodes can reach each other over SSH without any problems.
But since I replaced the NICs I can no longer access the nodes through the web GUI; at the very least, node1 can no longer reach node2 or node3.
I only see the loading spinner with the message "communication failure (0)". Why is that?
Is corosync somehow tripping over the new MAC addresses here?
My cluster state looks fine:
pvecm status
Cluster information
-------------------
Name:             cluster
Config Version:   3
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Sat Jan 18 21:16:47 2020
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000003
Ring ID:          1.2f8
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 200.200.200.10
0x00000002          1 200.200.200.20
0x00000003          1 200.200.200.30 (local)
lspci | grep -i eth
01:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
23:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
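For reference, one way to confirm the 82599ES actually negotiated a 10 Gbit/s link is to check it with ethtool (the interface name `ens1f0` below is only a placeholder; list your interfaces first to find the SFP+ port):

```shell
# List all interfaces in brief form and pick the SFP+ one
ip -br link

# Show negotiated speed, duplex and link state for the 10G port
# (ens1f0 is a placeholder interface name)
ethtool ens1f0 | grep -E 'Speed|Duplex|Link detected'
```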
UPDATE:
It also seems I cannot get any real bandwidth through the adapter:
root@hv01 ~ # iperf -c 200.200.200.20 -p 5001
------------------------------------------------------------
Client connecting to 200.200.200.20, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[ 3] local 200.200.200.10 port 52050 connected with 200.200.200.20 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.2 sec 109 KBytes 87.3 Kbits/sec
UPDATE2:
ISSUE solved (the MTU size was not right for the new adapter).
What can I do?
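For anyone hitting the same symptom: a quick way to spot an MTU mismatch like the one in UPDATE2 is to compare the MTU on each node and send a don't-fragment ping at the full frame size. The interface name `ens1f0` and the 9000-byte jumbo MTU below are assumptions; adjust them to your setup. With a 9000-byte MTU the largest ICMP payload is 9000 - 20 (IPv4 header) - 8 (ICMP header) = 8972 bytes:

```shell
# Show the configured MTU of the SFP+ interface (ens1f0 is a placeholder)
ip link show ens1f0 | grep mtu

# Probe the path with the don't-fragment bit set.
# 8972 = 9000 - 20 (IPv4 header) - 8 (ICMP header)
ping -M do -s 8972 -c 3 200.200.200.20

# If the large ping fails while a small one works, the MTUs disagree
# somewhere on the path. Set it persistently in /etc/network/interfaces
# on each node, e.g.:
#   iface ens1f0 inet static
#       ...
#       mtu 9000
# then apply with: ifreload -a   (or reboot)
```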