Connection issue with a host

DamienB

Member
May 4, 2020
Hello everyone,

Thanks in advance for reading my post.

I have a cluster of 4 hosts (PVE v8.2.3):
PX1|2|3 in OVH DC (Gravelines)
PX4-PRA in OVH DC (Roubaix)

The last one is shown as degraded in the web UI.

I was looking into the problem when I saw an error from the corosync service:
[KNET ] host: host: 1 has no active links

However, after restarting the service, this message disappeared. I can successfully ping the interfaces of all hosts:
px4-pra => px1
=> px2
=> px3

Code:
$ fping -qag 192.168.199.11/24
192.168.199.11
192.168.199.12
192.168.199.13
192.168.199.14

I also restarted pveproxy to be sure, but nothing changed in the web UI.
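For reference, the restarts were basically these (from memory, so the exact commands may have differed slightly):

Code:
journalctl -u corosync -b | grep -i 'no active links'   # look for the KNET message
systemctl restart corosync.service
systemctl restart pveproxy.service
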
Here are some other commands:

Code:
$ pvecm nodes

Membership information
----------------------
    Nodeid      Votes Name
         1          1 px1
         2          1 px3
         4          1 px2
         5          1 px4-pra (local)

Code:
$ pvecm status

Cluster information
-------------------
Name:             dawan
Config Version:   12
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Thu Sep  5 09:41:50 2024
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000005
Ring ID:          1.404b
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           3 
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 192.168.199.11
0x00000002          1 192.168.199.13
0x00000004          1 192.168.199.12
0x00000005          1 192.168.199.14 (local)

Anyone have an idea?
 

Hi,

Can you please check the syslog on your nodes, especially the `px4-pra` node? Could you also share the corosync config, the network config, and the output of the `ip a` command from `px4-pra`?
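
For example, something along these lines should cover it (adapt the time range and paths as needed):

Bash:
journalctl --since today            # syslog for today
cat /etc/pve/corosync.conf          # corosync config
cat /etc/network/interfaces         # network config
ip a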
 
The corosync config file:
Code:
logging {                       
  debug: off                   
  to_syslog: yes               
}                               
                                
nodelist {                     
  node {                       
    name: px1                   
    nodeid: 1                   
    quorum_votes: 1             
    ring0_addr: 192.168.199.11 
  }                             
  node {                       
    name: px2                   
    nodeid: 4                   
    quorum_votes: 1             
    ring0_addr: 192.168.199.12 
  }                             
  node {                       
    name: px3                   
    nodeid: 2                   
    quorum_votes: 1             
    ring0_addr: 192.168.199.13 
  }                             
  node {                       
    name: px4-pra               
    nodeid: 5                   
    quorum_votes: 1             
    ring0_addr: 192.168.199.14 
  }                             
}                               
                                
quorum {                       
  provider: corosync_votequorum
}                               
                                
totem {                         
  cluster_name: dawan           
  config_version: 12           
  interface {                   
    linknumber: 0               
  }                             
  ip_version: ipv4-6           
  link_mode: passive           
  secauth: on                   
  version: 2                   
}

The interfaces:
Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000                                             
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00                                                                               
    inet 127.0.0.1/8 scope host lo                                                                                                     
       valid_lft forever preferred_lft forever                                                                                         
    inet6 ::1/128 scope host noprefixroute                                                                                             
       valid_lft forever preferred_lft forever                                                                                         
2: public0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000                           
    link/ether a8:a1:59:c0:ef:1b brd ff:ff:ff:ff:ff:ff                                                                                 
    altname enp66s0f0                                                                                                                   
3: enx02f61e11a8a2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000                                       
    link/ether 02:f6:1e:11:a8:a2 brd ff:ff:ff:ff:ff:ff                                                                                 
4: private0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr2 state UP group default qlen 1000                         
    link/ether a8:a1:59:c0:ef:1c brd ff:ff:ff:ff:ff:ff                                                                                 
    altname enp66s0f1                                                                                                                   
5: private0.9@private0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000                       
    link/ether a8:a1:59:c0:ef:1c brd ff:ff:ff:ff:ff:ff                                                                                 
    inet 192.168.199.14/24 scope global private0.9                                                                                     
       valid_lft forever preferred_lft forever                                                                                         
    inet6 fe80::aaa1:59ff:fec0:ef1c/64 scope link                                                                                       
       valid_lft forever preferred_lft forever                                                                                         
6: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000                                     
    link/ether a8:a1:59:c0:ef:1b brd ff:ff:ff:ff:ff:ff                                                                                 
    inet 162.19.61.226/24 brd 162.19.61.255 scope global vmbr0                                                                         
       valid_lft forever preferred_lft forever                                                                                         
    inet6 fe80::aaa1:59ff:fec0:ef1b/64 scope link                                                                                       
       valid_lft forever preferred_lft forever                                                                                         
7: private0.202@private0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr202 state UP group default qlen 1000     
    link/ether a8:a1:59:c0:ef:1c brd ff:ff:ff:ff:ff:ff                                                                                 
8: vmbr202: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000                                   
    link/ether a8:a1:59:c0:ef:1c brd ff:ff:ff:ff:ff:ff                                                                                 
    inet6 fe80::aaa1:59ff:fec0:ef1c/64 scope link                                                                                       
       valid_lft forever preferred_lft forever                                                                                         
9: private0.201@private0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr201 state UP group default qlen 1000     
    link/ether a8:a1:59:c0:ef:1c brd ff:ff:ff:ff:ff:ff                                                                                 
10: vmbr201: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000                                 
    link/ether a8:a1:59:c0:ef:1c brd ff:ff:ff:ff:ff:ff                                                                                 
    inet6 fe80::aaa1:59ff:fec0:ef1c/64 scope link                                                                                       
       valid_lft forever preferred_lft forever                                                                                         
11: private0.200@private0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr200 state UP group default qlen 1000     
    link/ether a8:a1:59:c0:ef:1c brd ff:ff:ff:ff:ff:ff                                                                                 
12: vmbr200: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000                                 
    link/ether a8:a1:59:c0:ef:1c brd ff:ff:ff:ff:ff:ff                                                                                 
    inet6 fe80::aaa1:59ff:fec0:ef1c/64 scope link                                                                                       
       valid_lft forever preferred_lft forever                                                                                         
13: vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000                                   
    link/ether a8:a1:59:c0:ef:1c brd ff:ff:ff:ff:ff:ff                                                                                 
    inet6 fe80::aaa1:59ff:fec0:ef1c/64 scope link                                                                                       
       valid_lft forever preferred_lft forever                                                                                         
14: tap229i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr229i0 state UNKNOWN group default qlen 1000
    link/ether a2:98:c4:6a:3b:7d brd ff:ff:ff:ff:ff:ff                                                                                 
15: fwbr229i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000                               
    link/ether 1e:ba:08:d8:40:7c brd ff:ff:ff:ff:ff:ff                                                                                 
16: fwpr229p0@fwln229i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr2 state UP group default qlen 1000         
    link/ether f2:e6:f7:f9:14:37 brd ff:ff:ff:ff:ff:ff                                                                                 
17: fwln229i0@fwpr229p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr229i0 state UP group default qlen 1000     
    link/ether 1e:ba:08:d8:40:7c brd ff:ff:ff:ff:ff:ff                                                                                 
18: tap252i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr2 state UNKNOWN group default qlen 1000   
    link/ether 66:d5:0f:13:d0:9b brd ff:ff:ff:ff:ff:ff                                                                                 
19: tap252i1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr252i1 state UNKNOWN group default qlen 1000
    link/ether 76:2f:bc:7f:de:fb brd ff:ff:ff:ff:ff:ff                                                                                 
20: fwbr252i1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000                               
    link/ether 7e:cc:f4:de:17:b6 brd ff:ff:ff:ff:ff:ff                                                                                 
21: fwpr252p1@fwln252i1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr201 state UP group default qlen 1000       
    link/ether 3a:a7:1a:cd:5d:4b brd ff:ff:ff:ff:ff:ff                                                                                 
22: fwln252i1@fwpr252p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr252i1 state UP group default qlen 1000     
    link/ether 7e:cc:f4:de:17:b6 brd ff:ff:ff:ff:ff:ff
 
From syslog:
Code:
$ grep -i px4-pra syslog|grep -v postfix

2024-09-05T08:20:58.121241+00:00 px4-pra pmxcfs[1584]: [dcdb] notice: members: 1/1752, 2/2007, 4/1993, 5/1584                                                              
2024-09-05T08:20:58.121512+00:00 px4-pra pmxcfs[1584]: [dcdb] notice: starting data syncronisation                                                                         
2024-09-05T08:20:58.123161+00:00 px4-pra pmxcfs[1584]: [dcdb] notice: received sync request (epoch 1/1752/0000000A)                                                        
2024-09-05T08:20:58.123828+00:00 px4-pra pmxcfs[1584]: [status] notice: members: 1/1752, 2/2007, 4/1993, 5/1584                                                            
2024-09-05T08:20:58.123872+00:00 px4-pra pmxcfs[1584]: [status] notice: starting data syncronisation                                                                       
2024-09-05T08:20:58.125915+00:00 px4-pra pmxcfs[1584]: [status] notice: received sync request (epoch 1/1752/0000000A)                                                      
2024-09-05T08:20:58.128453+00:00 px4-pra pmxcfs[1584]: [dcdb] notice: received all states                                                                                  
2024-09-05T08:20:58.128497+00:00 px4-pra pmxcfs[1584]: [dcdb] notice: leader is 2/2007                                                                                     
2024-09-05T08:20:58.128536+00:00 px4-pra pmxcfs[1584]: [dcdb] notice: synced members: 2/2007, 4/1993, 5/1584                                                               
2024-09-05T08:20:58.128580+00:00 px4-pra pmxcfs[1584]: [dcdb] notice: all data is up to date                                                                               
2024-09-05T08:20:58.136971+00:00 px4-pra pmxcfs[1584]: [status] notice: received all states                                                                                
2024-09-05T08:20:58.137296+00:00 px4-pra pmxcfs[1584]: [status] notice: all data is up to date                                                                             
2024-09-05T08:21:05.399510+00:00 px4-pra pve-ha-crm[1939]: node 'px1': state changed from 'unknown' => 'online'                                                            
2024-09-05T08:25:01.992483+00:00 px4-pra pmxcfs[1584]: [status] notice: received log                                                                                       
2024-09-05T08:31:27.151753+00:00 px4-pra pmxcfs[1584]: [dcdb] notice: data verification successful                                                                         
2024-09-05T08:40:02.503078+00:00 px4-pra pmxcfs[1584]: [status] notice: received log                                                                                       
2024-09-05T08:45:02.830278+00:00 px4-pra systemd[1]: Starting apt-daily.service - Daily apt download activities...                                                         
2024-09-05T08:45:03.144434+00:00 px4-pra systemd[1]: apt-daily.service: Deactivated successfully.                                                                          
2024-09-05T08:45:03.144731+00:00 px4-pra systemd[1]: Finished apt-daily.service - Daily apt download activities.                                                           
2024-09-05T08:55:03.517803+00:00 px4-pra pmxcfs[1584]: [status] notice: received log                                                                                       
2024-09-05T09:04:45.487304+00:00 px4-pra pveproxy[348979]: worker exit                                                                                                     
2024-09-05T09:04:45.511131+00:00 px4-pra pveproxy[228774]: worker 348979 finished                                                                                          
2024-09-05T09:04:45.511245+00:00 px4-pra pveproxy[228774]: starting 1 worker(s)                                                                                            
2024-09-05T09:04:45.514587+00:00 px4-pra pveproxy[228774]: worker 442695 started                                                                                           
2024-09-05T09:17:01.683845+00:00 px4-pra CRON[444667]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)                                                            
2024-09-05T09:20:48.223879+00:00 px4-pra pmxcfs[1584]: [status] notice: received log                                                                                       
2024-09-05T09:28:38.589505+00:00 px4-pra pmxcfs[1584]: [status] notice: received log                                                                                       
2024-09-05T09:31:27.367936+00:00 px4-pra pmxcfs[1584]: [dcdb] notice: data verification successful                                                                         
2024-09-05T09:33:04.900523+00:00 px4-pra systemd[1]: Created slice user-0.slice - User Slice of UID 0.                                                                     
2024-09-05T09:33:04.913105+00:00 px4-pra systemd[1]: Starting user-runtime-dir@0.service - User Runtime Directory /run/user/0...                                           
2024-09-05T09:33:04.919656+00:00 px4-pra systemd[1]: Finished user-runtime-dir@0.service - User Runtime Directory /run/user/0.                                             
2024-09-05T09:33:04.920977+00:00 px4-pra systemd[1]: Starting user@0.service - User Manager for UID 0...                                                                   
2024-09-05T09:33:05.074195+00:00 px4-pra systemd[447252]: Queued start job for default target default.target.                                                              
2024-09-05T09:33:05.089110+00:00 px4-pra systemd[447252]: Created slice app.slice - User Application Slice.                                                                
2024-09-05T09:33:05.089230+00:00 px4-pra systemd[447252]: Reached target paths.target - Paths.                                                                             
2024-09-05T09:33:05.089303+00:00 px4-pra systemd[447252]: Reached target timers.target - Timers.                                                                           
2024-09-05T09:33:05.089977+00:00 px4-pra systemd[447252]: Starting dbus.socket - D-Bus User Message Bus Socket...                                                          
2024-09-05T09:33:05.090090+00:00 px4-pra systemd[447252]: Listening on dirmngr.socket - GnuPG network certificate management daemon.                                       
2024-09-05T09:33:05.090145+00:00 px4-pra systemd[447252]: Listening on gpg-agent-browser.socket - GnuPG cryptographic agent and passphrase cache (access for web browsers).
2024-09-05T09:33:05.090218+00:00 px4-pra systemd[447252]: Listening on gpg-agent-extra.socket - GnuPG cryptographic agent and passphrase cache (restricted).               
2024-09-05T09:33:05.090308+00:00 px4-pra systemd[447252]: Listening on gpg-agent-ssh.socket - GnuPG cryptographic agent (ssh-agent emulation).                             
2024-09-05T09:33:05.090350+00:00 px4-pra systemd[447252]: Listening on gpg-agent.socket - GnuPG cryptographic agent and passphrase cache.                                  
2024-09-05T09:33:05.095534+00:00 px4-pra systemd[447252]: Listening on dbus.socket - D-Bus User Message Bus Socket.                                                        
2024-09-05T09:33:05.095589+00:00 px4-pra systemd[447252]: Reached target sockets.target - Sockets.                                                                         
2024-09-05T09:33:05.095638+00:00 px4-pra systemd[447252]: Reached target basic.target - Basic System.                                                                      
2024-09-05T09:33:05.095687+00:00 px4-pra systemd[447252]: Reached target default.target - Main User Target.                                                                
2024-09-05T09:33:05.095721+00:00 px4-pra systemd[447252]: Startup finished in 167ms.                                                                                       
2024-09-05T09:33:05.095773+00:00 px4-pra systemd[1]: Started user@0.service - User Manager for UID 0.                                                                      
2024-09-05T09:33:05.096786+00:00 px4-pra systemd[1]: Started session-953.scope - Session 953 of User root.                                                                 
2024-09-05T09:43:38.522861+00:00 px4-pra pmxcfs[1584]: [status] notice: received log                                                                                       
2024-09-05T09:54:57.470445+00:00 px4-pra pveproxy[348981]: worker exit                                                                                                     
2024-09-05T09:54:57.495049+00:00 px4-pra pveproxy[228774]: worker 348981 finished                                                                                          
2024-09-05T09:54:57.495126+00:00 px4-pra pveproxy[228774]: starting 1 worker(s)                                                                                            
2024-09-05T09:54:57.499588+00:00 px4-pra pveproxy[228774]: worker 450917 started
 
Since `px4-pra` is in a different data center, there could be intermittent network latency or packet loss affecting Corosync. I would check the network stability between the nodes, and also check whether any firewall rule could be interfering.
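
For example, something like this on `px4-pra` (adjust counts and targets as needed):

Bash:
corosync-cfgtool -s                  # per-link status as corosync sees it
ping -c 600 -i 0.2 192.168.199.11    # longer run to catch intermittent loss or latency spikes
pve-firewall status                  # whether the PVE firewall is active on this node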
 
Thanks for replying.

Currently I have good latency, but nothing changes in the web UI:
Code:
$ ping -c 200 -i 0.2 px1

64 bytes from 192.168.199.11: icmp_seq=194 ttl=64 time=1.74 ms
64 bytes from 192.168.199.11: icmp_seq=195 ttl=64 time=1.89 ms
64 bytes from 192.168.199.11: icmp_seq=196 ttl=64 time=1.77 ms
64 bytes from 192.168.199.11: icmp_seq=197 ttl=64 time=1.77 ms
64 bytes from 192.168.199.11: icmp_seq=198 ttl=64 time=1.89 ms
64 bytes from 192.168.199.11: icmp_seq=199 ttl=64 time=1.87 ms
64 bytes from 192.168.199.11: icmp_seq=200 ttl=64 time=1.78 ms

--- px1.dawan.fr ping statistics ---
200 packets transmitted, 200 received, 0% packet loss, time 20099ms
rtt min/avg/max/mdev = 1.731/1.848/2.071/0.073 ms

I don't have a special firewall rule for this subnet.
I realize that I can't connect to this node's web interface.
I just overwrote the password for the adminpve account, but it still refuses the login! I'm really confused.

 
When I try to log in to the px4-pra web UI, I get this log message (pveproxy.service):
proxy detected vanished client connection
 
To check whether all nodes and links are reachable you can use:

Bash:
corosync-cfgtool -n

Are you sure you are logging in with the right realm, i.e., PVE or PAM?
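
You can also list the users to double-check which realm the account belongs to, for example:

Bash:
pveum user list    # the realm is part of the user ID, e.g. adminpve@pve vs root@pam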
 
Again, thanks for helping me.

Code:
$ corosync-cfgtool -n

Local node ID 5, transport knet
nodeid: 1 reachable
   LINK: 0 udp (192.168.199.14->192.168.199.11) enabled connected mtu: 1397

nodeid: 2 reachable
   LINK: 0 udp (192.168.199.14->192.168.199.13) enabled connected mtu: 1397

nodeid: 4 reachable
   LINK: 0 udp (192.168.199.14->192.168.199.12) enabled connected mtu: 1397

Both realms (PVE, PAM) end with the same error:
proxy detected vanished client connection

If I repeat the process on px1 it works. To be sure, I overwrote the password.
 
Thank you for the output!

From the output of corosync-cfgtool I don't see the IP of px4-pra. Is that the full output? If yes, can you please check whether the corosync config is the same on all 4 of your nodes? Can you also try to restart the corosync service and then provide us with the status output?
Bash:
systemctl restart corosync.service
systemctl status corosync.service


Both realms (PVE, PAM) end with the same error:
Can you please check whether /etc/pve is mounted? You can check using the `mount | grep '/etc/pve'` or `ls /etc/pve` commands.
 
Thank you for the output!

From the output of corosync-cfgtool I don't see the IP of px4-pra. Is that the full output? If yes, can you please check whether the corosync config is the same on all 4 of your nodes? Can you also try to restart the corosync service and then provide us with the status output?
Bash:
systemctl restart corosync.service
systemctl status corosync.service
...

Oops, my bad! I just corrected my latest post (192.168.199.15 → 192.168.199.14). Yes, the IP is the same in /etc/pve/corosync.conf.
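Roughly how I double-checked that the config matches on every node (from memory):

Code:
# /etc/pve/corosync.conf is shared cluster-wide; each node also keeps a local copy
md5sum /etc/corosync/corosync.conf /etc/pve/corosync.conf
for h in px1 px2 px3; do ssh "$h" md5sum /etc/corosync/corosync.conf; done
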
After restarting the corosync service:

Code:
● corosync.service - Corosync Cluster Engine
     Loaded: loaded (/lib/systemd/system/corosync.service; enabled; preset: enabled)
     Active: active (running) since Thu 2024-09-05 15:06:07 CEST; 13s ago
       Docs: man:corosync
             man:corosync.conf
             man:corosync_overview
   Main PID: 483160 (corosync)
      Tasks: 9 (limit: 309276)
     Memory: 134.8M
        CPU: 298ms
     CGroup: /system.slice/corosync.service
             └─483160 /usr/sbin/corosync -f

Sep 05 15:06:10 px4-pra corosync[483160]:   [KNET  ] link: Resetting MTU for link 0 because host 1 joined
Sep 05 15:06:10 px4-pra corosync[483160]:   [KNET  ] host: host: 1 (passive) best link: 0 (pri: 1)
Sep 05 15:06:10 px4-pra corosync[483160]:   [KNET  ] pmtud: PMTUD link change for host: 1 link: 0 from 469 to 1397
Sep 05 15:06:10 px4-pra corosync[483160]:   [KNET  ] pmtud: Global data MTU changed to: 1397
Sep 05 15:06:10 px4-pra corosync[483160]:   [QUORUM] Sync members[4]: 1 2 4 5
Sep 05 15:06:10 px4-pra corosync[483160]:   [QUORUM] Sync joined[3]: 1 2 4
Sep 05 15:06:10 px4-pra corosync[483160]:   [TOTEM ] A new membership (1.4076) was formed. Members joined: 1 2 4
Sep 05 15:06:10 px4-pra corosync[483160]:   [QUORUM] This node is within the primary component and will provide service.
Sep 05 15:06:10 px4-pra corosync[483160]:   [QUORUM] Members[4]: 1 2 4 5
Sep 05 15:06:10 px4-pra corosync[483160]:   [MAIN  ] Completed service synchronization, ready to provide service.

Can you please check whether /etc/pve is mounted? You can check using the `mount | grep '/etc/pve'` or `ls /etc/pve` commands.

Yes it is:

Code:
$ mount | grep '/etc/pve'
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)

Code:
$ ls -l /etc/pve/nodes/
drwxr-xr-x 2 root www-data 0 Mar 22  2021 px1
drwxr-xr-x 2 root www-data 0 Oct 17  2022 px2
drwxr-xr-x 2 root www-data 0 Mar 23  2021 px3
drwxr-xr-x 2 root www-data 0 Feb 28  2023 px4-pra

To be sure: in order to overwrite the password of adminpve, I did this:

Code:
$ pveum passwd adminpve@pve
Enter new password: **************************
Retype new password: **************************


After ~15 s I got this message: "Login failed. Please try again." I also tried adminpve@pve.
 
I found something: the IP used by px4-pra in /etc/pve/.members was different from the other servers (wrong subnet).
I changed this by updating /etc/hosts and restarted the host.
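Roughly what I looked at to spot it (from memory):

Code:
cat /etc/pve/.members    # px4-pra showed an IP from the wrong subnet here
getent hosts px4-pra     # should resolve to 192.168.199.14
# then I fixed the entry in /etc/hosts and rebooted
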
For now it works, but I still cannot log in to the web UI of px4-pra.
 
For now it works, but I still cannot log in to the web UI of px4-pra.
I'm glad to hear that you fixed the first issue yourself! Regarding the login, I would run `journalctl -f` in a shell and see what it prints while you try to log in.
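
For example, to narrow it down to the services involved in web UI logins:

Bash:
journalctl -f -u pveproxy -u pvedaemon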
 
Today I tried again to connect to the web UI of px4-pra and... it works.
This situation is frustrating. It gives me the feeling that I could hit the same problem again and wouldn't know how to fix it.
In the logs I no longer see the error message "proxy detected vanished client connection".

Thank you Moayad for your help! Have a good day.
 
Hello,

Unfortunately I'm back again with the same problem:

Since 11:00 this morning, px4-pra is shown as degraded in the web UI again.


I checked the corosync and pveproxy services and the syslog, but this time no issue or warning was detected.
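The checks were along these lines (a rough sketch):

Code:
systemctl status corosync.service pveproxy.service
journalctl -u corosync -u pveproxy --since "11:00"
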
Code:
$ corosync-cfgtool -n
Local node ID 5, transport knet
nodeid: 1 reachable
   LINK: 0 udp (192.168.199.14->192.168.199.11) enabled connected mtu: 1397

nodeid: 2 reachable
   LINK: 0 udp (192.168.199.14->192.168.199.13) enabled connected mtu: 1397

nodeid: 4 reachable
   LINK: 0 udp (192.168.199.14->192.168.199.12) enabled connected mtu: 1397

I'm also monitoring the latency, but for now nothing is wrong: max 2.1 ms, avg 1.8 ms.

It's really weird. I'm quite sure a reboot would fix the problem, but I can't reboot because the running VMs on this host are still in production.

Any idea?
 
