Hello Dear Proxmox Forum,

We had two Proxmox VE nodes and one tie-breaker node. We have added two additional Proxmox VE nodes. When we checked the services, we noticed some warnings. HA looks fine in the GUI, but in the CLI it seems like something is wrong. We really don't know what will happen if any of the hosts is rebooted, or when we decide to remove one of the hosts. Do you think this is normal?

-----> pvecm status
Cluster information
-------------------
Name: CLUSTER
Config Version: 5
Transport: knet
Secure auth: on

Quorum information
------------------
Date: Sun Mar 12 04:10:46 2023
Quorum provider: corosync_votequorum
Nodes: 4
Node ID: 0x00000004
Ring ID: 1.da
Quorate: Yes

Votequorum information
----------------------
Expected votes: 5
Highest expected: 5
Total votes: 4
Quorum: 3
Flags: Quorate

Membership information
----------------------
Nodeid Votes Qdevice Name
0x00000001 1 A,V,NMW 10.195.34.100
0x00000002 1 A,V,NMW 10.195.34.101
0x00000003 1 NR 10.195.34.111
0x00000004 1 NR 10.195.34.112 (local)
0x00000000 0 Qdevice (votes 1)
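
(For reference, the Qdevice flags above come straight from corosync: A,V,NMW on the first two nodes and NR on the two new ones, where NR means the qdevice is not registered for that node. If it helps, we can also post the output of the commands below from the two new nodes; this assumes the corosync-qdevice package is installed and running there.)

-----> systemctl status corosync-qdevice
-----> corosync-qdevice-tool -s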


Services status:
-----> systemctl status pve-ha-crm pve-ha-lrm watchdog-mux
-----> systemctl status corosync-qnetd.service
● pve-ha-crm.service - PVE Cluster HA Resource Manager Daemon
Loaded: loaded (/lib/systemd/system/pve-ha-crm.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2023-03-12 03:51:07 +03; 17min ago
Process: 554147 ExecStart=/usr/sbin/pve-ha-crm start (code=exited, status=0/SUCCESS)
Main PID: 554148 (pve-ha-crm)
Tasks: 1 (limit: 629145)
Memory: 93.2M
CPU: 1.066s
CGroup: /system.slice/pve-ha-crm.service
└─554148 pve-ha-crm

Mar 12 03:51:06 host02 systemd[1]: Starting PVE Cluster HA Resource Manager Daemon...
Mar 12 03:51:07 host02 pve-ha-crm[554148]: starting server
Mar 12 03:51:07 host02 pve-ha-crm[554148]: status change startup => wait_for_quorum
Mar 12 03:51:07 host02 systemd[1]: Started PVE Cluster HA Resource Manager Daemon.
Mar 12 03:51:12 host02 pve-ha-crm[554148]: status change wait_for_quorum => slave


● pve-ha-lrm.service - PVE Local HA Resource Manager Daemon
Loaded: loaded (/lib/systemd/system/pve-ha-lrm.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2023-03-12 03:55:11 +03; 13min ago
Process: 596624 ExecStart=/usr/sbin/pve-ha-lrm start (code=exited, status=0/SUCCESS)
Main PID: 596633 (pve-ha-lrm)
Tasks: 1 (limit: 629145)
Memory: 92.9M
CPU: 1.758s
CGroup: /system.slice/pve-ha-lrm.service
└─596633 pve-ha-lrm

Mar 12 03:55:11 host02 systemd[1]: Starting PVE Local HA Resource Manager Daemon...
Mar 12 03:55:11 host02 pve-ha-lrm[596633]: starting server
Mar 12 03:55:11 host02 pve-ha-lrm[596633]: status change startup => wait_for_agent_lock
Mar 12 03:55:11 host02 systemd[1]: Started PVE Local HA Resource Manager Daemon.
Mar 12 03:55:21 host02 pve-ha-lrm[596633]: successfully acquired lock 'ha_agent_host02_lock'
Mar 12 03:55:21 host02 pve-ha-lrm[596633]: watchdog active
Mar 12 03:55:21 host02 pve-ha-lrm[596633]: status change wait_for_agent_lock => active


● watchdog-mux.service - Proxmox VE watchdog multiplexer
Loaded: loaded (/lib/systemd/system/watchdog-mux.service; static)
Active: active (running) since Sun 2023-03-12 00:37:40 +03; 3h 30min ago
Main PID: 2884 (watchdog-mux)
Tasks: 1 (limit: 629145)
Memory: 196.0K
CPU: 80ms
CGroup: /system.slice/watchdog-mux.service
└─2884 /usr/sbin/watchdog-mux

Mar 12 00:37:40 host02 systemd[1]: Started Proxmox VE watchdog multiplexer.
Mar 12 00:37:40 host02 watchdog-mux[2884]: Watchdog driver 'Software Watchdog', version 0

● corosync-qnetd.service - Corosync Qdevice Network daemon
Loaded: loaded (/lib/systemd/system/corosync-qnetd.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2023-03-12 03:09:27 +03; 1h 7min ago
Docs: man:corosync-qnetd
Main PID: 70068 (corosync-qnetd)
Tasks: 1 (limit: 629145)
Memory: 6.0M
CPU: 9ms
CGroup: /system.slice/corosync-qnetd.service
└─70068 /usr/bin/corosync-qnetd -f

Mar 12 03:09:27 host02 systemd[1]: Starting Corosync Qdevice Network daemon...
Mar 12 03:09:27 host02 systemd[1]: Started Corosync Qdevice Network daemon.
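
If it is useful, we can also add the qnetd side's view of which cluster nodes are connected, e.g. run on the host where corosync-qnetd is active:

-----> corosync-qnetd-tool -l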
 
I don't see anything wrong; the status output looks the same on my 4-node cluster. Only the last command, corosync-qnetd, is unknown on my Proxmox, but that one looks OK as well in your output:
Active: active (running) since Sun 2023-03-12 03:09:27 +03; 1h 7min ago
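
To the reboot question: rough arithmetic with the numbers from your own output (assuming the votes stay exactly as shown, i.e. the qdevice vote is not counted):

Total votes now: 4, quorum needed: 3
One node rebooted: 4 - 1 = 3 -> still quorate
Two nodes down: 4 - 2 = 2 -> no longer quorate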
 
