Hung backup task and unresponsive node

Hello,

I have a 3-node cluster running v7.1-1 (running kernel: 5.13.19-3-pve). Server 03 has a running backup task that:

- Can't be stopped from the web UI (the button is greyed out).
- Can't be killed, not even with kill -9.
- When I try to view its log, nothing is shown; after a few seconds an "invalid ticket" error appears and I'm asked to log in again.

The source is all VMs and CTs of this server (some in a Ceph pool, a few on local disks). The destination is a PBS server, which is working correctly (at least there's network connectivity from all 3 nodes, and the other two nodes created their backups correctly tonight).

Also:

- I can't access /etc/pve/nodes/server03 from any node.
- I can access /etc/pve/* on any node.
- Any qm command run on server 03 just hangs. Not even closing the shell that launched it terminates the process, and I have to use kill -9 from another terminal to stop it. That means I can't migrate VMs from 03 to the other two nodes (see the quick check after this list).
- VMs on every server are running correctly.
- In the web UI, only the node I connect to shows a green tick; the others have a greyed-out question mark. That is, 01 sees only 01 with a green tick and the other two with question marks; node 02 sees only 02 as OK, and likewise 03 only sees itself as OK.
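A quick way to check whether the /etc/pve mount (pmxcfs) itself is blocking, using the server03 node directory from above (just a sketch):

Code:
# If pmxcfs is stuck, these commands hang instead of returning,
# so wrap them in a timeout to get a clear answer.
timeout 5 ls /etc/pve/nodes/ || echo "pmxcfs not responding"
timeout 5 stat /etc/pve/nodes/server03 || echo "node directory not accessible"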

I can't find any specific error in the logs that might give me a clue about what happened.

Is there any way to stop the backup task and recover the node without restarting? There are a few vital VMs on it and I would like to preserve their uptime as much as possible.

Thanks!
 
I can't find any specific error in the logs that might give me a clue about what happened.
The syslog/journal is the best bet to see what exactly hangs. But if it is the cluster filesystem, then it seems that the cluster network is not reliable?
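For example (a sketch; adjust the time range so it covers the hang):

Code:
# cluster filesystem + corosync messages from the last two days
journalctl -u pve-cluster -u corosync --since "2 days ago"
# or follow everything live while the task is hanging
journalctl -f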

Is there any way to stop the backup task and recover the node without restarting? There are a few vital VMs on it and I would like to preserve their uptime as much as possible.
This depends on what exactly hangs. You can check with 'ps faxl'; hung processes will show the 'D' (uninterruptible sleep) state.
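For example, to narrow the output down to processes stuck in uninterruptible sleep (a sketch):

Code:
# full process tree with the state column (look for 'D')
ps faxl
# only D-state processes, plus the kernel function they are blocked in
ps -eo pid,stat,wchan:32,args | awk '$2 ~ /D/'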
 
During the weekend I managed to afford some downtime (just in case something had to be forcibly rebooted) and set out to diagnose this cluster. Simply restarting the pve-cluster service on server 03 made everything work again: everything showed a green tick, the backup task got killed by my previous "kill" attempts, and I could live migrate VMs from 03 to the other two nodes.
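For the record, the recovery boiled down to roughly this (a sketch of the commands involved, not a guaranteed fix):

Code:
# on the affected node: restart the cluster filesystem (pmxcfs)
systemctl restart pve-cluster
# verify /etc/pve is accessible again and the node is back in quorum
systemctl status pve-cluster
pvecm status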

Then I rebooted server 03, migrated back a few not-so-critical VMs and started a backup: everything ran perfectly.

After that I checked the pve-cluster journal carefully and saw messages like these:

Code:
Mar 21 00:09:35 SERVER03 pmxcfs[3253]: [dcdb] notice: data verification successful
Mar 21 01:09:35 SERVER03 pmxcfs[3253]: [dcdb] notice: data verification successful
Mar 21 01:32:31 SERVER03 pmxcfs[3253]: [dcdb] notice: members: 3/3253
Mar 21 01:32:31 SERVER03 pmxcfs[3253]: [dcdb] crit: received write while not quorate - trigger resync
Mar 21 01:32:31 SERVER03 pmxcfs[3253]: [dcdb] crit: leaving CPG group
Mar 21 01:32:32 SERVER03 pmxcfs[3253]: [dcdb] notice: start cluster connection
Mar 21 01:32:32 SERVER03 pmxcfs[3253]: [dcdb] crit: cpg_join failed: 14
Mar 21 01:32:32 SERVER03 pmxcfs[3253]: [dcdb] crit: can't initialize service
Mar 21 01:32:41 SERVER03 pmxcfs[3253]: [dcdb] notice: members: 3/3253
Mar 21 01:32:41 SERVER03 pmxcfs[3253]: [dcdb] notice: all data is up to date
Mar 21 01:32:44 SERVER03 pmxcfs[3253]: [dcdb] notice: members: 1/3736, 2/3485, 3/3253
Mar 21 01:32:44 SERVER03 pmxcfs[3253]: [dcdb] notice: starting data syncronisation
Mar 21 01:32:44 SERVER03 pmxcfs[3253]: [dcdb] notice: received sync request (epoch 1/3736/00000007)
Mar 21 01:32:44 SERVER03 pmxcfs[3253]: [dcdb] notice: received all states
Mar 21 01:32:44 SERVER03 pmxcfs[3253]: [dcdb] notice: leader is 1/3736
Mar 21 01:32:44 SERVER03 pmxcfs[3253]: [dcdb] notice: synced members: 1/3736, 2/3485
Mar 21 01:32:44 SERVER03 pmxcfs[3253]: [dcdb] notice: waiting for updates from leader
Mar 21 01:32:44 SERVER03 pmxcfs[3253]: [dcdb] notice: update complete - trying to commit (got 3 inode updates)
Mar 21 01:32:44 SERVER03 pmxcfs[3253]: [dcdb] notice: all data is up to date
Mar 21 01:56:09 SERVER03 pmxcfs[3253]: [dcdb] notice: members: 3/3253
Mar 21 01:56:09 SERVER03 pmxcfs[3253]: [dcdb] crit: received write while not quorate - trigger resync
Mar 21 01:56:09 SERVER03 pmxcfs[3253]: [dcdb] crit: leaving CPG group
Mar 21 01:56:09 SERVER03 pmxcfs[3253]: [dcdb] notice: start cluster connection
Mar 21 01:56:09 SERVER03 pmxcfs[3253]: [dcdb] crit: cpg_join failed: 14
Mar 21 01:56:09 SERVER03 pmxcfs[3253]: [dcdb] crit: can't initialize service
Mar 21 01:56:12 SERVER03 pmxcfs[3253]: [dcdb] crit: cpg_send_message failed: 9
Mar 21 01:56:12 SERVER03 pmxcfs[3253]: [dcdb] crit: cpg_send_message failed: 9
Mar 21 01:56:12 SERVER03 pmxcfs[3253]: [dcdb] crit: cpg_send_message failed: 9
Mar 21 01:56:12 SERVER03 pmxcfs[3253]: [dcdb] crit: cpg_send_message failed: 9
Mar 21 01:56:13 SERVER03 pmxcfs[3253]: [dcdb] crit: cpg_send_message failed: 9
Mar 21 01:56:13 SERVER03 pmxcfs[3253]: [dcdb] crit: cpg_send_message failed: 9
Mar 21 01:56:13 SERVER03 pmxcfs[3253]: [dcdb] crit: cpg_send_message failed: 9
Mar 21 01:56:13 SERVER03 pmxcfs[3253]: [dcdb] crit: cpg_send_message failed: 9
Mar 21 01:56:14 SERVER03 pmxcfs[3253]: [dcdb] crit: cpg_send_message failed: 9
Mar 21 01:56:14 SERVER03 pmxcfs[3253]: [dcdb] crit: cpg_send_message failed: 9
Mar 21 01:56:14 SERVER03 pmxcfs[3253]: [dcdb] crit: cpg_send_message failed: 9
Mar 21 01:56:14 SERVER03 pmxcfs[3253]: [dcdb] crit: cpg_send_message failed: 9
Mar 21 01:56:14 SERVER03 pmxcfs[3253]: [dcdb] crit: cpg_send_message failed: 9
Mar 21 01:56:14 SERVER03 pmxcfs[3253]: [dcdb] crit: cpg_send_message failed: 9
Mar 21 01:56:14 SERVER03 pmxcfs[3253]: [dcdb] crit: cpg_send_message failed: 9
Mar 21 01:56:14 SERVER03 pmxcfs[3253]: [dcdb] crit: cpg_send_message failed: 9
Mar 21 01:56:14 SERVER03 pmxcfs[3253]: [dcdb] crit: cpg_send_message failed: 9
Mar 21 01:56:14 SERVER03 pmxcfs[3253]: [dcdb] crit: cpg_send_message failed: 9
Mar 21 01:56:15 SERVER03 pmxcfs[3253]: [dcdb] notice: members: 1/3736, 2/3485, 3/3253
Mar 21 01:56:15 SERVER03 pmxcfs[3253]: [dcdb] notice: starting data syncronisation
Mar 21 01:56:15 SERVER03 pmxcfs[3253]: [dcdb] notice: received sync request (epoch 1/3736/00000009)
Mar 21 01:56:15 SERVER03 pmxcfs[3253]: [dcdb] notice: received all states
Mar 21 01:56:15 SERVER03 pmxcfs[3253]: [dcdb] notice: leader is 1/3736
Mar 21 01:56:15 SERVER03 pmxcfs[3253]: [dcdb] notice: synced members: 1/3736, 2/3485
Mar 21 01:56:15 SERVER03 pmxcfs[3253]: [dcdb] notice: waiting for updates from leader
Mar 21 01:56:15 SERVER03 pmxcfs[3253]: [dcdb] notice: update complete - trying to commit (got 3 inode updates)
Mar 21 01:56:15 SERVER03 pmxcfs[3253]: [dcdb] notice: all data is up to date
Mar 21 02:09:35 SERVER03 pmxcfs[3253]: [dcdb] notice: data verification successful


Clearly the problem had started more than a week before that backup task "hung", which is what made the issue much more noticeable. On the other nodes I can see similar events, but no "crit" ones. This is server 01 at the same time (server 02 is similar, but shorter):

Code:
Mar 21 00:09:35 SERVER01 pmxcfs[3736]: [dcdb] notice: data verification successful
Mar 21 01:09:35 SERVER01 pmxcfs[3736]: [dcdb] notice: data verification successful
Mar 21 01:32:31 SERVER01 pmxcfs[3736]: [dcdb] notice: members: 1/3736, 2/3485
Mar 21 01:32:31 SERVER01 pmxcfs[3736]: [dcdb] notice: starting data syncronisation
Mar 21 01:32:31 SERVER01 pmxcfs[3736]: [dcdb] notice: cpg_send_message retried 1 times
Mar 21 01:32:31 SERVER01 pmxcfs[3736]: [dcdb] notice: received sync request (epoch 1/3736/00000006)
Mar 21 01:32:31 SERVER01 pmxcfs[3736]: [dcdb] notice: received all states
Mar 21 01:32:31 SERVER01 pmxcfs[3736]: [dcdb] notice: leader is 1/3736
Mar 21 01:32:31 SERVER01 pmxcfs[3736]: [dcdb] notice: synced members: 1/3736, 2/3485
Mar 21 01:32:31 SERVER01 pmxcfs[3736]: [dcdb] notice: start sending inode updates
Mar 21 01:32:31 SERVER01 pmxcfs[3736]: [dcdb] notice: sent all (0) updates
Mar 21 01:32:31 SERVER01 pmxcfs[3736]: [dcdb] notice: all data is up to date
Mar 21 01:32:31 SERVER01 pmxcfs[3736]: [dcdb] notice: dfsm_deliver_queue: queue length 2
Mar 21 01:32:44 SERVER01 pmxcfs[3736]: [dcdb] notice: members: 1/3736, 2/3485, 3/3253
Mar 21 01:32:44 SERVER01 pmxcfs[3736]: [dcdb] notice: starting data syncronisation
Mar 21 01:32:44 SERVER01 pmxcfs[3736]: [dcdb] notice: cpg_send_message retried 1 times
Mar 21 01:32:44 SERVER01 pmxcfs[3736]: [dcdb] notice: received sync request (epoch 1/3736/00000007)
Mar 21 01:32:44 SERVER01 pmxcfs[3736]: [dcdb] notice: received all states
Mar 21 01:32:44 SERVER01 pmxcfs[3736]: [dcdb] notice: leader is 1/3736
Mar 21 01:32:44 SERVER01 pmxcfs[3736]: [dcdb] notice: synced members: 1/3736, 2/3485
Mar 21 01:32:44 SERVER01 pmxcfs[3736]: [dcdb] notice: start sending inode updates
Mar 21 01:32:44 SERVER01 pmxcfs[3736]: [dcdb] notice: sent all (3) updates
Mar 21 01:32:44 SERVER01 pmxcfs[3736]: [dcdb] notice: all data is up to date
Mar 21 01:56:09 SERVER01 pmxcfs[3736]: [dcdb] notice: members: 1/3736, 2/3485
Mar 21 01:56:09 SERVER01 pmxcfs[3736]: [dcdb] notice: starting data syncronisation
Mar 21 01:56:09 SERVER01 pmxcfs[3736]: [dcdb] notice: cpg_send_message retried 1 times
Mar 21 01:56:09 SERVER01 pmxcfs[3736]: [dcdb] notice: received sync request (epoch 1/3736/00000008)
Mar 21 01:56:09 SERVER01 pmxcfs[3736]: [dcdb] notice: received all states
Mar 21 01:56:09 SERVER01 pmxcfs[3736]: [dcdb] notice: leader is 1/3736
Mar 21 01:56:09 SERVER01 pmxcfs[3736]: [dcdb] notice: synced members: 1/3736, 2/3485
Mar 21 01:56:09 SERVER01 pmxcfs[3736]: [dcdb] notice: start sending inode updates
Mar 21 01:56:09 SERVER01 pmxcfs[3736]: [dcdb] notice: sent all (0) updates
Mar 21 01:56:09 SERVER01 pmxcfs[3736]: [dcdb] notice: all data is up to date
Mar 21 01:56:09 SERVER01 pmxcfs[3736]: [dcdb] notice: dfsm_deliver_queue: queue length 6
Mar 21 01:56:15 SERVER01 pmxcfs[3736]: [dcdb] notice: members: 1/3736, 2/3485, 3/3253
Mar 21 01:56:15 SERVER01 pmxcfs[3736]: [dcdb] notice: starting data syncronisation
Mar 21 01:56:15 SERVER01 pmxcfs[3736]: [dcdb] notice: received sync request (epoch 1/3736/00000009)
Mar 21 01:56:15 SERVER01 pmxcfs[3736]: [dcdb] notice: received all states
Mar 21 01:56:15 SERVER01 pmxcfs[3736]: [dcdb] notice: leader is 1/3736
Mar 21 01:56:15 SERVER01 pmxcfs[3736]: [dcdb] notice: synced members: 1/3736, 2/3485
Mar 21 01:56:15 SERVER01 pmxcfs[3736]: [dcdb] notice: start sending inode updates
Mar 21 01:56:15 SERVER01 pmxcfs[3736]: [dcdb] notice: sent all (3) updates
Mar 21 01:56:15 SERVER01 pmxcfs[3736]: [dcdb] notice: all data is up to date
Mar 21 02:09:35 SERVER01 pmxcfs[3736]: [dcdb] notice: data verification successful

It seems as if server 03 couldn't reach the other nodes and dropped out of quorum for a while. I have not detected any network connectivity issues.
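These are the commands I would use to double-check the cluster/corosync side; they only show the current state, not what happened that night (a sketch):

Code:
# quorum and membership as the cluster sees it right now
pvecm status
corosync-quorumtool -s
# health of each corosync (knet) link to the other nodes
corosync-cfgtool -s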

What do those messages really mean?

Code:
Mar 21 01:32:31 SERVER03 pmxcfs[3253]: [dcdb] crit: received write while not quorate - trigger resync
Mar 21 01:32:31 SERVER03 pmxcfs[3253]: [dcdb] crit: leaving CPG group
Mar 21 01:32:32 SERVER03 pmxcfs[3253]: [dcdb] crit: cpg_join failed: 14
Mar 21 01:32:32 SERVER03 pmxcfs[3253]: [dcdb] crit: can't initialize service
Mar 21 01:56:09 SERVER03 pmxcfs[3253]: [dcdb] crit: received write while not quorate - trigger resync
Mar 21 01:56:09 SERVER03 pmxcfs[3253]: [dcdb] crit: leaving CPG group
Mar 21 01:56:09 SERVER03 pmxcfs[3253]: [dcdb] crit: cpg_join failed: 14
Mar 21 01:56:09 SERVER03 pmxcfs[3253]: [dcdb] crit: can't initialize service
Mar 21 01:56:12 SERVER03 pmxcfs[3253]: [dcdb] crit: cpg_send_message failed: 9

Thanks!
 
