[SOLVED] corosync errors

Hi, we just upgraded our three-node cluster from 6.4 to 7.1. The in-place upgrade went fine.

On this occasion we took a closer look at the log files and found the three sets of entries shown below. They also appear in older log files, so this has nothing to do with the upgrade.

Does anyone have an idea what this could be? Is it critical? In practice, everything works as usual.

The log entries appear on all three nodes.


Code:
Feb 21 10:01:07 node3 corosync[4416]:   [KNET  ] link: host: 1 link: 0 is down
Feb 21 10:01:07 node3 corosync[4416]:   [KNET  ] host: host: 1 (passive) best link: 0 (pri: 1)
Feb 21 10:01:07 node3 corosync[4416]:   [KNET  ] host: host: 1 has no active links
Feb 21 10:01:09 node3 corosync[4416]:   [TOTEM ] Token has not been received in 2737 ms
Feb 21 10:01:18 node3 corosync[4416]:   [QUORUM] Sync members[1]: 3
Feb 21 10:01:18 node3 corosync[4416]:   [QUORUM] Sync left[2]: 1 2
Feb 21 10:01:18 node3 corosync[4416]:   [TOTEM ] A new membership (3.51f1) was formed. Members left: 1 2
Feb 21 10:01:18 node3 corosync[4416]:   [TOTEM ] Failed to receive the leave message. failed: 1 2
Feb 21 10:01:18 node3 pmxcfs[1790718]: [dcdb] notice: members: 3/1790718
Feb 21 10:01:18 node3 pmxcfs[1790718]: [status] notice: members: 3/1790718
Feb 21 10:01:18 node3 corosync[4416]:   [QUORUM] Members[1]: 3
Feb 21 10:01:18 node3 corosync[4416]:   [MAIN  ] Completed service synchronization, ready to provide service.
Feb 21 10:01:23 node3 corosync[4416]:   [QUORUM] Sync members[1]: 3
Feb 21 10:01:23 node3 corosync[4416]:   [TOTEM ] A new membership (3.51f5) was formed. Members
Feb 21 10:01:23 node3 corosync[4416]:   [QUORUM] Members[1]: 3
Feb 21 10:01:23 node3 corosync[4416]:   [MAIN  ] Completed service synchronization, ready to provide service.
Feb 21 10:01:25 node3 corosync[4416]:   [KNET  ] rx: host: 1 link: 0 is up
Feb 21 10:01:25 node3 corosync[4416]:   [KNET  ] host: host: 1 (passive) best link: 0 (pri: 1)
Feb 21 10:01:25 node3 corosync[4416]:   [QUORUM] Sync members[3]: 1 2 3
Feb 21 10:01:25 node3 corosync[4416]:   [QUORUM] Sync joined[2]: 1 2
Feb 21 10:01:25 node3 corosync[4416]:   [TOTEM ] A new membership (1.51f9) was formed. Members joined: 1 2

Code:
Feb 17 23:01:37 node2 pmxcfs[2437]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-node/atlas: -1
Feb 17 23:01:37 node2 pmxcfs[2437]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/107: -1
Feb 17 23:01:37 node2 pmxcfs[2437]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/999: -1
Feb 17 23:01:37 node2 pmxcfs[2437]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/160: -1
Feb 17 23:01:37 node2 pmxcfs[2437]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/112: -1
Feb 17 23:01:37 node2 pmxcfs[2437]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/199: -1
Feb 17 23:01:37 node2 pmxcfs[2437]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/110: -1
Feb 17 23:01:37 node2 pmxcfs[2437]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/111: -1
Feb 17 23:01:37 node2 pmxcfs[2437]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/998: -1
Feb 17 23:01:37 node2 pmxcfs[2437]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/102: -1
Feb 17 23:01:37 node2 pmxcfs[2437]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/217: -1
Feb 17 23:01:37 node2 pmxcfs[2437]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/114: -1
Feb 17 23:01:37 node2 pmxcfs[2437]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/150: -1
Feb 17 23:01:37 node2 pmxcfs[2437]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/131: -1
Feb 17 23:01:37 node2 pmxcfs[2437]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/109: -1
Feb 17 23:01:37 node2 pmxcfs[2437]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/130: -1
Feb 17 23:01:37 node2 pmxcfs[2437]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/218: -1
Feb 17 23:01:37 node2 pmxcfs[2437]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/201: -1
Feb 17 23:01:37 node2 pmxcfs[2437]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/997: -1

Code:
Feb 21 10:14:22 node3 pmxcfs[1790718]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/node1/backup_pve_vm: -1
Feb 21 10:14:22 node3 pmxcfs[1790718]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/node1/vol_proxmox_images: -1
Feb 21 10:14:22 node3 pmxcfs[1790718]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/node1/zfs_cluster_data: -1
Feb 21 10:14:22 node3 pmxcfs[1790718]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/node1/thinpool-vmdata_ssd: -1
Feb 21 10:14:22 node3 pmxcfs[1790718]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/node1/local: -1
Feb 21 10:14:22 node3 pmxcfs[1790718]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/node2/vol_proxmox_vmdata: -1
 
hi,

we just upgraded our three-node cluster from 6.4 to 7.1. The in-place upgrade went fine.
okay.

* are all the nodes upgraded to the same package levels?
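
For reference, a quick way to compare package levels is to run the following on each node (a minimal sketch using the standard PVE tooling):

Code:
# list the kernel and all PVE-related package versions on this node
pveversion -v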

Code:
Feb 17 23:01:37 node2 pmxcfs[2437]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/102: -1
Feb 17 23:01:37 node2 pmxcfs[2437]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/217: -1
Feb 17 23:01:37 node2 pmxcfs[2437]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/114: -1
Feb 17 23:01:37 node2 pmxcfs[2437]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/150: -1
Feb 17 23:01:37 node2 pmxcfs[2437]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/131: -1
Feb 17 23:01:37 node2 pmxcfs[2437]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/109: -1
Feb 17 23:01:37 node2 pmxcfs[2437]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/130: -1
Feb 17 23:01:37 node2 pmxcfs[2437]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/218: -1
Feb 17 23:01:37 node2 pmxcfs[2437]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/201: -1
Feb 17 23:01:37 node2 pmxcfs[2437]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/997: -1
...
Feb 21 10:14:22 node3 pmxcfs[1790718]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/node1/backup_pve_vm: -1
Feb 21 10:14:22 node3 pmxcfs[1790718]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/node1/vol_proxmox_images: -1
Feb 21 10:14:22 node3 pmxcfs[1790718]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/node1/zfs_cluster_data: -1
Feb 21 10:14:22 node3 pmxcfs[1790718]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/node1/thinpool-vmdata_ssd: -1
Feb 21 10:14:22 node3 pmxcfs[1790718]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/node1/local: -1
Feb 21 10:14:22 node3 pmxcfs[1790718]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/node2/vol_proxmox_vmdata: -1
* have you taken a look at the workaround in this thread [0] for those errors?

you can back up the rrdcached folder on your PVE node, then restart the relevant services, and the issue should go away:
Code:
systemctl stop rrdcached
mv /var/lib/rrdcached /var/lib/rrdcached.bak
systemctl start rrdcached
systemctl restart pve-cluster

[0]: https://forum.proxmox.com/threads/rrdc-and-rrd-update-errors.76219/#post-351660
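
After the services come back up, the RRD files should be recreated automatically. One way to confirm the errors are gone (assuming the default systemd journal setup) is to watch the pve-cluster log for a while:
Code:
# follow the pve-cluster journal and show only further RRDC update errors (Ctrl+C to stop)
journalctl -u pve-cluster -f | grep -i "RRDC update error"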
 
Thank you, we did this several times :-)

It turned out to be a side effect in our 10G UniFi switch: the IP address of the MAC was duplicated in the ARP table. Removing the redundant entry solved every problem on earth. Things are really nice now :-)
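
For anyone hitting the same symptom: a duplicated IP/MAC mapping can also be cross-checked from a node itself. A minimal sketch, where the bridge name vmbr0 and the peer address 192.168.1.12 are placeholders for your cluster network:

Code:
# show this node's neighbour (ARP) table for the cluster bridge
ip neigh show dev vmbr0
# probe a peer node's IP; replies from more than one MAC address indicate a conflict
arping -I vmbr0 -c 3 192.168.1.12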

Thank you for taking care!
 