syslog getting spammed with "notice: RRD update error" messages

Code:
Sep  1 08:58:18 vm-gravity-2 pmxcfs[1514236]: [status] notice: ----
Sep  1 08:58:18 vm-gravity-2 pmxcfs[1514236]: [status] notice: VMID: 130
Sep  1 08:58:18 vm-gravity-2 pmxcfs[1514236]: [status] notice: filename: /var/lib/rrdcached/db/pve-vm-9.0/130
Sep  1 08:58:18 vm-gravity-2 pmxcfs[1514236]: [status] notice: filename_pve2: /var/lib/rrdcached/db/pve2-vm/130
Sep  1 08:58:18 vm-gravity-2 pmxcfs[1514236]: [status] notice: use_pve2_file: 0
Sep  1 08:58:18 vm-gravity-2 pmxcfs[1514236]: [status] notice: haven't found pve-vm-9.0/130 but old pve2-vm/130
Sep  1 08:58:18 vm-gravity-2 pmxcfs[1514236]: [status] notice: key: pve-vm-9.0/130
Sep  1 08:58:18 vm-gravity-2 pmxcfs[1514236]: [status] notice: data: 1756731498:16:0.0305776285802099:137436856320:13468389376:34359738368:0:97658207268:1189251578126:73600066499752:301862891520
Sep  1 08:58:18 vm-gravity-2 pmxcfs[1514236]: [status] notice: keep_columns: 11
Sep  1 08:58:18 vm-gravity-2 pmxcfs[1514236]: [status] notice: padding: 0
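
For anyone following along, something like this should show how noisy it is (assuming pmxcfs logs under the syslog identifier `pmxcfs`, as on a stock install):

Code:
# count today's RRD update errors logged by pmxcfs
journalctl -t pmxcfs --since today | grep -c 'RRD update error'

# show the most recent occurrences
journalctl -t pmxcfs --since today | grep 'RRD update error' | tail -n 20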
 
Thanks! This is getting more confusing the more info I get.

The `ls` command from earlier was also run on the same node (vm-gravity-2), right?

Would it be possible to give us remote SSH access so we can try to debug this directly on that host? This would mean giving direct access to port 22 on that host from the IP addresses of our offices.

We would need to install a few more packages (gdb, a debug build of pve-cluster), and during the debugging the /etc/pve directory will not work as expected, since we would halt the process to inspect its state.

Therefore, nothing important should be running on that node.
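
To give a rough idea of what that session would look like: apart from installing the debug build, it mostly boils down to attaching gdb to the running pmxcfs process, roughly like this (the exact steps would be clarified in the ticket):

Code:
# install the debugger (the debug build of pve-cluster would come from us)
apt install gdb

# attach to the running pmxcfs process; while gdb holds it stopped,
# /etc/pve will be unresponsive on this node
gdb -p $(pidof pmxcfs)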

If that would be an option for you, please contact us via our enterprise support portal at https://my.proxmox.com and mention that Aaron sent you. We can then discuss the details of how to connect there, as that is not something I want to discuss in a public forum :-)
 
Unfortunately this cluster does not have direct inbound internet access. However, I believe I was able to solve the issue.


Code:
notice: filename: /var/lib/rrdcached/db/pve-vm-9.0/130
notice: filename_pve2: /var/lib/rrdcached/db/pve2-vm/130
notice: use_pve2_file: 0
notice: haven't found pve-vm-9.0/130 but old pve2-vm/130
notice: key: pve-vm-9.0/130

On several nodes in the cluster, it appears there were both a pve-vm-9.0 and a pve2-vm directory, each containing some files.
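
A quick way to check this per node looks roughly like the following (the paths are the standard rrdcached locations from the log above):

Code:
# show which of the two RRD layouts exist and how many files each holds
for d in /var/lib/rrdcached/db/pve2-vm /var/lib/rrdcached/db/pve-vm-9.0; do
    [ -d "$d" ] && echo "$d: $(ls "$d" | wc -l) files"
done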

I ran this script on each node in the cluster:

Code:
# move the existing RRD files out of the way so they get recreated cleanly
cd /var/lib/rrdcached/db
mkdir /var/lib/rrdcached.bak
systemctl stop pvestatd
mv * /var/lib/rrdcached.bak
systemctl start pvestatd

Once I had executed that script on all the nodes, the errors disappeared.
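
For anyone who wants to do the same across a whole cluster in one go, a rough sketch (the node names are placeholders; assumes root SSH between the nodes, as is usual in a PVE cluster):

Code:
#!/bin/bash
# placeholder node names -- replace with your actual cluster nodes
NODES="vm-gravity-1 vm-gravity-2 vm-gravity-3"

for node in $NODES; do
    ssh root@"$node" '
        mkdir -p /var/lib/rrdcached.bak
        systemctl stop pvestatd
        mv /var/lib/rrdcached/db/* /var/lib/rrdcached.bak/
        systemctl start pvestatd
    '
done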
 
Okay. Just to be sure: the `ls` output which reported that the pve2-vm directory does not exist was also from the same node, vm-gravity-2?