Strange RRD error

da-alb

Member
Jan 18, 2021
Hi,

I continuously get this error on this host:

Code:
Dec 28 11:21:56 pm-81 rrdcached[4513]: handle_request_update: Could not read RRD file.
Dec 28 11:21:56 pm-81 pmxcfs[4527]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/85235: -1
Dec 28 11:21:56 pm-81 pmxcfs[4527]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-vm/85235: mmaping file '/var/lib/rrdcached/db/pve2-vm/85235': Invalid argument
Dec 28 11:21:56 pm-81 rrdcached[4513]: handle_request_update: Could not read RRD file.
Dec 28 11:21:56 pm-81 pmxcfs[4527]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/85237: -1
Dec 28 11:21:56 pm-81 pmxcfs[4527]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-vm/85237: mmaping file '/var/lib/rrdcached/db/pve2-vm/85237': Invalid argument
Dec 28 11:22:07 pm-81 rrdcached[4513]: handle_request_update: Could not read RRD file.
Dec 28 11:22:07 pm-81 pmxcfs[4527]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/85237: -1
Dec 28 11:22:07 pm-81 pmxcfs[4527]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-vm/85237: mmaping file '/var/lib/rrdcached/db/pve2-vm/85237': Invalid argument
Dec 28 11:22:07 pm-81 rrdcached[4513]: handle_request_update: Could not read RRD file.
Dec 28 11:22:07 pm-81 pmxcfs[4527]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/85235: -1
Dec 28 11:22:07 pm-81 pmxcfs[4527]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-vm/85235: mmaping file '/var/lib/rrdcached/db/pve2-vm/85235': Invalid argument
Dec 28 11:22:16 pm-81 rrdcached[4513]: handle_request_update: Could not read RRD file.
Dec 28 11:22:16 pm-81 pmxcfs[4527]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/85235: -1
Dec 28 11:22:16 pm-81 pmxcfs[4527]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-vm/85235: mmaping file '/var/lib/rrdcached/db/pve2-vm/85235': Invalid argument
Dec 28 11:22:16 pm-81 rrdcached[4513]: handle_request_update: Could not read RRD file.
Dec 28 11:22:16 pm-81 pmxcfs[4527]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/85237: -1
Dec 28 11:22:16 pm-81 pmxcfs[4527]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-vm/85237: mmaping file '/var/lib/rrdcached/db/pve2-vm/85237': Invalid argument
Dec 28 11:22:26 pm-81 rrdcached[4513]: handle_request_update: Could not read RRD file.
Dec 28 11:22:26 pm-81 pmxcfs[4527]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/85235: -1
Dec 28 11:22:26 pm-81 pmxcfs[4527]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-vm/85235: mmaping file '/var/lib/rrdcached/db/pve2-vm/85235': Invalid argument
Dec 28 11:22:26 pm-81 rrdcached[4513]: handle_request_update: Could not read RRD file.
Dec 28 11:22:26 pm-81 pmxcfs[4527]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/85237: -1
Dec 28 11:22:26 pm-81 pmxcfs[4527]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-vm/85237: mmaping file '/var/lib/rrdcached/db/pve2-vm/85237': Invalid argument

Previously I had a Ceph pool on this node together with 2 other nodes. Now I don't, and the CTs reported in the log are running on another host. What should I do?

Thanks.
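
A quick way to check the two files named in the log is shown below (a sketch; rrdtool may not be installed on the node by default). A zero-length or truncated file would explain the mmap 'Invalid argument' failure:

Code:
# Check that the RRD files from the log exist and have a plausible size;
# mmap() fails with 'Invalid argument' on zero-length files
ls -l /var/lib/rrdcached/db/pve2-vm/85235 /var/lib/rrdcached/db/pve2-vm/85237
# If rrdtool is installed, a corrupt file will also fail to parse here
rrdtool info /var/lib/rrdcached/db/pve2-vm/85235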
 
Please provide the output of pveversion -v, pvecm status and the corosync config (cat /etc/pve/corosync.conf).
 
Code:
root@pm-81:~# pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.124-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
pve-kernel-5.4: 6.4-4
pve-kernel-helper: 6.4-4
pve-kernel-5.4.124-1-pve: 5.4.124-1
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph: 15.2.13-pve1~bpo10
ceph-fuse: 15.2.13-pve1~bpo10
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve4~bpo10
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-3
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.12-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-1
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.2-4
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1


root@pm-81:~# pvecm status
Cluster information
-------------------
Name:             mi-01
Config Version:   3
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Wed Dec 29 10:51:18 2021
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000002
Ring ID:          1.5b2
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 hidden.10.80
0x00000002          1 hidden.10.81 (local)
0x00000003          1 hidden.10.82

root@pm-81:~# cat /etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: pm-80
    nodeid: 1
    quorum_votes: 1
    ring0_addr: hidden.10.80
  }
  node {
    name: pm-81
    nodeid: 2
    quorum_votes: 1
    ring0_addr: hidden.10.81
  }
  node {
    name: pm-82
    nodeid: 3
    quorum_votes: 1
    ring0_addr: hidden.10.82
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: mi-01
  config_version: 3
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}
 
So these IDs are containers that run on other nodes of the same cluster?

Please provide the output of ls /etc/pve/lxc and ls /etc/pve/qemu-server from the node with this issue.
 
Code:
root@pm-81:~# ls /etc/pve/lxc
85002.conf  85015.conf  85028.conf  85041.conf  85053.conf  85066.conf  85076.conf  85089.conf  85109.conf  85120.conf  85130.conf  85142.conf  85153.conf  85382.conf  85397.conf  85435.conf
85003.conf  85017.conf  85029.conf  85042.conf  85054.conf  85067.conf  85077.conf  85093.conf  85110.conf  85121.conf  85131.conf  85143.conf  85215.conf  85384.conf  85399.conf  85436.conf
85004.conf  85018.conf  85030.conf  85043.conf  85055.conf  85068.conf  85078.conf  85098.conf  85111.conf  85122.conf  85132.conf  85145.conf  85238.conf  85385.conf  85402.conf  85437.conf
85005.conf  85019.conf  85031.conf  85044.conf  85056.conf  85069.conf  85079.conf  85100.conf  85112.conf  85123.conf  85133.conf  85146.conf  85306.conf  85386.conf  85403.conf  85438.conf
85009.conf  85020.conf  85034.conf  85045.conf  85058.conf  85070.conf  85080.conf  85101.conf  85113.conf  85124.conf  85134.conf  85147.conf  85315.conf  85387.conf  85405.conf  85439.conf
85010.conf  85021.conf  85035.conf  85047.conf  85059.conf  85071.conf  85081.conf  85102.conf  85114.conf  85125.conf  85135.conf  85148.conf  85340.conf  85390.conf  85412.conf  85440.conf
85011.conf  85022.conf  85036.conf  85049.conf  85060.conf  85072.conf  85083.conf  85103.conf  85115.conf  85126.conf  85136.conf  85149.conf  85377.conf  85393.conf  85431.conf
85012.conf  85024.conf  85038.conf  85050.conf  85062.conf  85073.conf  85084.conf  85104.conf  85116.conf  85127.conf  85138.conf  85150.conf  85379.conf  85394.conf  85432.conf
85013.conf  85026.conf  85039.conf  85051.conf  85064.conf  85074.conf  85085.conf  85105.conf  85118.conf  85128.conf  85140.conf  85151.conf  85380.conf  85395.conf  85433.conf
85014.conf  85027.conf  85040.conf  85052.conf  85065.conf  85075.conf  85087.conf  85106.conf  85119.conf  85129.conf  85141.conf  85152.conf  85381.conf  85396.conf  85434.conf
root@pm-81:~# ls /etc/pve/qemu-server
1002.conf  1005.conf  1017.conf  6040.conf  85208.conf

As you can see, they are not listed here; they are running on another node.
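
Note that /etc/pve/lxc is a symlink to the local node's own config directory, so it only lists guests assigned to this node. The two IDs from the log can be located cluster-wide like this (a sketch using the standard pmxcfs layout):

Code:
# Search every node's guest configs for the IDs that appear in the log
find /etc/pve/nodes -name '85235.conf' -o -name '85237.conf'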

Thanks
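
The fix commonly reported for this error is to move the affected RRD files aside and let the node recreate them; a hedged sketch (the IDs are taken from the log above, verify them on your system before moving anything):

Code:
# Move the problematic RRD files out of the way rather than deleting them
mkdir -p /root/rrd-backup
mv /var/lib/rrdcached/db/pve2-vm/85235 /var/lib/rrdcached/db/pve2-vm/85237 /root/rrd-backup/
# Restart the daemons; fresh RRD files are created on the next status update
systemctl restart rrdcached.service pve-cluster.service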
 

About

The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.
We think our community is one of the best thanks to people like you!

Get your subscription!

The Proxmox team works very hard to make sure you are running the best software and getting stable updates and security enhancements, as well as quick enterprise support. Tens of thousands of happy customers have a Proxmox subscription. Get yours easily in our online shop.

Buy now!