[SOLVED] LXC backup failed: "ERROR: rsync: [sender] readlink_stat(...) failed: Bad message (74)"

Alex-Brazil

Hello & greetings.

as a "newbee" in Unix / Linux system and also Proxmox I'm looking for some help please...

I have Home Assistant (VM) as well as EMQX and Zigbee2MQTT (LXC containers) running fine on my Proxmox, without any issues or problems.

For the past couple of days I have been receiving the error "Bad message (74)" from the automated backup to my QNAP NAS, and only for the Zigbee2MQTT LXC (complete log attached).
The HA VM and EMQX LXC backups run without errors.

I would be very pleased if anybody could give me a hint and a helping hand ;) on how to fix this issue.

Kind regards, Alex

Code:
INFO: starting new backup job: vzdump 9110 --notes-template '{{guestname}}' --remove 0 --storage QNAP-T451-Backups_Proxmox --notification-mode auto --mode snapshot --node pve --compress zstd
INFO: Starting Backup of VM 9110 (lxc)
INFO: Backup started at 2025-05-11 17:03:42
INFO: status = running
INFO: CT Name: lxc-zigbee2mqtt-slzb-06
INFO: including mount point rootfs ('/') in backup
INFO: mode failure - some volumes do not support snapshots
INFO: trying 'suspend' mode instead
INFO: backup mode: suspend
INFO: ionice priority: 7
INFO: CT Name: lxc-zigbee2mqtt-slzb-06
INFO: including mount point rootfs ('/') in backup
INFO: starting first sync /proc/1795223/root/ to /tmp/vzdumptmp2362770_9110/
ERROR: rsync: [sender] readlink_stat("/proc/1795223/root/opt/zigbee2mqtt/node_modules/.pnpm/@ampproject+remapping@2.3.0/node_modules/@ampproject/remapping/package.json") failed: Bad message (74)
ERROR: rsync: [sender] readlink_stat("/proc/1795223/root/opt/zigbee2mqtt/node_modules/.pnpm/@ampproject+remapping@2.3.0/node_modules/@ampproject/remapping/dist/types/build-source-map-tree.d.ts") failed: Bad message (74)
...
ERROR: rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1338) [sender=3.2.7]
ERROR: Backup of VM 9110 failed - command 'rsync --stats -h -X -A --numeric-ids -aH --delete --no-whole-file --sparse --one-file-system --relative '--exclude=/tmp/?*' '--exclude=/var/tmp/?*' '--exclude=/var/run/?*.pid' /proc/1795223/root//./ /tmp/vzdumptmp2362770_9110/' failed: exit code 23
INFO: Failed at 2025-05-11 17:04:19
INFO: Backup job finished with errors
TASK ERROR: job errors
 


Hi,
while the container is off, you could run pct fsck 9110 to check the file system.

If that does not help, please post the output of stat /opt/zigbee2mqtt/node_modules/.pnpm/@ampproject+remapping@2.3.0/node_modules/@ampproject/remapping/package.json from inside the container, the container configuration (pct config 9110), and the output of pveversion -v.
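
For anyone following along, the whole sequence could look like this (a sketch; pct enter is used here to get a shell inside the container):

Code:
# on the Proxmox host, with the container shut down:
pct shutdown 9110
pct fsck 9110

# start the container again and check the affected file from inside it:
pct start 9110
pct enter 9110
stat /opt/zigbee2mqtt/node_modules/.pnpm/@ampproject+remapping@2.3.0/node_modules/@ampproject/remapping/package.json
exit

# back on the host, collect the configuration and version information:
pct config 9110
pveversion -v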
 
Thank you very much for your advice, Fiona.
... well, I shut down the container and executed "pct fsck 9110" in the PVE shell:

Code:
root@pve:~# pct fsck 9110
fsck from util-linux 2.38.1
/mnt/pve/QNAP-T451-Backups_Proxmox/images/9110/vm-9110-disk-0.raw contains a file system with errors, check forced.
/mnt/pve/QNAP-T451-Backups_Proxmox/images/9110/vm-9110-disk-0.raw: Inode 5073 seems to contain garbage.
/mnt/pve/QNAP-T451-Backups_Proxmox/images/9110/vm-9110-disk-0.raw: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
        (i.e., without -a or -p options)
command 'fsck -a -l /mnt/pve/QNAP-T451-Backups_Proxmox/images/9110/vm-9110-disk-0.raw' failed: exit code 4
root@pve:~#


In the shell of the Zigbee2MQTT LXC I executed the "stat" command:

Code:
root@lxc-zigbee2mqtt-slzb-06:~# stat /opt/zigbee2mqtt/node_modules/.pnpm/@ampproject+remapping@2.3.0/node_modules/@ampproject/remapping/package.json
stat: cannot statx '/opt/zigbee2mqtt/node_modules/.pnpm/@ampproject+remapping@2.3.0/node_modules/@ampproject/remapping/package.json': Bad message
root@lxc-zigbee2mqtt-slzb-06:~# stst
-bash: stst: command not found

So stat cannot read /opt/zigbee2mqtt/node_modules/.pnpm/@ampproject+remapping@2.3.0/node_modules/@ampproject/remapping/package.json ...
How can I get this package.json file back?


The result of "pct config 9110":

Code:
root@pve:~# pct config 9110
arch: amd64
cores: 3
description: <div align='center'>%0A  <a href='https://Helper-Scripts.com' target='_blank' rel='noopener noreferrer'>%0A    <img src='https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/images/logo-81x112.png' alt='Logo' style='width%3A81px;height%3A112px;'/>%0A  </a>%0A%0A  <h2 style='font-size%3A 24px; margin%3A 20px 0;'>Zigbee2MQTT LXC</h2>%0A%0A  <p style='margin%3A 16px 0;'>%0A    <a href='https://ko-fi.com/community_scripts' target='_blank' rel='noopener noreferrer'>%0A      <img src='https://img.shields.io/badge/&#x2615;-Buy us a coffee-blue' alt='spend Coffee' />%0A    </a>%0A  </p>%0A  %0A  <span style='margin%3A 0 10px;'>%0A    <i class="fa fa-github fa-fw" style="color%3A #f5f5f5;"></i>%0A    <a href='https://github.com/community-scripts/ProxmoxVE' target='_blank' rel='noopener noreferrer' style='text-decoration%3A none; color%3A #00617f;'>GitHub</a>%0A  </span>%0A  <span style='margin%3A 0 10px;'>%0A    <i class="fa fa-comments fa-fw" style="color%3A #f5f5f5;"></i>%0A    <a href='https://github.com/community-scripts/ProxmoxVE/discussions' target='_blank' rel='noopener noreferrer' style='text-decoration%3A none; color%3A #00617f;'>Discussions</a>%0A  </span>%0A  <span style='margin%3A 0 10px;'>%0A    <i class="fa fa-exclamation-circle fa-fw" style="color%3A #f5f5f5;"></i>%0A    <a href='https://github.com/community-scripts/ProxmoxVE/issues' target='_blank' rel='noopener noreferrer' style='text-decoration%3A none; color%3A #00617f;'>Issues</a>%0A  </span>%0A</div>%0A
features: keyctl=1,nesting=1
hostname: lxc-zigbee2mqtt-slzb-06
memory: 1024
net0: name=eth0,bridge=vmbr0,gw=192.168.1.1,hwaddr=BC:24:11:D4:BD:DE,ip=192.168.9.110/16,type=veth
onboot: 1
ostype: debian
rootfs: QNAP-T451-Backups_Proxmox:9110/vm-9110-disk-0.raw,size=10G
startup: order=10
swap: 0
tags: community-script;mqtt;smarthome;zigbee
unprivileged: 1
root@pve:~#

Here is the result from running "pveversion -v":

Code:
root@pve:~# pveversion -v
proxmox-ve: 8.3.0 (running kernel: 6.8.12-8-pve)
pve-manager: 8.3.5 (running version: 8.3.5/dac3aa88bac3f300)
proxmox-kernel-helper: 8.1.1
proxmox-kernel-6.8: 6.8.12-8
proxmox-kernel-6.8.12-8-pve-signed: 6.8.12-8
proxmox-kernel-6.8.12-2-pve-signed: 6.8.12-2
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
ceph-fuse: 17.2.7-pve3
corosync: 3.1.7-pve3
criu: 3.17.1-2+deb12u1
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.6.0
libproxmox-backup-qemu0: 1.5.1
libproxmox-rs-perl: 0.3.5
libpve-access-control: 8.2.0
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.10
libpve-cluster-perl: 8.0.10
libpve-common-perl: 8.2.9
libpve-guest-common-perl: 5.1.6
libpve-http-server-perl: 5.2.0
libpve-network-perl: 0.10.1
libpve-rs-perl: 0.9.2
libpve-storage-perl: 8.3.3
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.5.0-1
proxmox-backup-client: 3.3.3-1
proxmox-backup-file-restore: 3.3.3-1
proxmox-firewall: 0.6.0
proxmox-kernel-helper: 8.1.1
proxmox-mail-forward: 0.3.1
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.3.6
pve-cluster: 8.0.10
pve-container: 5.2.4
pve-docs: 8.3.1
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.2
pve-firewall: 5.1.0
pve-firmware: 3.14-3
pve-ha-manager: 4.0.6
pve-i18n: 3.4.0
pve-qemu-kvm: 9.2.0-2
pve-xtermjs: 5.3.0-3
qemu-server: 8.3.8
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.7-pve1
root@pve:~#
 
As the message says, you can try to run the command manually and it should prompt you for which action to take for the repair, again while the container is shut down:
fsck -l /mnt/pve/QNAP-T451-Backups_Proxmox/images/9110/vm-9110-disk-0.raw
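
If there are many prompts to confirm, the underlying e2fsck also accepts -y to answer "yes" to every repair question automatically; a sketch, to be used with care when there is no backup of the volume:

Code:
# interactive repair, confirming each fix individually:
fsck -l /mnt/pve/QNAP-T451-Backups_Proxmox/images/9110/vm-9110-disk-0.raw

# or apply all suggested fixes automatically:
fsck -y -l /mnt/pve/QNAP-T451-Backups_Proxmox/images/9110/vm-9110-disk-0.raw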
 
Thank you <3
Code:
178) has deleted/unused inode 5136.  Clear<y>? yes
Entry 'cleverio.d.ts.map' in /opt/zigbee2mqtt/node_modules/.pnpm/zigbee-herdsman-converters@23.36.0/node_modules/zigbee-herdsman-converters/dist/devices (395178) has deleted/unused inode 5087.  Clear<y>? yes
Entry 'frient.js.map' in /opt/zigbee2mqtt/node_modules/.pnpm/zigbee-herdsman-converters@23.36.0/node_modules/zigbee-herdsman-converters/dist/devices (395178) has deleted/unused inode 5296.  Clear<y>? yes
Entry 'cleverio.js' in /opt/zigbee2mqtt/node_modules/.pnpm/zigbee-herdsman-converters@23.36.0/node_modules/zigbee-herdsman-converters/dist/devices (395178) has deleted/unused inode 5442.  Clear<y>? yes
Entry 'shinasystem.d.ts' in /opt/zigbee2mqtt/node_modules/.pnpm/zigbee-herdsman-converters@23.36.0/node_modules/zigbee-herdsman-converters/dist/devices (395178) has deleted/unused inode 5111.  Clear<y>? yes
Entry 'weten.d.ts' in /opt/zigbee2mqtt/node_modules/.pnpm/zigbee-herdsman-converters@23.36.0/node_modules/zigbee-herdsman-converters/dist/devices (395178) has deleted/unused inode 6114.  Clear<y>? yes

Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Inode 56 ref count is 2, should be 1.  Fix<y>? yes
Inode 88 ref count is 2, should be 1.  Fix<y>? yes
Inode 98 ref count is 2, should be 1.  Fix<y>? yes
Inode 112 ref count is 2, should be 1.  Fix<y>? yes
Inode 145 ref count is 3, should be 2.  Fix<y>? yes
Inode 152 ref count is 3, should be 2.  Fix<y>? yes
Inode 164 ref count is 5, should be 4.  Fix<y>? yes
Inode 192 ref count is 3, should be 2.  Fix<y>? yes
Inode 224 ref count is 4, should be 3.  Fix<y>? yes
Inode 258 ref count is 2, should be 1.  Fix<y>? yes
Inode 325 ref count is 2, should be 1.  Fix<y>? yes
Inode 336 ref count is 2, should be 1.  Fix<y>? yes

I did so, and after confirming everything with "y", I ran the command again, with this result:

Code:
root@pve:~# fsck -l /mnt/pve/QNAP-T451-Backups_Proxmox/images/9110/vm-9110-disk-0.raw
fsck from util-linux 2.38.1
e2fsck 1.47.0 (5-Feb-2023)
/mnt/pve/QNAP-T451-Backups_Proxmox/images/9110/vm-9110-disk-0.raw: clean, 60506/655360 files, 700676/2621440 blocks
root@pve:~#


and then I started a backup, which completed successfully (screenshot attached).
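
For reference, the same backup can also be triggered from the CLI, using the settings shown in the job log at the top of the thread:

Code:
vzdump 9110 --storage QNAP-T451-Backups_Proxmox --mode snapshot --compress zstd --notes-template '{{guestname}}'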
 
Thank you very much for your help :)

Only one additional question: what was the most likely cause of this error? Should I be concerned about a hardware issue?
 
Most likely the file system was interrupted at an inconvenient time. Was the container hard-stopped at some point? If it were a hardware issue, you would see messages about IO errors on the host for the disk the storage lives on. Still, it never hurts to check the SMART status of the disk (in the UI go to Datacenter > [your node] > Disks > Show SMART values).
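
From the CLI, a rough equivalent check could look like this (the /dev/sdX device name is a placeholder; and since the container's .raw disk lives on the QNAP mount here, the physical disk to inspect may well be the one inside the NAS):

Code:
# look for IO errors reported by the kernel on the host
dmesg -T | grep -iE 'i/o error|ext4'

# SMART health summary for a local disk (replace /dev/sdX with the real device)
smartctl -H /dev/sdX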