PBS Backup Failure

plexrva
May 2, 2022
Since 9-30-2025 I have been having an issue getting 3 containers and a VM to back up to my Proxmox Backup Server. In total I have 15 containers and 2 VMs that all back up to PBS; these are the only ones failing.

I shut down the containers and ran fsck on them to make sure everything was fine, which it was. Some of the containers are on the data2 and data mount points, which are separate NVMe drives formatted as ext4. Several other containers on those same mount points have no issue backing up to PBS.

There is plenty of space on PBS (400 GB free, which is enough for several full backups of each of these containers and the VM).

I was able to get CT115 to back up by deleting the failed backup on PBS, backing CT115 up locally onto my Proxmox server, then changing the destination back to PBS and running the backup again. That method was not successful for the other 2 containers and 1 VM; they will back up locally.

Does anyone have any idea what is going on?

Versions-
Code:
root@pve:~# pveversion
pve-manager/8.3.2/3e76eec21c4a14a7 (running kernel: 6.8.12-5-pve)

root@pbs:~#  uname -r
6.8.12-4-pve
root@pbs:~# proxmox-backup-manager version
proxmox-backup-server 3.4.7-1 running version: 3.4.7

FSCK-
Code:
root@pve:~# pct fsck 114
fsck from util-linux 2.38.1
/mnt/data2/images/114/vm-114-disk-0.raw: clean, 28357/4194304 files, 3118175/16777216 blocks
root@pve:~# pct fsck 113
fsck from util-linux 2.38.1
/mnt/data2/images/113/vm-113-disk-0.raw: clean, 3247012/19660800 files, 40007371/78643200 blocks
root@pve:~# pct fsck 115
fsck from util-linux 2.38.1
/mnt/data/images/115/vm-115-disk-0.raw: clean, 579127/7864320 files, 16278288/31457280 blocks


PBS error log for one of the backups:
Code:
2025-09-16T03:08:56-04:00: successfully added chunk 5679000645adab60065e3fd8f668a55f966cd275160fc669215374de49f79505 to dynamic index 1 (offset 9153652, size 160374)
2025-09-16T03:08:56-04:00: successfully added chunk e1570d3848c4401f2cc1dec007e103d5a9af8c60508d43b054882b12d2a3dd05 to dynamic index 1 (offset 9314026, size 621438)
2025-09-16T03:08:56-04:00: successfully added chunk 9e4d0b34a2302abd2779323df6f5df5ca27bb6489392e98c2b855b3e1d92698c to dynamic index 1 (offset 9935464, size 285044)
2025-09-16T03:08:56-04:00: successfully added chunk b4ab98e6a7024666832bc9f8033dd9b49a5f419eee5b24b27e827a964236ae98 to dynamic index 1 (offset 10220508, size 184782)
2025-09-16T03:08:56-04:00: successfully added chunk 0ceeafb049aeac82cfdca6de9237c55925785cd06ef22bee80d339255419ceda to dynamic index 1 (offset 10405290, size 191977)
2025-09-16T03:08:56-04:00: successfully added chunk ee6068533449bc0c7bb2d31e8dbc0f6189d320fb4a5586d09a65871df041bfbe to dynamic index 1 (offset 10597267, size 854612)
2025-09-16T03:08:56-04:00: successfully added chunk e0503c7d5e861c036fb766cf0d7f439512687777e3e43eef7c772e2fcdebfcb2 to dynamic index 1 (offset 11451879, size 143678)
2025-09-16T03:08:56-04:00: successfully added chunk a7bb0d3747385a28e2c4e0b51915c1562563c9a981a1126e1ad17623de623bac to dynamic index 1 (offset 11595557, size 1752686)
2025-09-16T03:08:56-04:00: successfully added chunk 614c3980912b0bf96ff603159c868bc6b573e1386acfeb6b23763ad4b8ae9ec2 to dynamic index 1 (offset 13348243, size 142033)
2025-09-16T03:08:56-04:00: successfully added chunk 7c732639f64d6d21763352ec0daa8004e041c4d804471f67cb765133c2a7eddc to dynamic index 1 (offset 13490276, size 876877)
2025-09-16T03:08:56-04:00: successfully added chunk e734476ef7da5d9ec0322d5d7ff30e127f7d20472bbba8a9e2bf3e67cc3d1104 to dynamic index 1 (offset 14367153, size 1651126)
2025-09-16T03:08:56-04:00: successfully added chunk c6149e57bb15155f6b638dbfe90aaf2246a673470ab0e30d32bf525edfe72927 to dynamic index 1 (offset 16018279, size 342462)
2025-09-16T03:08:56-04:00: successfully added chunk 61e61ba7f644ef3c609972c84a1a142850ddd84c09d760e299a7fb86a749eef0 to dynamic index 1 (offset 16360741, size 73759)
2025-09-16T03:08:56-04:00: POST /dynamic_close
2025-09-16T03:08:56-04:00: Upload statistics for 'catalog.pcat1.didx'
2025-09-16T03:08:56-04:00: UUID: dd0964f5a0644de280cb8cb35c787f0b
2025-09-16T03:08:56-04:00: Checksum: 1e2388c772a4b27287eb0a1723eee5a2f61b1db0c2e4d522abe937b2964ecdb5
2025-09-16T03:08:56-04:00: Size: 16434500
2025-09-16T03:08:56-04:00: Chunk count: 34
2025-09-16T03:08:56-04:00: Upload size: 16434500 (100%)
2025-09-16T03:08:56-04:00: Duplicates: 0+9 (26%)
2025-09-16T03:08:56-04:00: Compression: 37%
2025-09-16T03:08:56-04:00: successfully closed dynamic index 1
2025-09-16T03:08:56-04:00: POST /blob
2025-09-16T03:08:56-04:00: add blob "/mnt/storage_nvme/ct/115/2025-09-16T07:00:06Z/index.json.blob" (535 bytes, comp: 535)
2025-09-16T03:08:56-04:00: POST /finish
2025-09-16T03:09:04-04:00: stat registered chunks failed - stat failed on 659c76493466bdd320ab97a4edafd4ca80cabed9d59263d0b398c4afac431642

Caused by:
    Structure needs cleaning (os error 117)
2025-09-16T03:09:04-04:00: POST /finish: 400 Bad Request: stat known chunks failed - stat failed on 659c76493466bdd320ab97a4edafd4ca80cabed9d59263d0b398c4afac431642

Caused by:
    Structure needs cleaning (os error 117)
2025-09-16T03:09:04-04:00: backup ended and finish failed: backup ended but finished flag is not set.
2025-09-16T03:09:04-04:00: removing unfinished backup
2025-09-16T03:09:04-04:00: removing backup snapshot "/mnt/storage_nvme/ct/115/2025-09-16T07:00:06Z"
2025-09-16T03:09:04-04:00: TASK ERROR: backup ended but finished flag is not set.
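For context, "Structure needs cleaning" is the kernel's errno 117 (EUCLEAN), which ext4 returns when it detects on-disk metadata corruption. A quick way to confirm the errno mapping, assuming a Linux host with python3 available (not Proxmox-specific):

```shell
# Print the symbolic name and message for errno 117 on Linux
python3 -c 'import errno, os; print(errno.errorcode[117], os.strerror(117))'
```

An EUCLEAN from stat/mkstemp on the datastore points at filesystem corruption on the PBS disk rather than at anything in the backup job itself, which is why running fsck on the PBS filesystem is the relevant next step.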


PVE error log for a CT:
Code:
INFO: starting new backup job: vzdump 113 --all 0 --storage pbs --notes-template '{{guestname}}' --fleecing 0 --mode stop --node pve --compress zstd
INFO: Starting Backup of VM 113 (lxc)
INFO: Backup started at 2025-11-02 10:52:30
INFO: status = running
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: CT Name: ****
INFO: including mount point rootfs ('/') in backup
INFO: excluding bind mount point mp0 ('/mnt/plex/lvm') from backup (not a volume)
INFO: excluding bind mount point mp1 ('/mnt/Sabnzbd_Docker/incomplete') from backup (not a volume)
INFO: stopping virtual guest
INFO: creating Proxmox Backup Server archive 'ct/113/2025-11-02T15:52:30Z'
INFO: set max number of entries in memory for file-based backups to 1048576
INFO: run: /usr/bin/proxmox-backup-client backup --crypt-mode=encrypt --keyfd=17 pct.conf:/var/tmp/vzdumptmp3028621_113/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found --exclude=/tmp/?* --exclude=/var/tmp/?* --exclude=/var/run/?*.pid --backup-type ct --backup-id 113 --backup-time 1762098750 --entries-max 1048576 --repository proxmox@pbs@192.168.1.113:proxmox
INFO: Starting backup: ct/113/2025-11-02T15:52:30Z  
INFO: Client name: pve  
INFO: Starting backup protocol: Sun Nov  2 10:52:43 2025  
INFO: Using encryption key from file descriptor..  

INFO: Downloading previous manifest (Tue Sep  9 02:30:01 2025)  
INFO: Upload config file '/var/tmp/vzdumptmp3028621_113/etc/vzdump/pct.conf' to 'proxmox@pbs@192.168.1.113:8007:proxmox' as pct.conf.blob  
INFO: Upload directory '/mnt/vzsnap0' to 'proxmox@pbs@*****:proxmox' as root.pxar.didx  
INFO: processed 22.564 GiB in 1m, uploaded 56.268 MiB
INFO: unclosed encoder dropped
INFO: closed encoder dropped with state
INFO: unfinished encoder state dropped
INFO: finished encoder state with errors
INFO: catalog upload error - channel closed  
INFO: Error: inserting chunk on store 'proxmox' failed for 659b821fea4614f527039b8ea5f7e33a56c16a4676cddd75d5a8b48de4c8863d - mkstemp "/mnt/storage_nvme/.chunks/659b/659b821fea4614f527039b8ea5f7e33a56c16a4676cddd75d5a8b48de4c8863d.tmp_XXXXXX" failed: EUCLEAN: Structure needs cleaning
INFO: restarting vm
INFO: guest is online again after 131 seconds
ERROR: Backup of VM 113 failed - command '/usr/bin/proxmox-backup-client backup '--crypt-mode=encrypt' '--keyfd=17' pct.conf:/var/tmp/vzdumptmp3028621_113/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found '--exclude=/tmp/?*' '--exclude=/var/tmp/?*' '--exclude=/var/run/?*.pid' --backup-type ct --backup-id 113 --backup-time 1762098750 --entries-max 1048576 --repository proxmox@pbs@192.168.1.113:proxmox' failed: exit code 255
INFO: Failed at 2025-11-02 10:54:41
INFO: Backup job finished with errors
TASK ERROR: job errors
 
Last edited:
Hi. Is the filesystem on the PBS in good condition? Especially the .chunks/659b directory.
(I'm not asking about free space.)
 
Hi. Is the filesystem on the PBS in good condition? Especially the .chunks/659b directory.
(I'm not asking about free space.)
Do you mean with a SMART test like this?

Code:
root@pbs:~# smartctl -a /dev/sda
smartctl 7.3 2022-02-28 r5338 [x86_64-linux-6.8.12-4-pve] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model:     ORICO-2TB
***
LU WWN Device Id: 0 000000 000000000
Firmware Version: VE1R910F
User Capacity:    2,048,408,248,320 bytes [2.04 TB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    Solid State Device
Form Factor:      2.5 inches
TRIM Command:     Available, deterministic
Device is:        Not in smartctl database 7.3/5319
ATA Version is:   ACS-3, ATA8-ACS T13/1699-D revision 6
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sun Nov  2 13:33:07 2025 EST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity
                                        was never started.
                                        Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                (    1) seconds.
Offline data collection
capabilities:                    (0x59) SMART execute Offline immediate.
                                        No Auto Offline data collection support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        No Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0002) Does not save SMART data before
                                        entering power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   1) minutes.
Extended self-test routine
recommended polling time:        (   2) minutes.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   100   100   050    Pre-fail  Always       -       0
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       802
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       105
161 Unknown_Attribute       0x0032   100   100   050    Old_age   Always       -       0
162 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       5548
163 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       3000
164 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       0
166 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       65
167 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       0
168 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       0
169 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       100
171 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       0
172 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       0
174 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       37
175 Program_Fail_Count_Chip 0x0032   100   100   000    Old_age   Always       -       0
181 Program_Fail_Cnt_Total  0x0022   100   100   000    Old_age   Always       -       11866619
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
194 Temperature_Celsius     0x0022   100   100   000    Old_age   Always       -       40
195 Hardware_ECC_Recovered  0x003a   100   100   000    Old_age   Always       -       5820
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       0
199 UDMA_CRC_Error_Count    0x0032   100   100   000    Old_age   Always       -       0
206 Unknown_SSD_Attribute   0x0032   100   100   000    Old_age   Always       -       0
207 Unknown_SSD_Attribute   0x0032   100   100   000    Old_age   Always       -       4
232 Available_Reservd_Space 0x0032   100   100   000    Old_age   Always       -       100
241 Total_LBAs_Written      0x0032   100   100   000    Old_age   Always       -       1590
242 Total_LBAs_Read         0x0032   100   100   000    Old_age   Always       -       99
249 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       1728
250 Read_Error_Retry_Rate   0x0032   100   100   000    Old_age   Always       -       1605

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 0
Note: revision number not 1 implies that no selective self-test has ever been run
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
 
Do you mean with a SMART test like this?
No :) . I mean the filesystem, not the hardware.
Edit: there may be some hardware-related issue, but SMART may not necessarily detect it.
Are there any warnings or errors in journalctl on the PBS in the timeframe of this failing backup?
 
Last edited:
No :) . I mean the filesystem, not the hardware.
Edit: there may be some hardware-related issue, but SMART may not necessarily detect it.
Are there any warnings or errors in journalctl on the PBS in the timeframe of this failing backup?
I was hoping to avoid the hardware check, but I ended up doing it. While on a live CD I ran fsck on all partitions; it found some errors, which I told it to fix. After the errors were fixed I ran fsck again to confirm it was all good, and it came up clean. I should have taken a photo or something. The device booted up fine after that, so I tried 2 backups of those CTs.


Journalctl output on PBS while I tried 2 backups, which both failed:

Code:
Nov 02 15:00:32 pbs proxmox-backup-proxy[676]: starting new backup on datastore 'proxmox' from ::ffff:*: "ct/113/2025-11-02T20:00:19Z"
Nov 02 15:02:21 pbs proxmox-backup-proxy[676]: TASK ERROR: connection error: connection reset: connection reset
Nov 02 15:03:44 pbs proxmox-backup-proxy[676]: starting new backup on datastore 'proxmox' from ::ffff:*: "vm/102/2025-11-02T20:03:41Z"
Nov 02 15:04:04 pbs postfix/qmgr[816]: 8DA06101596: from=<root@pbs.*>, size=750, nrcpt=1 (queue active)
Nov 02 15:04:04 pbs postfix/qmgr[816]: 132FE1004FB: from=<root@pbs.*>, size=750, nrcpt=1 (queue active)
Nov 02 15:04:04 pbs postfix/error[845]: 8DA06101596: to=<*>, relay=none, delay=25563, delays=25563/0.02/0/0.01, dsn=4.4.3, status=deferred (delivery temporarily suspended: Host or domain name not found. Name service error for name=gmail.com type=MX: Host not found, try again)
Nov 02 15:04:04 pbs postfix/error[846]: 132FE1004FB: to=<*>, relay=none, delay=11162, delays=11162/0.02/0/0, dsn=4.4.3, status=deferred (delivery temporarily suspended: Host or domain name not found. Name service error for name=gmail.com type=MX: Host not found, try again)
Nov 02 15:04:58 pbs login[851]: pam_unix(login:session): session opened for user root(uid=0) by (uid=0)
Nov 02 15:04:58 pbs systemd[1]: Created slice user-0.slice - User Slice of UID 0.
Nov 02 15:04:58 pbs systemd[1]: Starting user-runtime-dir@0.service - User Runtime Directory /run/user/0...
Nov 02 15:04:58 pbs systemd-logind[485]: New session 3 of user root.
Nov 02 15:04:58 pbs systemd[1]: Finished user-runtime-dir@0.service - User Runtime Directory /run/user/0.
Nov 02 15:04:58 pbs systemd[1]: Starting user@0.service - User Manager for UID 0...
Nov 02 15:04:58 pbs (systemd)[857]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Nov 02 15:04:58 pbs systemd[857]: Queued start job for default target default.target.
Nov 02 15:04:58 pbs systemd[857]: Created slice app.slice - User Application Slice.
Nov 02 15:04:58 pbs systemd[857]: Reached target paths.target - Paths.
Nov 02 15:04:58 pbs systemd[857]: Reached target timers.target - Timers.
Nov 02 15:04:58 pbs systemd[857]: Listening on dirmngr.socket - GnuPG network certificate management daemon.
Nov 02 15:04:58 pbs systemd[857]: Listening on gpg-agent-browser.socket - GnuPG cryptographic agent and passphrase cache (access for web browsers).
Nov 02 15:04:58 pbs systemd[857]: Listening on gpg-agent-extra.socket - GnuPG cryptographic agent and passphrase cache (restricted).
Nov 02 15:04:58 pbs systemd[857]: Listening on gpg-agent-ssh.socket - GnuPG cryptographic agent (ssh-agent emulation).
Nov 02 15:04:58 pbs systemd[857]: Listening on gpg-agent.socket - GnuPG cryptographic agent and passphrase cache.
Nov 02 15:04:58 pbs systemd[857]: Reached target sockets.target - Sockets.
Nov 02 15:04:58 pbs systemd[857]: Reached target basic.target - Basic System.
Nov 02 15:04:58 pbs systemd[857]: Reached target default.target - Main User Target.
Nov 02 15:04:58 pbs systemd[857]: Startup finished in 231ms.
Nov 02 15:04:58 pbs systemd[1]: Started user@0.service - User Manager for UID 0.
Nov 02 15:04:58 pbs systemd[1]: Started session-3.scope - Session 3 of User root.
Nov 02 15:04:58 pbs login[872]: ROOT LOGIN  on '/dev/pts/0'
Nov 02 15:05:08 pbs chronyd[650]: Selected source 12.205.28.193 (2.debian.pool.ntp.org)
Nov 02 15:06:20 pbs proxmox-backup-proxy[676]: TASK ERROR: backup ended but finished flag is not set.
 
pbs proxmox-backup-proxy[676]: TASK ERROR: connection error: connection reset: connection reset

I'm not an expert, but this error on the PBS side may mean that the client (PVE) quit for some reason...

I can see that you have already posted some task logs from the PVE, but can you post the ones from the attempts that failed recently (after the fsck on the PBS)?

I hope someone more experienced will be able to help.
 
I'm not an expert, but this error on the PBS side may mean that the client (PVE) quit for some reason...

I can see that you have already posted some task logs from the PVE, but can you post the ones from the attempts that failed recently (after the fsck on the PBS)?

I hope someone more experienced will be able to help.
I don't believe there is a network issue here, because I can do all my other backups aside from these specific 3 CTs and 1 VM without issue. These 3 CTs and 1 VM have the largest boot disk sizes: one is 128 GB and the others are 64 GB, but the backup is using snapshot mode. "Data" (on Proxmox VE) has 170 GB+ free and "local" (pve) has 135 GB+ free. About 6 months ago I ran into an issue where local (pve) didn't have enough storage because I had put a bunch of ISOs on it.


Thanks for your help so far; at least the filesystem on the PBS is in better shape now.

This is from one of the failed backups after the fsck:
Code:
INFO: starting new backup job: vzdump 113 --all 0 --storage pbs --notes-template '{{guestname}}' --compress zstd --fleecing 0 --mode stop --node pve
INFO: Starting Backup of VM 113 (lxc)
INFO: Backup started at 2025-11-02 15:00:19
INFO: status = running
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: CT Name: Media-Docker11
INFO: including mount point rootfs ('/') in backup
INFO: excluding bind mount point mp0 ('/mnt/plex/lvm') from backup (not a volume)
INFO: excluding bind mount point mp1 ('/mnt/Sabnzbd_Docker/incomplete') from backup (not a volume)
INFO: stopping virtual guest
INFO: creating Proxmox Backup Server archive 'ct/113/2025-11-02T20:00:19Z'
INFO: set max number of entries in memory for file-based backups to 1048576
INFO: run: /usr/bin/proxmox-backup-client backup --crypt-mode=encrypt --keyfd=17 pct.conf:/var/tmp/vzdumptmp3603829_113/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found --exclude=/tmp/?* --exclude=/var/tmp/?* --exclude=/var/run/?*.pid --backup-type ct --backup-id 113 --backup-time 1762113619 --entries-max 1048576 --repository proxmox@pbs@192.168.1.113:proxmox
INFO: Starting backup: ct/113/2025-11-02T20:00:19Z  
INFO: Client name: pve  
INFO: Starting backup protocol: Sun Nov  2 15:00:32 2025  
INFO: Using encryption key from file descriptor..  

INFO: Downloading previous manifest (Tue Sep  9 02:30:01 2025)  
INFO: Upload config file '/var/tmp/vzdumptmp3603829_113/etc/vzdump/pct.conf' to 'proxmox@pbs@192.168.1.113:8007:proxmox' as pct.conf.blob  
INFO: Upload directory '/mnt/vzsnap0' to 'proxmox@pbs@192.168.1.113:8007:proxmox' as root.pxar.didx  
INFO: processed 22.533 GiB in 1m, uploaded 56.271 MiB
INFO: unclosed encoder dropped
INFO: closed encoder dropped with state
INFO: unfinished encoder state dropped
INFO: finished encoder state with errors
INFO: catalog upload error - channel closed  
INFO: Error: inserting chunk on store 'proxmox' failed for 659b821fea4614f527039b8ea5f7e33a56c16a4676cddd75d5a8b48de4c8863d - mkstemp "/mnt/storage_nvme/.chunks/659b/659b821fea4614f527039b8ea5f7e33a56c16a4676cddd75d5a8b48de4c8863d.tmp_XXXXXX" failed: ENOENT: No such file or directory
INFO: restarting vm
INFO: guest is online again after 128 seconds
ERROR: Backup of VM 113 failed - command '/usr/bin/proxmox-backup-client backup '--crypt-mode=encrypt' '--keyfd=17' pct.conf:/var/tmp/vzdumptmp3603829_113/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found '--exclude=/tmp/?*' '--exclude=/var/tmp/?*' '--exclude=/var/run/?*.pid' --backup-type ct --backup-id 113 --backup-time 1762113619 --entries-max 1048576 --repository proxmox@pbs@192.168.1.113:proxmox' failed: exit code 255
INFO: Failed at 2025-11-02 15:02:27
INFO: Backup job finished with errors
TASK ERROR: job errors
 
Last edited:
INFO: Error: inserting chunk on store 'proxmox' failed for 659b821fea4614f527039b8ea5f7e33a56c16a4676cddd75d5a8b48de4c8863d - mkstemp "/mnt/storage_nvme/.chunks/659b/659b821fea4614f527039b8ea5f7e33a56c16a4676cddd75d5a8b48de4c8863d.tmp_XXXXXX" failed: ENOENT: No such file or directory

Let's check this... What is the result of the command
stat /mnt/storage_nvme/.chunks/659b

Is it much different to other subdirectories in .chunks/ ?
Are there any files inside 659b?

Are you able to create some file there, e.g.
touch /mnt/storage_nvme/.chunks/659b/test-can-be-deleted
 
Let's check this... What is the result of the command
stat /mnt/storage_nvme/.chunks/659b

Is it much different to other subdirectories in .chunks/ ?
Are there any files inside 659b?

Are you able to create some file there, e.g.
touch /mnt/storage_nvme/.chunks/659b/test-can-be-deleted
Result on PBS is:
stat: cannot statx '/mnt/storage_nvme/.chunks/659b': No such file or directory
659b doesn't exist. 659a and 65a3 do.
touch: cannot touch '/mnt/storage_nvme/.chunks/659b/test-can-be-deleted': No such file or directory

I made a folder called 659b, and CT113 backed up without a problem! So I looked through the logs for the other troublemaking CTs/VM and made their equivalent folders. They all worked.

So the concern/question now is: why couldn't these chunk folders be created on their own for these specific CTs/VM?
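Since a PBS datastore keeps one directory per 4-hex-digit chunk prefix (0000 through ffff) under .chunks, the manual hunt through the logs can be replaced by a small check. A sketch (the datastore path is the one from this thread; adjust to yours):

```shell
# List expected chunk prefix directories (0000..ffff) that are
# missing from a PBS datastore's .chunks directory.
find_missing_chunk_dirs() {
    chunks_dir="$1"
    for prefix in $(printf '%04x\n' $(seq 0 65535)); do
        [ -d "$chunks_dir/$prefix" ] || echo "$prefix"
    done
}

# Example, using the datastore path from this thread:
# find_missing_chunk_dirs /mnt/storage_nvme/.chunks
```

Any prefix it prints would make every backup that happens to produce a chunk with that hash prefix fail, which matches the pattern of only a few specific guests failing.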
 
So the concern/question now is why couldn't the "chunks" folders be made on their own for these specific CT/VM?
All subdirectories in .chunks/ are created at the very beginning, when the datastore is created.

The missing ones seemingly disappeared later. Maybe this is connected with the filesystem issues (which you successfully fscked later :) ).

You can recreate the missing ones if they vanished in the meantime, paying attention to set the proper owner ("backup", if I remember correctly), group, and permissions (matching the other subdirectories), because the backup process runs as this user and needs to write inside.
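That recreation step can be sketched as a small script. Assumptions to verify first on your own system: the datastore path from this thread, and the owner/mode (check an existing sibling with `ls -ld /mnt/storage_nvme/.chunks/0000` rather than trusting the values here):

```shell
# Recreate any missing chunk prefix directories with a given owner.
# Run as root on the PBS host after confirming owner and mode against
# the existing sibling directories.
restore_chunk_dirs() {
    chunks_dir="$1"
    owner="$2"   # e.g. backup:backup (assumption; verify on your system)
    for prefix in $(printf '%04x\n' $(seq 0 65535)); do
        if [ ! -d "$chunks_dir/$prefix" ]; then
            mkdir -m 755 "$chunks_dir/$prefix"
            chown "$owner" "$chunks_dir/$prefix"
        fi
    done
}

# restore_chunk_dirs /mnt/storage_nvme/.chunks backup:backup
```

Existing directories are left untouched; only the missing prefixes are created, so it is safe to re-run.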
 
Last edited:
All subdirectories in .chunks/ are created at the very beginning, when the datastore is created.

The missing ones seemingly disappeared later. Maybe this is connected with the filesystem issues (which you successfully fscked later :) ).

You can recreate the missing ones if they vanished in the meantime, paying attention to set the proper owner ("backup", if I remember correctly), group, and permissions (matching the other subdirectories), because the backup process runs as this user and needs to write inside.
So if these specific containers/VM were run as their own backup routine or something I could see it being that. They were run as part of a group with everything else passing. I would think (my limited knowledge) that would rule out a permissions issue.

Guess I'd better get email notifications back up and running for my backups. After a year and only 1 other issue, I wasn't too worried.

Thank you for your help!
 
So if these specific containers/VM were run as their own backup routine or something I could see it being that.
I'm afraid I don't quite understand this sentence :).

They were run as part of a group with everything else passing. I would think (my limited knowledge) that would rule out a permissions issue.
Me too, especially since you wrote "No such file or directory" and "659b doesn't exist".

This directory, and some other ones as you've written, were missing. CT113 had the bad luck of holding a piece of data whose hash starts with 659b.

About notifications: of course it's worth having them (and reading them :) ).
It's also good to set up scheduled verification jobs: for every backup after creation, and then again, for instance, 30 days after creation, and so on.