[SOLVED] Error restoring from S3 - mkstemp ENOENT in local cache

jrmann100

New Member
Sep 21, 2025
Hi,

I am very excited to have configured PBS (4.0.15) in my PVE (9.0.10) datacenter with AWS S3 as a backend. While restoring certain (verified) backups works perfectly, restoring one particular VM consistently fails with the following error, which looks like it might be related to the datastore's local cache.

I would welcome any troubleshooting suggestions - please let me know if there is additional context I can provide.

Code:
  Logical volume "vm-206-cloudinit" created.
  Logical volume pve/vm-206-cloudinit changed.
new volume ID is 'local-lvm:vm-206-cloudinit'
  Logical volume "vm-206-disk-0" created.
new volume ID is 'main-lvm:vm-206-disk-0'
restore proxmox backup image: /usr/bin/pbs-restore --repository pve@pbs@192.168.88.2:aws-s3-store vm/106/2025-09-20T18:46:56Z drive-scsi0.img.fidx /dev/main-lvm/vm-206-disk-0 --verbose --format raw
connecting to repository 'pve@pbs@192.168.88.2:aws-s3-store'
using up to 4 threads
open block backend for target '/dev/main-lvm/vm-206-disk-0'
starting to restore snapshot 'vm/106/2025-09-20T18:46:56Z'
download and verify backup index
fetching up to 16 chunks in parallel
restore failed: inserting chunk on store 'aws-s3-store' failed for 2f095998b83080824afb0cc4d993526973288c770348b9d4cc4ce785b80b2ca2 - mkstemp "/mnt/datastore/aws-s3-store-local-cache/.chunks/2f09/2f095998b83080824afb0cc4d993526973288c770348b9d4cc4ce785b80b2ca2.tmp_XXXXXX" failed: ENOENT: No such file or directory
  Logical volume "vm-206-cloudinit" successfully removed.
temporary volume 'local-lvm:vm-206-cloudinit' successfully removed
  Logical volume "vm-206-disk-0" successfully removed.
temporary volume 'main-lvm:vm-206-disk-0' successfully removed
error before or during data restore, some or all disks were not completely restored. VM 206 state is NOT cleaned up.
TASK ERROR: command '/usr/bin/pbs-restore --repository pve@pbs@192.168.88.2:aws-s3-store vm/106/2025-09-20T18:46:56Z drive-scsi0.img.fidx /dev/main-lvm/vm-206-disk-0 --verbose --format raw' failed: exit code 255
 
What is the output of stat /mnt/datastore/aws-s3-store-local-cache/.chunks/2f09/? Did you maybe manually interact with the local store cache?
 
Code:
root@pbs:~# stat /mnt/datastore/aws-s3-store-local-cache/.chunks/2f09/
  File: /mnt/datastore/aws-s3-store-local-cache/.chunks/2f09/
  Size: 4096          Blocks: 8          IO Block: 4096   directory
Device: 252,8    Inode: 800675      Links: 2
Access: (0755/drwxr-xr-x)  Uid: (   34/  backup)   Gid: (   34/  backup)
Access: 2025-09-20 21:54:38.799190396 +0000
Modify: 2025-09-20 21:54:32.100213161 +0000
Change: 2025-09-20 21:54:32.100213161 +0000
 Birth: 2025-09-20 19:20:40.857586788 +0000

After the command failed twice, I looked at .chunks and observed that it was empty. Suspecting the problem was that the prefix folders did not exist, I manually created the chunk directories with this loop:

Bash:
cd /mnt/datastore/aws-s3-store-local-cache/.chunks
for i in $(seq 0 65535); do
    dir=$(printf "%04x" "$i")
    mkdir -p "$dir"
    chown backup:backup "$dir"
    chmod 755 "$dir"
done

This did not result in a different outcome.
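For background on why the error references `.chunks/2f09/`: judging from the log, PBS places each chunk under a prefix directory named after the first four hex digits of its SHA-256 digest. A minimal sketch of that layout, using the digest and store path from the error message above:

```shell
# Derive the cache path the restore expects for the failing chunk.
# Layout (.chunks/<first 4 hex digits>/<digest>) is inferred from the log above.
digest="2f095998b83080824afb0cc4d993526973288c770348b9d4cc4ce785b80b2ca2"
store="/mnt/datastore/aws-s3-store-local-cache"
prefix=$(printf '%.4s' "$digest")   # first four hex digits -> "2f09"
chunk_path="${store}/.chunks/${prefix}/${digest}"
echo "$chunk_path"
```

The mkstemp ENOENT means that prefix directory (or something above it) was missing when the restore tried to write the chunk's temp file there.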
 
Check your permissions on the .chunks folder and adapt the permissions on the folders contained within it.
Both should be owned by user/group backup:backup with permissions 0750. Further, please check for any mount points via mount - e.g. did you mount something over the cache location? Did you put the datastore into maintenance mode offline before acting on the datastore cache?
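The ownership and mode checks above can be scripted. A sketch, demonstrated on a throwaway directory rather than the live store (on the real system you would point it at /mnt/datastore/aws-s3-store-local-cache/.chunks, and additionally test `! -user backup -o ! -group backup` for ownership):

```shell
# Demo on a temporary directory; "2f09" stands in for a real prefix dir.
root=$(mktemp -d)
mkdir -p "$root/2f09"
chmod 0750 "$root/2f09"
# List any prefix directories whose mode is not exactly 0750
bad=$(find "$root" -mindepth 1 -maxdepth 1 -type d ! -perm 0750)
echo "directories with wrong mode: ${bad:-none}"
rm -rf "$root"
```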
 
Thanks for your support - I tried these troubleshooting steps but was unsuccessful.

It seems like the datastore just wasn't initialized correctly. I removed it (while keeping the data on S3) and recreated it with this command:

Code:
proxmox-backup-manager datastore create --backend type=s3,client=aws-s3,bucket=pbs-jnet aws-s3-store /mnt/datastore/aws-s3-store-local-cache

This seemed to do the trick, as it set up all the chunk folders properly from the start.
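One quick sanity check after recreating the datastore is to confirm the full prefix skeleton exists: four hex digits give 16^4 = 65536 directories. A sketch of the arithmetic, with the check against the real path (from this thread) left as a comment since it only runs on the PBS host:

```shell
# A correctly initialized .chunks directory has one subdirectory per
# 4-hex-digit prefix: 16^4 = 65536 in total.
expected=$((16 * 16 * 16 * 16))
echo "expected prefix dirs: $expected"
# On the store from this thread, compare against:
#   find /mnt/datastore/aws-s3-store-local-cache/.chunks -mindepth 1 -maxdepth 1 -type d | wc -l
```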
For now I will consider this issue resolved!