Backup error: pipelined request failed

htcadmin

I have some servers with VMs on Proxmox VE 6.3-2 and a VM running PBS 1.0-11.
On the PBS VM I mount CIFS storage (a Samba share) with mount -o uid=34,gid=34 (so the backup user has the right permissions).
I couldn't upgrade my VE from 5.x to 6.x in place, so I migrated the VMs rather crudely: I copied each VM's disks, created a new VM on the new Proxmox server (v6.3-2) with the same hardware options, then deleted the default disk and renamed my copied disk so it was attached in its place. The VMs start correctly, and I can back them up with the default VE tool (snapshot or stop mode, with compression), but if I try to back them up with PBS I get the following error:
Code:
INFO: starting new backup job: vzdump 105 --node mrogotnevProx --mode snapshot --remove 0 --storage PBS
INFO: Starting Backup of VM 105 (qemu)
INFO: Backup started at 2021-03-24 14:47:54
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: zabtest
INFO: include disk 'scsi0' 'forVMs:vm-105-disk-0' 32G
INFO: creating Proxmox Backup Server archive 'vm/105/2021-03-24T10:47:54Z'
INFO: starting kvm to execute backup task
INFO: started backup task '0938d952-f7b3-4198-9c16-d0419edc2587'
INFO: scsi0: dirty-bitmap status: created new
INFO: 0% (248.0 MiB of 32.0 GiB) in 3s, read: 82.7 MiB/s, write: 44.0 MiB/s
ERROR: backup write data failed: command error: write_data upload error: pipelined request failed: No such file or directory (os error 2)
INFO: aborting backup job
INFO: stopping kvm after backup task
trying to acquire lock...
OK
ERROR: Backup of VM 105 failed - backup write data failed: command error: write_data upload error: pipelined request failed: No such file or directory (os error 2)
INFO: Failed at 2021-03-24 14:47:59
INFO: Backup job finished with errors
TASK ERROR: job errors
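As an aside, the mount options must be one comma-separated list with no spaces, and the share should be owned by PBS's backup user (uid/gid 34). A minimal sketch, assuming a hypothetical //nas/backups share and /mnt/pbs-datastore mount point:
Code:
# Sketch: mount a CIFS share for use as a PBS datastore location.
# //nas/backups and /mnt/pbs-datastore are placeholder names;
# uid=34,gid=34 maps file ownership to the "backup" user PBS runs as.
mount -t cifs -o username=backup,uid=34,gid=34 //nas/backups /mnt/pbs-datastore

# Roughly equivalent /etc/fstab line (the credentials file is an
# assumption, used to keep the password out of the command line):
# //nas/backups /mnt/pbs-datastore cifs credentials=/root/.smbcred,uid=34,gid=34 0 0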
 
Hello,

I receive the same error on some VMs.
My setup: a cluster of three hosts (3 to 6 VMs on each node); on every host there is one VM that can't be backed up.
PBS runs as a VM on one of the cluster nodes. The storage is mounted via CIFS with this command:
mount -t cifs -o user=backup,pass=xxxx,domain=domain,uid=34,noforceuid,gid=34,noforcegid //servername/backup /mnt/backups/

For example, this VM has 5 GB of RAM and 16 GB of disk:
Code:
INFO:  34% (43.2 GiB of 127.0 GiB) in 50m 36s, read: 25.3 MiB/s, write: 19.1 MiB/s
INFO:  35% (44.5 GiB of 127.0 GiB) in 51m 29s, read: 24.9 MiB/s, write: 10.8 MiB/s
INFO:  35% (45.1 GiB of 127.0 GiB) in 52m 23s, read: 11.6 MiB/s, write: 9.0 MiB/s
ERROR: backup write data failed: command error: write_data upload error: pipelined request failed: Atomic rename on store 'backup_share' failed for chunk 4e5eae6dae838eda44d6303653e83b3292f0fb7537a9832717df1c7402888597 - Resource temporarily unavailable (os error 11)
INFO: aborting backup job
ERROR: Backup of VM 105 failed - backup write data failed: command error: write_data upload error: pipelined request failed: Atomic rename on store 'backup_share' failed for chunk 4e5eae6dae838eda44d6303653e83b3292f0fb7537a9832717df1c7402888597 - Resource temporarily unavailable (os error 11)
INFO: Failed at 2021-04-08 01:06:04
INFO: Starting Backup of VM 111 (qemu)
INFO: Backup started at 2021-04-08 01:06:04
INFO: status = running

I had another PBS on a Hyper-V host with a local disk mounted. That worked great with the same VM setup.

At the same time, other backups are running.

Is there any advice?
 
Your CIFS storage had a hiccup and a rename operation failed, causing the backup to fail. You need to use reliable storage as the PBS datastore.
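If a local disk is available, a datastore on it can be created with proxmox-backup-manager; a minimal sketch (the datastore name and path are placeholders):
Code:
# Sketch: create a PBS datastore on reliable, locally attached storage.
# "store-local" and /mnt/local-disk/pbs-store are example names only.
proxmox-backup-manager datastore create store-local /mnt/local-disk/pbs-store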
 
Thanks for the answer.

I rescheduled my backups; most of them are working now.
But I have another problem with a backup.
This is a backup of a Windows 10 VM with the guest agent installed:
Code:
INFO:  95% (24.7 GiB of 26.0 GiB) in 22m 59s, read: 14.9 MiB/s, write: 14.9 MiB/s
INFO:  96% (24.9 GiB of 26.0 GiB) in 23m 16s, read: 16.5 MiB/s, write: 16.5 MiB/s
INFO:  97% (25.2 GiB of 26.0 GiB) in 23m 33s, read: 14.6 MiB/s, write: 14.6 MiB/s
INFO:  98% (25.4 GiB of 26.0 GiB) in 23m 53s, read: 13.4 MiB/s, write: 13.4 MiB/s
INFO:  99% (25.7 GiB of 26.0 GiB) in 24m 12s, read: 14.1 MiB/s, write: 14.1 MiB/s
INFO:  99% (25.8 GiB of 26.0 GiB) in 25m 12s, read: 1.4 MiB/s, write: 1.4 MiB/s
ERROR: backup write data failed: command error: write_data upload error: pipelined request failed: Bad file descriptor (os error 9)
INFO: aborting backup job
ERROR: Backup of VM 101 failed - backup write data failed: command error: write_data upload error: pipelined request failed: Bad file descriptor (os error 9)
INFO: Failed at 2021-04-10 06:25:22
INFO: Starting Backup of VM 106 (qemu)
INFO: Backup started at 2021-04-10 06:25:22
INFO: status = running

What does "Bad file descriptor (os error 9)" mean?
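As an aside, the number in parentheses is a standard Linux errno value, so it can be decoded locally; a quick sketch (assumes python3 is available):
Code:
# Decode the errno values seen in this thread:
python3 -c 'import os; print(os.strerror(9))'    # Bad file descriptor
python3 -c 'import os; print(os.strerror(2))'    # No such file or directory
python3 -c 'import os; print(os.strerror(11))'   # Resource temporarily unavailable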

best regards
 
That's hard to tell. Any indication on the server side?
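For example, places one might look on the PBS side (a sketch; paths and unit names per a default PBS install):
Code:
# Server-side places to check for a matching error around the failure time:
journalctl -u proxmox-backup-proxy.service --since "2021-04-10"
journalctl -u proxmox-backup.service --since "2021-04-10"
ls /var/log/proxmox-backup/tasks/    # per-task log files
dmesg | grep -i cifs                 # kernel messages from the CIFS mount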
 
Hello,

I'm sorry, no.
After this backup, another backup to the same destination starts within the same task, and that one works fine.
 
Hi all,

I've got a problem backing up one unprivileged container (other, privileged containers and VMs are backing up fine). It's a fairly small container, only a couple of GBs.

When the container is stopped I get the error below (bad file descriptor, exit code 2); the same error occurs when a snapshot is taken, etc.:
Code:
INFO: starting new backup job: vzdump 100 --compress 0 --mode stop --storage gdrive --node vmh --remove 0
INFO: filesystem type on dumpdir is 'fuse.rclone' - using /var/tmp/vzdumptmp2462827_100 for temporary files
INFO: Starting Backup of VM 100 (lxc)
INFO: Backup started at 2022-01-13 12:32:55
INFO: status = running
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: CT Name: srv-pihole
INFO: including mount point rootfs ('/') in backup
INFO: stopping virtual guest
INFO: creating vzdump archive '/mnt/pve/gdrive/dump/vzdump-lxc-100-2022_01_13-12_32_55.tar'
INFO: Total bytes written: 0 (0B, ?/s)
INFO: tar: -: Cannot write: Bad file descriptor
INFO: tar: Error is not recoverable: exiting now
INFO: restarting vm
INFO: guest is online again after 6 seconds
ERROR: Backup of VM 100 failed - command 'set -o pipefail && lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- tar cpf - --totals --one-file-system -p --sparse --numeric-owner --acls --xattrs '--xattrs-include=user.*' '--xattrs-include=security.capability' '--warning=no-file-ignored' '--warning=no-xattr-write' --one-file-system '--warning=no-file-ignored' '--directory=/var/tmp/vzdumptmp2462827_100' ./etc/vzdump/pct.conf ./etc/vzdump/pct.fw '--directory=/mnt/vzsnap0' --no-anchored '--exclude=lost+found' --anchored '--exclude=./tmp/?*' '--exclude=./var/tmp/?*' '--exclude=./var/run/?*.pid' ./ >/mnt/pve/gdrive/dump/vzdump-lxc-100-2022_01_13-12_32_55.dat' failed: exit code 2
INFO: Failed at 2022-01-13 12:33:02
INFO: Backup job finished with errors
TASK ERROR: job errors

Any thoughts?

Thanks
 
given the log I assume you are trying to backup directly onto some sort of cloud storage - that is a bad idea, and will likely break in various ways. I'd suggest backing up to local storage and using a hookscript to move the finished backup archive to its final destination.
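A minimal sketch of such a hookscript, assuming the /mnt/pve/gdrive rclone mount from the log above (recent vzdump versions export the archive path as TARGET, older ones as TARFILE; verify on your version):
Code:
#!/bin/bash
# Sketch of a vzdump hookscript: after each guest's backup finishes,
# move the archive from local storage to the cloud mount.
# vzdump calls this script with the phase name as $1.
if [ "$1" = "backup-end" ]; then
    mv "${TARGET:-$TARFILE}" /mnt/pve/gdrive/dump/
fi
exit 0

It would then be referenced via the script option in /etc/vzdump.conf, or per job with vzdump's --script flag.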
 
Thanks @fabian, other CTs and VMs are backing up just fine to that cloud storage. Anyway, I've tried backing it up to local storage and get the same problem:

Code:
INFO: starting new backup job: vzdump 100 --compress 0 --storage local --node vmh --mode snapshot --remove 0
INFO: Starting Backup of VM 100 (lxc)
INFO: Backup started at 2022-01-13 13:36:34
INFO: status = running
INFO: CT Name: srv-pihole
INFO: including mount point rootfs ('/') in backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: create storage snapshot 'vzdump'
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "snap_vm-100-disk-0_vzdump" created.
WARNING: Sum of all thin volume sizes (155.00 GiB) exceeds the size of thin pool pve/data and the amount of free space in volume group (<16.00 GiB).
INFO: creating vzdump archive '/var/lib/vz/dump/vzdump-lxc-100-2022_01_13-13_36_34.tar'
INFO: tar: ./var/log/syslog.1: Read error at byte 0, while reading 3072 bytes: Input/output error
INFO: tar: ./var/log/auth.log.1: Read error at byte 0, while reading 5632 bytes: Input/output error
INFO: Total bytes written: 2456719360 (2.3GiB, 88MiB/s)
INFO: tar: Exiting with failure status due to previous errors
INFO: cleanup temporary 'vzdump' snapshot
Logical volume "snap_vm-100-disk-0_vzdump" successfully removed
ERROR: Backup of VM 100 failed - command 'set -o pipefail && lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- tar cpf - --totals --one-file-system -p --sparse --numeric-owner --acls --xattrs '--xattrs-include=user.*' '--xattrs-include=security.capability' '--warning=no-file-ignored' '--warning=no-xattr-write' --one-file-system '--warning=no-file-ignored' '--directory=/var/lib/vz/dump/vzdump-lxc-100-2022_01_13-13_36_34.tmp' ./etc/vzdump/pct.conf ./etc/vzdump/pct.fw '--directory=/mnt/vzsnap0' --no-anchored '--exclude=lost+found' --anchored '--exclude=./tmp/?*' '--exclude=./var/tmp/?*' '--exclude=./var/run/?*.pid' ./ >/var/lib/vz/dump/vzdump-lxc-100-2022_01_13-13_36_34.dat' failed: exit code 2
INFO: Failed at 2022-01-13 13:37:02
INFO: Backup job finished with errors
TASK ERROR: job errors
 
That's a different error. Are you sure your disks are alright?
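Some example checks for the underlying storage (a sketch; device names are placeholders):
Code:
dmesg | grep -i -E 'i/o error|blk_update_request'   # kernel-level I/O errors
smartctl -a /dev/sda                                # SMART health (smartmontools package)
lvs -a                                              # thin pool / snapshot usage
pct fsck 100                                        # fsck the CT volume (container must be stopped)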
 
