PBS backups failing

inxsible

I have PBS running in an LXC container on my PVE server. The datastore is mounted via NFS in PVE and made available to the PBS container as a mount point. All of this was working fine until the 8th of Nov, when I updated my TrueNAS server from 13.0-U2 to 13.0-U3. This involved a reboot of the NAS, and since then I am getting the following error on all my backups, including VMs, CTs and desktop backups (via proxmox-backup-client):

Code:
INFO: Downloading previous manifest (Tue Nov  8 08:06:31 2022)
INFO: Upload config file '/var/tmp/vzdumptmp252728_101/etc/vzdump/pct.conf' to 'root@pam@192.168.1.33:8007:pbsDatastore' as pct.conf.blob
INFO: Upload directory '/mnt/vzsnap0' to 'root@pam@192.168.1.33:8007:pbsDatastore' as root.pxar.didx
INFO: catalog upload error - unable to get shared lock - EIO: I/O error
INFO: Error downloading .didx from previous manifest: broken pipe
INFO: Error: broken pipe
ERROR: Backup of VM 101 failed - command 'lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client backup '--crypt-mode=none' pct.conf:/var/tmp/vzdumptmp252728_101/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found '--exclude=/tmp/?*' '--exclude=/var/tmp/?*' '--exclude=/var/run/?*.pid' --backup-type ct --backup-id 101 --backup-time 1668117617 --repository root@pam@192.168.1.33:pbsDatastore' failed: exit code 255
INFO: Failed at 2022-11-10 16:00:30




I searched and read these related threads, but I am not able to understand what needs to be done to fix this:
https://forum.proxmox.com/threads/unable-to-get-exclusive-lock-eio-i-o-error.91303/
https://forum.proxmox.com/threads/b...ommand-error-unable-to-get-shared-lock.81966/

Can someone please help me fix this?
 
I think I know the problem but can't find again the thread fixing it.

The problem could be that NFS on TrueNAS no longer allows creating locks after that upgrade. That could be fixed by changing the mount options of the NFS share when mounting it manually via fstab, but I don't remember exactly what to put in there.

Edit: You could try mounting that NFS share with local_lock=all as an option.
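For example, an fstab entry with that option might look roughly like this (server name and paths here are placeholders, not from your setup):

Code:
# /etc/fstab -- NFS share mounted with local locking enabled (placeholder paths)
truenas:/mnt/tank/backup  /mnt/backup  nfs  vers=3,hard,local_lock=all  0  0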
 
Thanks @Dunuin

I just added the NFS share from the web UI (in Proxmox and in PBS) and don't have the NFS share mounted via fstab. Are you saying that I would have to change how the NFS share is mounted in Proxmox or in PBS?
 
Yup, since you can't change mount options when the share is managed by PVE/PBS.
 
Hi,

You can set mount options if the NFS share is managed by PVE, just not via the UI. On the CLI it's e.g. pvesm set <storage ID> --options local_lock=all.

As for adding the share via the web UI in PBS: I don't think PBS supports mounting NFS storages via the UI/API. Are you sure you didn't mount the NFS share manually there?
 
Thanks, I'll try to add that option and see if that works.

I had the NFS share mounted via the UI in PVE. PBS runs as a container in the PVE instance and is given the same NFS share as a mount point configured in the container's config file, but /etc/fstab in the PBS container is still blank.
Code:
[pbs: ~]── - cat /etc/fstab
# UNCONFIGURED FSTAB FOR BASE SYSTEM
[pbs: ~]── -
Unless you count configuring a mount point as mounting manually.
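The mount point entry in the container's config file looks roughly like this (the CT ID and in-container path here are placeholders, not my exact values):

Code:
# /etc/pve/lxc/<CTID>.conf -- host-mounted NFS path passed into the CT as a mount point
mp0: /mnt/pve/pbsDatastore,mp=/mnt/datastore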
 
I tried adding the local_lock=all option, but that doesn't seem to work.
Here's my storage config before and after adding the option via the command:
Code:
 pvesm set pbsDatastore --options local_lock=all

Code:
[proxmox: ~]── - cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content iso,vztmpl,backup

zfspool: local-zfs
    pool rpool/data
    content rootdir,images
    sparse 1

nfs: media
    export /mnt/tank/media
    path /mnt/pve/media
    server freenas
    content iso
    prune-backups keep-all=1

dir: downloads
    path /mnt/pve/media/downloads
    content iso,vztmpl,backup
    prune-backups keep-all=1
    shared 0

nfs: pbsDatastore
    export /mnt/tank/pbsDatastore
    path /mnt/pve/pbsDatastore
    server freenas
    content iso
    prune-backups keep-all=1

pbs: pbs
    datastore pbsDatastore
    server 192.168.1.33
    content backup
    fingerprint xxxxxxxxxxxxxxxxx
    prune-backups keep-all=1
    username root@pam

[proxmox: ~]── -

Code:
[proxmox: ~]── - cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content iso,vztmpl,backup

zfspool: local-zfs
    pool rpool/data
    content rootdir,images
    sparse 1

nfs: media
    export /mnt/tank/media
    path /mnt/pve/media
    server freenas
    content iso
    prune-backups keep-all=1

dir: downloads
    path /mnt/pve/media/downloads
    content iso,vztmpl,backup
    prune-backups keep-all=1
    shared 0

nfs: pbsDatastore
    export /mnt/tank/pbsDatastore
    path /mnt/pve/pbsDatastore
    server freenas
    content iso
    options local_lock=all
    prune-backups keep-all=1

pbs: pbs
    datastore pbsDatastore
    server 192.168.1.33
    content backup
    fingerprint xxxxxxxxxxxxxxxxx
    prune-backups keep-all=1
    username root@pam

[proxmox: ~]── -

I then ran my backup cron job and it failed. So I rebooted the PBS container and tried again, with the same result. Do I have to reboot the entire PVE server?
 
You can check with mount whether the option is actually set. I think you need to unmount the NFS share after setting the option; it should get re-mounted automatically with the new option after a few seconds.
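Something like this, using the path from your storage.cfg:

Code:
umount /mnt/pve/pbsDatastore     # PVE should re-mount it automatically within a few seconds
mount | grep pbsDatastore        # verify that local_lock=all now shows up in the options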
 
My mistake, I should have mentioned that I did check that the option was set, using
Code:
 mount | grep -i freenas
and the result was
Code:
[proxmox: ~]── - mount | grep -i freenas
freenas:/mnt/tank/media on /mnt/pve/media type nfs (rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.1.3,mountvers=3,mountport=838,mountproto=udp,local_lock=none,addr=192.168.1.3)
freenas:/mnt/tank/pbsDatastore on /mnt/pve/pbsDatastore type nfs (rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.1.3,mountvers=3,mountport=762,mountproto=udp,local_lock=all,addr=192.168.1.3)
[proxmox: ~]── -

So the local_lock option is correctly set on the pbsDatastore share. I also have another share from the same TrueNAS server, which I use as a media share for my Jellyfin container; I didn't change the option on that one, because PBS doesn't know about that share anyway.

I just ran the job once again and the error is still the same.

Code:
....
....
....
INFO: catalog upload error - unable to get shared lock - EIO: I/O error
INFO: Error downloading .didx from previous manifest: broken pipe
INFO: Error: broken pipe
INFO: cleanup temporary 'vzdump' snapshot
ERROR: Backup of VM 111 failed - command 'lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client backup '--crypt-mode=none' pct.conf:/var/tmp/vzdumptmp3129866_111/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found '--exclude=/tmp/?*' '--exclude=/var/tmp/?*' '--exclude=/var/run/?*.pid' --backup-type ct --backup-id 111 --backup-time 1668783817 --repository root@pam@192.168.1.33:pbsDatastore' failed: exit code 255
INFO: Failed at 2022-11-18 09:03:51
INFO: Backup job finished with errors
TASK ERROR: job errors
 
I also have an Arch Linux desktop which I back up to PBS. I removed the systemd automount for the pbsDatastore share and manually mounted it with the local_lock=all option; I checked, and the option was set correctly. But running a backup resulted in the same error. So it's not working from CTs, VMs, or other clients either.
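For reference, the manual mount on the desktop was along these lines (the local mount point is a placeholder):

Code:
mount -t nfs -o vers=3,local_lock=all freenas:/mnt/tank/pbsDatastore /mnt/pbsDatastore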

Are there other options that need to be set as well?
 
Any other suggestions here please?

My pveversion just in case:
Code:
[proxmox: ~]── - pveversion -v
proxmox-ve: 7.2-1 (running kernel: 5.13.19-2-pve)
pve-manager: 7.2-11 (running version: 7.2-11/b76d3178)
pve-kernel-5.15: 7.2-14
pve-kernel-helper: 7.2-14
pve-kernel-5.13: 7.1-9
pve-kernel-5.15.74-1-pve: 5.15.74-1
pve-kernel-5.15.64-1-pve: 5.15.64-1
pve-kernel-5.15.60-1-pve: 5.15.60-1
pve-kernel-5.15.53-1-pve: 5.15.53-1
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 15.2.15-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve1
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-4
libpve-guest-common-perl: 4.1-4
libpve-http-server-perl: 4.1-5
libpve-storage-perl: 7.2-10
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.7-1
proxmox-backup-file-restore: 2.2.7-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-2
pve-container: 4.2-3
pve-docs: 7.2-2
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-6
pve-firmware: 3.5-6
pve-ha-manager: 3.4.0
pve-i18n: 2.7-2
pve-qemu-kvm: 7.0.0-4
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-5
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+2
vncterm: 1.7-1
zfsutils-linux: 2.1.6-pve1
[proxmox: ~]── -

and my proxmox-backup-server details:
Code:
[pbs: ~]── - proxmox-backup-manager versions --verbose
proxmox-backup                unknown      running kernel: 5.13.19-2-pve
proxmox-backup-server         2.2.7-1      running version: 2.2.7
ifupdown2                     3.1.0-1+pmx3
libjs-extjs                   7.0.0-1
proxmox-backup-docs           2.2.7-1
proxmox-backup-client         unknown
proxmox-mini-journalreader    1.2-1
proxmox-offline-mirror-helper 0.5.0-1
proxmox-widget-toolkit        3.5.1
pve-xtermjs                   4.16.0-1
smartmontools                 7.2-pve3
zfsutils-linux                2.1.6-pve1
[pbs: ~]── -
 
Can you create files/folders on the share from within the container? If nothing else works, you could still try creating a privileged container, enabling the NFS feature for it, and mounting the NFS share directly in the container. That said, running PBS in a container is not really a supported setup in the first place. Some people on the forum reported using a PBS VM (which also isn't really supported, but might be less trouble than a container). Or you could install PBS alongside Proxmox VE on the host.
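Enabling the NFS feature on the host could look like this (the CT ID and mount target are placeholders; the export is taken from your storage.cfg above):

Code:
# allow the privileged container (hypothetical CT ID 200) to mount NFS filesystems
pct set 200 --features mount=nfs
# then, inside the container:
mount -t nfs freenas:/mnt/tank/pbsDatastore /mnt/datastore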
 
Thanks for your reply @fiona

I can create/edit/delete files and folders from within the container; I tried this via SSH and from the browser console.
As for the setup not being supported: I understand that. Installing PBS in a container just seemed like a good idea in terms of separation of concerns, especially because I was evaluating PBS at the time. I didn't want to mix PVE and PBS together, but I'll look into other options now.

I'll try a privileged container, but I am confused as to why it suddenly stopped working from my unprivileged container.
 
I tried a new privileged container over the Thanksgiving weekend and it exhibited the same behavior.

Today, however, I updated both my PVE host and my PBS container. The packages updated in the PBS container were:
Code:
Calculating upgrade... Done
The following NEW packages will be installed:
  proxmox-mail-forward
The following packages will be upgraded:
  pbs-i18n proxmox-archive-keyring proxmox-backup-docs proxmox-backup-server proxmox-widget-toolkit
5 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 30.1 MB of archives.

And lo and behold, my old unprivileged container started working again; the systemd services on my Arch Linux desktop had already sent a new backup to it. I then manually ran a single container backup, which completed fine, and then ran my backup jobs, which also executed without errors.

So I believe it was one of the updated packages above (most likely proxmox-backup-server) that fixed whatever regression had occurred in the previous version.
 
I wonder whether the backups will still work if I remount the NFS share without the local_lock=all option.

I'll test that after I back up all of my containers once, as I have not had a backup in almost a month. :)
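If I read the pvesm docs correctly, removing the option should be something like this (I'm assuming --delete works for the options property the same way it does for other storage settings):

Code:
pvesm set pbsDatastore --delete options
umount /mnt/pve/pbsDatastore    # let PVE re-mount the share with its default options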
 
Removed the local_lock=all option from the pbsDatastore NFS mount and the backups still work. I am back to where I started: everything works as it did before the Nov 8 upgrade. The upgrade I did on Nov 29 restored everything to how it was.
 
