We have the same problem. Network monitoring tools show no issues. Source and target machines are always available and responding, yet we get the "got timeout" error intermittently for no apparent reason.
Not exactly. PBS is running inside different Proxmox PVE VMs. The VMs are replicated using pvesr every 15 minutes, while the PBS backup job runs daily at 00:00.
Here's a picture to better understand our workflow.
Let me know if you need more info!
Thanks
I've updated both to the latest version. The NFS share is mounted via fstab inside the VM:
prd-nas:/mnt/Pool/Backups/PBS /mnt/prd-nas/Backups/PBS nfs defaults 0 0
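For completeness, here is the same entry with the NFS options spelled out instead of relying on defaults (the timeo/retrans/vers values are only an illustrative example, not what we currently run):

prd-nas:/mnt/Pool/Backups/PBS /mnt/prd-nas/Backups/PBS nfs hard,timeo=600,retrans=3,vers=4 0 0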
Both PBS instances are in different VMs on different Proxmox servers running ZFS. They each back up to their own NFS share on different NAS servers. Both locations are replicated to each other every night. There are no other errors in the logs besides the one I mentioned in my OP.
Here are the versions of both...
We have an intermittent sync issue when syncing between 2 PBS instances.
Here's the log:
2021-10-31T03:01:27-04:00: sync group vm/108 failed - unable to acquire lock on snapshot directory "/mnt/nas/vm/108/2021-10-30T04:04:17Z" - internal error - tried creating snapshot that's already in use
It's...
Maybe I don't understand correctly, but this seems like the previous solution you proposed, which applies when RESTORING a backup. What about limiting the speed when the datacenter backup job is running and WRITING to our backup storage?
Hello,
Is there a way to limit the speed at which PBS backs up our servers? I'm asking because it takes all our NIC bandwidth, which makes our VMs pretty much unreachable during the backup window.
I've tried editing /etc/vzdump.conf, but the setting seems to be ignored when using PBS.
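For reference, this is the kind of setting I had added there (the value is just an example, vzdump takes it in KiB/s):

# /etc/vzdump.conf
bwlimit: 100000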
Thanks
I did read the upgrade notes. I was under the impression that the CT was an 18.04 (like all our other CTs), but it turned out I had forgotten to upgrade it and it was still stuck at 16.04. I will redeploy it as a VM instead.
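For anyone else double-checking this: a quick way to see which release a stopped container actually runs is to mount its rootfs from the host (CT 133 is the one from my log below; /var/lib/lxc/133/rootfs is where pct mount placed it on our host, adjust if yours differs):

pct mount 133
cat /var/lib/lxc/133/rootfs/etc/os-release
pct unmount 133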
Thanks for the help
We updated 2 of our hypervisors from 6.4 to 7.0. Some containers started fine, while others refused to start even after a backup/restore.
Here's the error I'm getting:
lxc-start -n 133 -F -l DEBUG -o /tmp/lxc-133.log
Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted
[!!!!!!]...
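For anyone else hitting this: the PVE 7 upgrade notes describe this as the known issue with containers whose systemd is too old for a pure cgroupv2 layout, and mention booting the host with the legacy hierarchy as a stopgap. Roughly like this on a GRUB-booted host (not what we ended up doing, we redeployed the CT as a VM instead):

# /etc/default/grub -- append to the existing kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.unified_cgroup_hierarchy=0"
# then apply and reboot
update-grub
reboot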
I have a garbage collection task on the source datastore only. Is this the right way to do it? I was under the impression that since we sync the original datastore, the changes made by garbage collection would also carry over to the remote datastore.
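To make it concrete: right now garbage collection is only scheduled on the source side, roughly equivalent to this (the datastore names are made up for the example):

proxmox-backup-manager garbage-collection start source-store
proxmox-backup-manager garbage-collection status source-store

There is no equivalent task on the PBS that holds the synced copy.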
Hello,
I have 2 PBS servers with 2 datastores each. Every datastore is synced to the other server like so
From my understanding, both servers should have the same used space since they both hold the same data ?
Right now, Server 1 is holding twice as much data as Server 2:
Server 1...
Not sure I follow you there. Moving and rsyncing the snapshot folder and chunk folder is the same thing, no?
If my VM with ID 105 becomes ID 135 and I want to keep the old backups, migrate them under ID 135, and then sync them to a remote PBS, what is the best way to proceed?
From my...
Gotcha. Regarding my case where I want to keep the backups from an older VM under a different ID, is there a better way to do it, or is moving and then rsyncing the snapshot and chunk folders the best approach, since the sync job ignores the older snapshots?
Makes sense. For the time being I will manually sync it to the remote server with rsync, since deleting and recreating the sync job would re-sync all my backups (a couple of TB), which is not ideal.
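To be explicit about what "by hand" means here, roughly this (paths and the remote hostname are illustrative; I'm assuming the usual datastore layout with vm/<id>/<timestamp> directories next to a shared .chunks directory):

# after renaming the group directory from vm/105 to vm/135 on the source datastore
rsync -a /mnt/nas/vm/135 root@remote-pbs:/mnt/remote-store/vm/
rsync -a /mnt/nas/.chunks/ root@remote-pbs:/mnt/remote-store/.chunks/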
Should I open a request in Bugzilla for this?
Thanks for the help