you could also consider not backing up directly onto your external disks, but using them as sync targets instead, with a permanently available (smaller) datastore acting as an intermediate "buffer".
if you "buffer" enough snapshots on the non-external datastore, both external disks get...
the default timeout is 3 hours, sounds like you just went over it? did the backup size increase correspondingly as well? you can increase the lock timeout ('lockwait' in /etc/vzdump.conf), but I'd first try to find out why it takes longer..
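for reference, a minimal /etc/vzdump.conf fragment raising the timeout could look like this ('lockwait' is given in minutes, so the default of 180 matches the 3 hours above; 360 here is just an example value):

```
# /etc/vzdump.conf
# wait up to 6 hours for the lock instead of the default 180 minutes
lockwait: 360
```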
a bridge with no physical port should work just fine (alternatively, a bridge WITH a physical port but no address on the hypervisor should also work, in case that is what you attempted ;)) - could you post your network config and the output of pveversion -v?
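as a sketch, a host-only bridge without any physical port could be defined in /etc/network/interfaces like this (vmbr1 and the address are placeholders, adapt to your setup):

```
auto vmbr1
iface vmbr1 inet static
        address 10.10.10.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
```

for the variant with a physical port but no address on the hypervisor, you'd use "iface vmbrX inet manual" and put the NIC (e.g. eno1) into bridge-ports instead of "none".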
you can either encrypt the storage on which you configure your datastores (e.g. using LUKS/cryptsetup), or use the built-in encryption feature to encrypt individual snapshots at rest (some metadata needs to remain unencrypted, but all the data chunks are fully encrypted)...
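for the built-in encryption, a minimal client-side workflow could look like this (the key path and repository are example values only):

```shell
# create a client-side encryption key (example path)
proxmox-backup-client key create /root/pbs-encryption-key.json

# reference the key when creating a backup
proxmox-backup-client backup root.pxar:/ \
    --repository root@pam@pbs.example.com:mydatastore \
    --keyfile /root/pbs-encryption-key.json
```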
so the folder (and possibly some other things) is likely owned by the default unprivileged user 100033. you can use pct mount to mount the container's FS and correct the owners (all files/dirs owned by user or group 100033 need to be owned by user/group 33 in your case).
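a sketch of that fix, assuming the container has VMID 100 (adapt the VMID and IDs to your case):

```
# mount the (stopped) container's filesystem on the host
pct mount 100
# shift ownership from 100033 back to 33
find /var/lib/lxc/100/rootfs -uid 100033 -exec chown 33 {} +
find /var/lib/lxc/100/rootfs -gid 100033 -exec chgrp 33 {} +
pct unmount 100
```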
storage.cfg is cluster-wide. you can however restrict entries to one or more nodes if the storage is not available everywhere.. so if the ZFS pool/dataset already exists and shouldn't be created from scratch, you can create the corresponding storage.cfg entry in the...
are these VMs especially active? or in some other way different from the rest? the symptoms point to a network issue, but that might just be what shows up in the logs and the root cause is something else..
repository is the combination of user/token ID, PBS host and datastore:
USER@REALM@HOST:DATASTORE, e.g. root@firstname.lastname@example.org:mydatastore. you should find all that information on the PVE side in your storage config entry ;)
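on the PVE side, the matching storage.cfg entry could look roughly like this (all values here are examples) - the repository string is then just username@server:datastore put together:

```
# /etc/pve/storage.cfg (example values)
pbs: mypbs
        server pbs.example.com
        datastore mydatastore
        username root@pam
```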
the knot can be untangled as follows:
pve00: thinpool named foobar
pve01: also a thinpool named foobar
storage.cfg: ONE storage entry for the thinpool foobar
the config is cluster-wide, and PVE knows which storages have the same content on every node (== shared) and...
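a sketch of such a single cluster-wide entry, assuming the thinpool lives in a VG called pve on both nodes (names are examples):

```
# /etc/pve/storage.cfg
lvmthin: foobar
        thinpool foobar
        vgname pve
        content images,rootdir
        nodes pve00,pve01
```

the nodes line restricts the entry to the nodes where the pool actually exists, and PVE then uses the same entry for the local pool on each of them.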