if you don't use fleecing, the backup process sits directly between guest I/O and the actual volumes. if the PBS system is too slow, that can cause I/O issues inside the VM. without more details, nobody will be able to tell you whether that is the case here, though.
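as a rough sketch, enabling fleecing on the CLI could look like this (assuming a PVE version recent enough to support backup fleecing; the storage IDs "pbs-store" and "local-lvm" are just placeholders - check man vzdump for the exact syntax on your version):

# hypothetical example: back up VM 120 with fleecing enabled, so slow
# PBS writes no longer stall guest I/O directly; the fleecing images
# are written to a fast local storage instead
vzdump 120 --storage pbs-store --fleecing enabled=1,storage=local-lvm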
given that this is over 10-year-old hardware - maybe its UEFI support is buggy? you could try disabling EFI boot in the system firmware and installing in legacy/CSM mode.
the reasons why DCOs were not chosen were already given above by Thomas. a DCO also doesn't prevent a project from being stuck on a license it no longer wants, since contributors from 10 years ago cannot agree to such a change when they are no longer available/reachable for one reason or...
that is not true. see point 2.3 of the CLA, which states:
So anything you contribute today to any of our AGPLv3+ repos may be relicensed in the future, but must also still be licensed AGPLv3+ in any case.
the only thing that counts is whether the VM has been running continuously (live migration is okay as well) since the last backup snapshot was made. whether you reboot the PVE node or the PBS system does not matter.
you can see in your log:
900: 2024-10-17 12:03:23 INFO: 1% (6.1 GiB of 600.0 GiB) in 3m 29s...
the first backup of a VM, and every backup after the VM has been stopped, will be "slow" like that. there are some other factors that can also invalidate the bitmap (like the last snapshot on the PBS side being corrupt, or switching encryption mode or key, or ..).
yes, the issue is with the source dataset, as that is the one getting snapshotted for replication.
zfs-share/vm-120-disk-0 refreservation 65.0G local
zfs-share/vm-120-disk-0 usedbydataset 48.5G -
so your dataset currently uses 48.5G, and its refreservation is 65G. that means creating a...
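as a rough sketch of how to check this (the dataset name is taken from your output, everything else is just an example - adjust to your setup):

# show how much the zvol uses vs. how much is guaranteed via the refreservation
zfs get usedbydataset,refreservation,available zfs-share/vm-120-disk-0
# a snapshot must keep the refreservation intact, so roughly the currently
# used data needs to fit into free space again on top of it. if you are okay
# with losing the space guarantee (i.e. a sparse zvol), dropping the
# refreservation lets the snapshot be created with less free space:
zfs set refreservation=none zfs-share/vm-120-disk-0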
I think the issue with that is that we (as developers) are not convinced that the approach by CRIU will ever bear fruit in a meaningful way for generic containers. by its very design it can only ever work with a ton of restrictions and footnotes, and not for "arbitrary groups of processes doing...
on https://bugzilla.proxmox.com :)
I don't think this should be too hard to implement, and the use case is a valid one (although it could still be confusing if multiple such jobs covering the same user exist, or a global and a user one with conflicting settings).
for most setups, setting up pruning once for the whole datastore works, yes. and it's far easier to understand than mixing and matching different levels and sources of pruning. we could of course implement per-user prune jobs as well, and let those only affect owned groups of that user, but that...
if you want to set up datastore-wide server-side pruning, you need a highly privileged user. but that only needs to be set up once - the user can still manually prune their own groups. there is no scheduled pruning limited to a user's owned backups; that can only be done manually (via the...
if you give a user DatastorePrune, they can only prune their own backups. a prune job will affect the whole datastore (or namespace), so it requires more privileges.
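to illustrate the manual variant, pruning a single owned group could look like this (repository, group and keep values are just examples):

# dry-run first to see what would be removed, then drop --dry-run to actually prune
proxmox-backup-client prune vm/120 \
  --repository alice@pbs@pbs.example.com:store1 \
  --keep-last 3 --keep-weekly 4 --dry-run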
yes, the vzdump.conf file just sets the defaults in the backend in case no explicit value for that option is given. since the UI will always force you to select a storage, the storage option in vzdump.conf won't ever have an effect there. it will only affect invocations of the vzdump CLI tool or...
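for example, a minimal /etc/vzdump.conf could look like this (the values are just placeholders, see man vzdump for the full option list):

# backend defaults - only used when the option is not given explicitly
storage: local
mode: snapshot
compress: zstd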