This is why Proxmox Backup Server exists.
Every backup appears as a full backup, but the data is always deduplicated, so the space efficiency is the same as with snapshots.
Backups are really fast after the first one, thanks to the QEMU dirty bitmap, which allows reading and backing up only changed...
Snapshots are not backups.
Proxmox Backup Server is made for daily, or even more frequent, backups, as only differential data is saved on the backup drive, much like your daily snapshot; then to test older backups, restore the wanted backup as a new VM.
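As a minimal sketch, restoring to a fresh VM ID from the PVE shell could look like this (the storage name "pbs", the VM IDs, and the snapshot timestamp are placeholders):

# list available archives on the PBS storage
pvesm list pbs
# restore the backup of VM 100 as a new VM 101, leaving the original untouched
qmrestore pbs:backup/vm/100/2024-01-01T01:00:00Z 101 --storage local-lvm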
Not yet using fleecing.
Try and report.
The recommended way is a local PBS, with an external PBS pulling from it via a remote sync job.
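A rough sketch of the pull side on the external PBS (the remote name, host, auth-id, datastore names, and schedule are all assumptions):

# register the local PBS as a remote on the external PBS
proxmox-backup-manager remote create local-pbs --host 192.0.2.10 --auth-id 'sync@pbs' --password 'SECRET' --fingerprint '<cert fingerprint>'
# create a sync job that pulls the remote datastore into the local one, daily
proxmox-backup-manager sync-job create pull-local --remote local-pbs --remote-store main --store offsite --schedule daily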
Fleecing is mostly recommended to help with slow storage, like an HDD-based local PBS.
There isn't a recommended size, because it depends: the worst case is a VM writing a lot of data while the connection to PBS is too slow.
If the fleecing image runs full, the backup will fail.
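As a sketch, fleecing can be enabled per backup job; for a manual vzdump run it could look like this (storage names are assumptions):

# write fleecing images to a fast local storage while backing up to PBS
vzdump 100 --storage pbs --fleecing enabled=1,storage=local-lvm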
From the docs:
In some masquerade setups with firewall enabled, conntrack zones might be needed for outgoing connections.
Otherwise the firewall could block outgoing connections since they will prefer the POSTROUTING of the VM bridge (and not...
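The workaround the docs suggest for that setup (reproduced from memory, so verify against the masquerading section of your PVE version) is a raw-table rule that puts the bridge's traffic into its own conntrack zone:

iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1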
Guys, short update on this:
Looks like Acronis has acknowledged this is something related to the boot media; at least their R&D team is working on the case now.
I'll let you know the outcome when the case is closed.
You're right, but this can be solved on the client side, e.g. with:
proxmox-backup-client snapshot list ct/ID --output-format json | jq 'sort_by(."backup-time")'
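If you want the newest snapshot first instead (just a variant of the same idea), reverse the sorted output:

proxmox-backup-client snapshot list ct/ID --output-format json | jq 'sort_by(."backup-time") | reverse'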
You can of course open a bug report (https://bugzilla.proxmox.com), but I guess this...
The subnet specifications you should get from your IP provider; you can't just change them willy-nilly. If the IPs are both in the same subnet, you can receive traffic on both interfaces, but traffic will always be sent out the interface with the...
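To check which interface the kernel would actually use for outgoing traffic (the destination address below is just an example), you can ask the routing table directly:

# show the route, and thus the egress interface, chosen for a given destination
ip route get 203.0.113.1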
EDIT: indeed, in a cluster, /etc/pve is synchronised from the other hosts.
For non-cluster: you can't, as /etc/pve is a mount point for a filesystem whose data is stored in a database.
So it can't be restored offline this way...
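For context, the data behind /etc/pve lives in an sqlite database (the path below is the usual location, verify on your system); a rough sketch of restoring it with the cluster filesystem stopped:

# stop the cluster filesystem so /etc/pve is unmounted
systemctl stop pve-cluster
# restore the backing database from a backup copy (backup path is an assumption)
cp /backup/config.db /var/lib/pve-cluster/config.db
# start pve-cluster again; /etc/pve is repopulated from the database
systemctl start pve-cluster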
Memory is not reserved at VM start (unless you define static memory hugepages in the VM conf directly), so it can be dynamically allocated to a different VM.
Then, once a VM is reserving a memory page, it's reserved. Note that Windows is allocating all...
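As a sketch, statically pinning a VM's memory with hugepages could look like this (the VM ID and sizes are assumptions):

# back the VM's 8 GiB with 2 MiB hugepages, reserved up front, and disable ballooning
qm set 100 --memory 8192 --hugepages 2 --balloon 0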
You can see this in the "chunk_upload_stats" member of the index.json blob (the backup manifest, contained in each snapshot). This is added by the server when the backup snapshot is finalized. The numbers there *only count what has been uploaded...
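A sketch of peeking at the manifest on the PBS host (the datastore path and snapshot name are placeholders, proxmox-backup-debug must be installed, and the exact JSON location of the field may differ by version):

# decode the manifest blob of a snapshot and pretty-print it
proxmox-backup-debug inspect file /datastore/vm/100/2024-01-01T01:00:00Z/index.json.blob --decode - | jq .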