As Spirit said above, it needs to re-read the whole VM (and it does that quickly, at about 130 MB/s), then write only the diffs to PBS.
On the next backup (with dirty bitmaps working), it won't have to re-read the whole VM; it'll use the dirty bitmaps.
Looking at the log you sent, this backup is a "full" backup (or at least a full re-read of the VM, as others suggested), not a fast differential backup using dirty bitmaps (which are destroyed on VM restart).
Your full backup:
INFO: efidisk0: dirty-bitmap status: created new...
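For comparison, on a later run where the bitmap survived (no VM restart in between), the log should report the existing bitmap instead; from memory it looks something like this (exact wording may differ between versions):
INFO: efidisk0: dirty-bitmap status: OK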
They're nested if you have a parent repository that points to the zpool.
I don't think the repo and the dataset are tied together (i.e. if you remove a repo, it won't touch the dataset/pool), because the repo can be something other than a ZFS pool/dataset (an NFS mount, local LVM or a USB disk).
If your initial zpool is called "myPool" and you want to create two datasets named "repo_one" and "repo_two", then this should work:
# zfs create myPool/repo_one
# zfs create myPool/repo_two
Ultra quick reference...
I don't get why; I might be missing something.
You create a pool, then you create datasets inside the pool (one per datastore) and you map each datastore to one dataset.
You don't have to know the size, thanks to the datasets.
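A minimal sketch of the whole flow (assuming the pool is named "myPool" and its datasets mount under /myPool; the datastore names are made up):
# zfs create myPool/repo_one
# zfs create myPool/repo_two
# proxmox-backup-manager datastore create repo_one /myPool/repo_one
# proxmox-backup-manager datastore create repo_two /myPool/repo_two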
Thank you.
So if I understand correctly, if I don't check "remove vanished" and set up GC differently from the source PBS (120 days instead of 60 days, for example), I could have 60 days of backup retention on the source PBS and 120 on the remote one?
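Something like this, assuming prune options are kept on the datastore config as in PBS 1.x (datastore names are made up).
On the source PBS:
# proxmox-backup-manager datastore update store_a --keep-daily 60
On the remote PBS:
# proxmox-backup-manager datastore update store_a_copy --keep-daily 120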
Hi.
If I have several PBS in different places, can I remote sync all of them to a single remote PBS somewhere else?
From what I understood of remote sync in the documentation, all I have to do is create one datastore per source PBS on the single remote PBS.
Then set one sync job per...
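A minimal sketch of that setup (hostnames, IDs and values are made up; flags as documented for PBS 1.x).
On the central PBS, declare one remote per source PBS:
# proxmox-backup-manager remote create site-a --host pbs-a.example.com --userid sync@pbs --password 'SECRET' --fingerprint '<fingerprint of pbs-a>'
Then one sync job pulling that remote's datastore into its own local datastore:
# proxmox-backup-manager sync-job create sync-site-a --remote site-a --remote-store store_a --store site_a_copy --schedule 'daily'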
Looks like I have to do this too, because I'm facing the same issue: >24 hours of garbage collection for 9 TB of backup data on a 12-disk RAIDZ3 pool.
Now on the hunt for SSD/NVMe to do the job.
I've just made the same benchmark on a new cluster I've setup.
Three servers with Proxmox 6.3-4.
Each server has two E5-2697 v3 CPUs, 384 GB of RAM, and eight 3.8 TB Hynix SSDs as OSDs (same as the previous bench but twice the size).
The Ceph LAN is meshed 25 Gbps InfiniBand in broadcast mode (see here...
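For reference, the kind of command I'd expect behind such a benchmark (a guess on my part, assuming a plain rados bench against a test pool named "bench"):
# rados bench -p bench 60 write --no-cleanup
# rados -p bench cleanup
(60 seconds of 4 MB object writes, then cleanup of the test objects.)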
Funny one.
I renamed the 202 folder on the NFS storage to 020, just to see if it would do anything.
It fixed the issue: the VM can now be migrated.
Looks like we have to manually remove any leftover folder on the old storage, and the migration task does not rely on the "qm config" of the VM but parses...
Hello.
I have this issue with two VMs (maybe more).
They were migrated from a vSphere setup.
I first moved them (while still in vSphere) to a shared NFS path (shared by vSphere and Proxmox).
Then I stopped them in vSphere and created the "same" VMs in Proxmox (using the vmdk format for the hard drives).
Then...
Encryption is client side, you're right.
I was thinking of the fingerprint that is linked to the PBS (proxy.pem and proxy.key).
Copying the whole /etc/proxmox-backup folder and keeping IP/name seems the way to go.
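A minimal sketch of that copy (assuming the new host is reachable as "new-pbs" and the PBS services are stopped during the copy):
# rsync -a /etc/proxmox-backup/ root@new-pbs:/etc/proxmox-backup/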
Right after posting, I also thought about the PBS encryption key.
Is there a CLI tool/script to export everything needed to move/rebuild PBS to another VM/server?
Or maybe the easiest way would be to backup/clone PBS (with clonezilla or why not VeeamAgent or another solution) and restore it to the...
Hello all.
My PBS is currently a VM (running on my Ceph Proxmox cluster) and backups happen on a FreeNAS NFS share.
It's not fast (at all), and the more backups I have, the longer garbage collection takes (soon 20 hours).
My FreeNAS boots on a pair of RAID1 SSD, I'm contemplating the idea of...
I actually named the file .ova on download, but I could have used any other extension 8-)
But whatever name/extension I decide to use, I'll end up with a tar archive.
Shouldn't the wiki be updated to reflect that (and that you have to untar it before use)?
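For example (file names are made up; the .ova is just a tar archive holding the .ovf descriptor and the disks):
# tar -xf MyVM.ova
# ls
MyVM.ovf  MyVM.mf  MyVM-disk1.vmdk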
Hello.
I tried a V2V import based on the wiki steps.
https://pve.proxmox.com/wiki/Migration_of_servers_to_Proxmox_VE#CLI
I installed VMware's ovftool on a node and downloaded a VM from a vSphere infrastructure (it works: you can connect to the vCenter and download through it, no need to...
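A sketch of the steps I mean (paths, names and VMID are made up).
Download the VM straight from vCenter:
# ovftool vi://administrator@vcenter.example.com/MyDatacenter/vm/MyVM /tmp/export/
Then import it as VMID 200 onto a storage named "local-lvm":
# qm importovf 200 /tmp/export/MyVM/MyVM.ovf local-lvm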
Most of it is in post #3.
Longer version below:
Jan 26 01:03:41 pbs proxmox-backup-proxy[6187]: starting new backup on datastore 'BKP_PBS_NCP': "vm/131/2021-01-26T00:03:41Z"
Jan 26 01:03:41 pbs proxmox-backup-proxy[6187]: download 'index.json.blob' from previous backup.
Jan 26 01:03:41 pbs...