It's a matter of taste whether to do the pruning on the backup server or to limit backups at PVE backup time; I prefer to prune on the backup server. For the sync job there is a checkbox, "Remove vanished", that does indeed delete snapshots on the target if they have been removed from the source datastore. However, even...
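As a rough sketch of pruning on the PBS side from the command line (the repository, group, and keep counts below are placeholders, not values from this thread):

```shell
# Hypothetical prune policy for one backup group on the PBS host:
# keep the last 3 snapshots plus 7 dailies and 4 weeklies.
# --dry-run only reports what would be removed, deleting nothing.
proxmox-backup-client prune vm/100 \
  --repository 'root@pam@pbs.example.com:store1' \
  --keep-last 3 --keep-daily 7 --keep-weekly 4 \
  --dry-run
```

Rerun without --dry-run once the list of snapshots it would drop looks right.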
Well, I still don't think it works from the GUI, since I tried it. Your howto resource also points that out: https://lists.proxmox.com/pipermail/pve-user/2021-April/172556.html
If it's a VM, there is a file restore in PVE: select the VM, go to Backups, select the backup, and use the File Restore button at the top. Unless the disk uses LVM, because I think file restore doesn't work with LVM disks.
It sure is, just do one map for each disk; they will become virtio0 -> loop0, virtio1 -> loop1, etc. LVM is a little extra trouble: run vgscan to see the volume groups after mapping the disks, and most likely you need vgchange to activate the LVs before you can mount them.
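The steps above might look like this (snapshot path, archive name, and VG/LV names are made up for illustration):

```shell
# Map a disk image out of a backup snapshot; it appears as /dev/loopN.
proxmox-backup-client map "vm/100/2021-05-01T10:00:00Z" drive-virtio0.img \
  --repository 'root@pam@pbs.example.com:store1'

vgscan          # rescan so VGs inside the mapped disk are detected
vgchange -ay    # activate the logical volumes inside those VGs

# Mount one of the guest's LVs (name depends on the guest's layout):
mount /dev/mapper/vg0-root /mnt/restore

# When done, undo everything in reverse order:
umount /mnt/restore
vgchange -an vg0
proxmox-backup-client unmap /dev/loop0
```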
I would appreciate suggestions on how the access rights work. Normally most backups for one datastore from one PVE cluster are done by a single system backup user, which works great. Some other users in PBS also have backup rights to the same datastore. This works fine until one user tries to back up a VM...
You are almost there then ;)
You should absolutely back up both jobs 6 & 18 on site 1 to the same datastore.
The dirty bitmap only works if you back up the same VM to the same datastore.
On site 2, where you pull the remote sync, you should pull into a separately created datastore, not the one you back up...
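A sketch of that setup on site 2 (all names, paths, and the schedule here are placeholders): create a dedicated datastore for the sync, then a sync job that pulls from site 1 into it.

```shell
# Local datastore that only receives the sync from site 1:
proxmox-backup-manager datastore create site1-sync /mnt/datastore/site1-sync

# Pull job: remote "site1" must already be configured under Remotes.
# --remove-vanished mirrors deletions from the source, as discussed above.
proxmox-backup-manager sync-job create pull-site1 \
  --store site1-sync \
  --remote site1 \
  --remote-store store1 \
  --schedule daily \
  --remove-vanished true
```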
I don't follow; for me "incremental" is the perfect word here. How is it confusing you? To me, incremental means the changes between now and the previously backed-up data. It's easy to understand and also accurate. What bothers you about it?
You certainly should be able to install parted on a fresh Debian. Do you really mean that "apt update" followed by "apt install parted" does not install it for you? What does "apt search ^parted" show?
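For completeness, the exact sequence meant above (run as root or via sudo):

```shell
apt update                 # refresh the package lists first
apt install -y parted      # then install parted itself
apt search '^parted'       # lists packages whose names start with "parted"
```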
I am really a fan of mirrored vdevs, ever since they became removable. I would probably do 10 mirror vdevs; otherwise 2 vdevs with raidz2 or raidz3. A matter of taste.
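As a sketch with placeholder pool and device names, the mirror layout would be built like this (extended to 10 mirror pairs for 20 disks):

```shell
# Pool of mirror vdevs; each "mirror a b" pair is one vdev.
# Unlike raidz vdevs, mirror vdevs can later be removed with "zpool remove".
zpool create tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd \
  mirror /dev/sde /dev/sdf
```

The raidz alternative would instead group the disks into two wide raidz2 (or raidz3) vdevs in the same `zpool create` call.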
NFS works, it's just a bit slow. I use it, but had some problems with permissions. This isn't the right thread though, since the OP has CIFS problems; I just wanted to let you know it can work. I use a Synology NAS and had to use the NFS setting "Squash: Map root to admin".
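If it helps, one way to attach such an NAS export to PVE over NFS (server address, export path, and storage ID below are invented for the example):

```shell
# Add the NAS export as a PVE storage for backups via NFS instead of CIFS.
pvesm add nfs nas-backup \
  --server 192.168.1.10 \
  --export /volume1/backup \
  --content backup
```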
It might be this one, where the root disk got filled because the logging got confused by nested datastores:
https://forum.proxmox.com/threads/root-disk-filling-up.84215/#post-370415
If you have a raidz2, you have protection against two failed disks. Special devices as a mirrored vdev are cool, but to keep the same level of protection you could use a three-way mirror.
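A sketch of that (pool and device names are placeholders): a three-way mirrored special vdev, so the metadata survives two failed disks just like the raidz2 data vdev does.

```shell
# Add a special vdev for metadata/small blocks as a 3-way mirror:
# any two of the three special disks may fail without losing the pool.
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
```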
Hello, you can, for example, use proxmox-backup-client from a PVE host, as discussed here: https://forum.proxmox.com/threads/cephfs-content-backed-up-in-proxmox-backup-server.84681/
example: proxmox-backup-client backup cephfs.pxar:/mnt/pve/cephfs --repository xxx
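A slightly fuller version of that command (the repository string and credentials below are examples, not real values):

```shell
# Repository and password can be supplied via environment variables,
# so the backup command itself stays short.
export PBS_REPOSITORY='root@pam@pbs.example.com:store1'
export PBS_PASSWORD='secret'   # or authenticate with an API token instead

# Back up the mounted CephFS tree as a pxar archive named cephfs.pxar:
proxmox-backup-client backup cephfs.pxar:/mnt/pve/cephfs
```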