Proxmox Backup Server 1.0 (stable)

I'm interested in ZFS encryption support (but too lazy to check).
What is the status of encrypted backups?

It does not use ZFS encryption; Proxmox Backup Server has its own client-side encryption for backups.
 
I'm interested in ZFS encryption support (but too lazy to check).
What is the status of encrypted backups?
I'm running my test system on an encrypted ZFS dataset with no issues so far; the PBS system just sees it as a dedicated directory. Of course I need to unlock the dataset manually whenever the server is rebooted, but that's a feature - and how often does a backup server need to be rebooted, anyway?
 
Do you plan any integration with Synology or QNAP devices? That would be a bull's-eye and a big nail in your competitors' coffin.
 
I have one Proxmox server running an old version (Virtual Environment 5.4-15). Is it at all possible to get this working with PBS?
No, I'm afraid not - Proxmox VE 5.4 has been EOL since July, and its software stack is too old to add Proxmox Backup Server support without major changes all over the place. We deemed it less risky and more efficient to require an upgrade to 6.x, which needs to happen anyway due to the EOL of the 5.x release.

So, please upgrade to Proxmox VE 6.x first: https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0
 
No, I'm afraid not - Proxmox VE 5.4 has been EOL since July, and its software stack is too old to add Proxmox Backup Server support without major changes all over the place. We deemed it less risky and more efficient to require an upgrade to 6.x, which needs to happen anyway due to the EOL of the 5.x release.

So, please upgrade to Proxmox VE 6.x first: https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0
Ok, thanks for the info.
 
I was really excited - then I saw the annual community price... ack. o_O I've been a paid supporter/customer for a little while now, but I just cannot justify paying for five licenses' worth of Proxmox for one home lab. Does the community price seem skewed to anyone else? Especially as an annual subscription; if $500+ were a perpetual price, OK, I could see that.

Way too much for my home lab. :rolleyes:
 
I read the docs and am still a bit confused about how deduplication works:

1. On the Summary page of my Datastore it says: "Deduplication Factor 5.54". What exactly does that mean?
2. Is deduplication done in the scope of machines (host, CT, VM) or in the scope of the datastore? I.e., if I have 3 almost identical VMs, will deduplication only consider each VM's own incremental backups, or will all 3 VMs be taken into account?

Thanks in advance!
 
1. On the Summary page of my Datastore it says: "Deduplication Factor 5.54". What exactly does that mean?
It means that, on average, each chunk is referenced 5.54 times (total referenced chunks / actual chunk count).

2. Is deduplication done in the scope of machines (host, CT, VM) or in the scope of the datastore? I.e., if I have 3 almost identical VMs, will deduplication only consider each VM's own incremental backups, or will all 3 VMs be taken into account?
Deduplication happens at the datastore level (all backup snapshots in a datastore reference the same chunk store).
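As a rough illustration of how those two answers fit together, here is a toy sketch (my own simplification in Python - the real PBS chunk store is implemented in Rust): one content-addressed store shared by every snapshot in the datastore, with the deduplication factor falling out as references divided by unique chunks.

```python
import hashlib

class ChunkStore:
    """Toy content-addressed store: each unique chunk is stored once,
    and all snapshots in the datastore reference the same store."""

    def __init__(self):
        self.chunks = {}     # digest -> chunk data, stored at most once
        self.references = 0  # total chunk references across all snapshots

    def insert(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        self.chunks.setdefault(digest, data)  # only written if new
        self.references += 1
        return digest

    def dedup_factor(self) -> float:
        # "Deduplication Factor" = total referenced chunks / actual chunk count
        return self.references / len(self.chunks)


store = ChunkStore()
# Three near-identical VMs: nine chunks of shared base image plus one
# unique chunk each -> 30 references, but only 12 chunks on disk.
for vm in range(3):
    for i in range(9):
        store.insert(b"base-image-chunk-%d" % i)
    store.insert(b"vm-%d-unique-data" % vm)

print(store.dedup_factor())  # 30 / 12 = 2.5
```

So for the "3 almost identical VMs" case, the shared chunks are stored once and referenced by all three, which is exactly what pushes the factor above 1.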
 
We're currently running an 8-node cluster - 3 nodes are dedicated to Ceph. All our QCOW2 files reside on the Ceph cluster, and we do a vma.zst backup to an NFS share. Our network is all 10 Gbit, yet we're seeing only 40 MB/s backups, which is becoming a big issue as our client base on this cluster grows. We're interested in switching our NFS server to PBS, but it would only make sense for us if QEMU backups are no longer managed in 64k blocks (I believe this is our bottleneck right now). Can you advise?
 
@tangerine , please open a new thread for your question.
 
Hello
To have a real "Veeam" killer ;-)
Will it be possible to restore a single file from a backup via the GUI?
Regards
Pierre-Yves
Yes, that is on the roadmap and actively being worked on (and it already works for file-based backups, e.g., container backups).
 
Is a diff restore in the works?

What about a direct Ceph dump - from the pool to the backup host, without adding I/O load to the VM hosts?
 
Is there any chance that we will see dirty-bitmap support for containers? It could be implemented via snapshot diffs for ZFS and Ceph RBD.

It's a bummer that VMs back up so fast while containers take ages. I had to migrate my containers back to VMs because of it.
 
Is there any chance that we will see dirty-bitmap support for containers? It could be implemented via snapshot diffs for ZFS and Ceph RBD.
I'd not hold my breath for it.
We evaluated that option very closely, and the diffs we get from ZFS do not work with the sliding-window algorithm in use (which is only semi-correlated with files). It would effectively mean abandoning the combination of deduplication and full backups, which we do not want - backups where one needs to apply multiple diffs and pray that it works out are something we want to avoid at all costs.
Ceph RBD would also only help for fixed-size, block-based backups, as there is no function that can map block diffs onto dynamically sized chunk archives. There was a lot of technical discussion, on the lists and also internally - if there is a breakthrough we will certainly go for it, but it currently does not look like that will happen anytime soon.
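To see why fixed-offset block diffs and dynamic chunks don't line up, here is a deliberately oversimplified toy (my own illustration - it cuts on a marker byte instead of a real rolling hash, but the key property is the same): content-defined chunk boundaries are determined by the data itself, not by offsets.

```python
def cdc_chunks(data: bytes, marker: bytes = b"|") -> list:
    """Toy content-defined chunker: cut after every marker byte.
    Real chunkers use a rolling hash, but the property is the same:
    boundaries depend on the content, not on fixed offsets."""
    chunks, start = [], 0
    for i in range(len(data)):
        if data[i:i + 1] == marker:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks


def fixed_chunks(data: bytes, size: int = 4) -> list:
    """Fixed-size blocks, as a block-device diff would see the image."""
    return [data[i:i + size] for i in range(0, len(data), size)]


base   = b"AAAA|BBBB|CCCC|DDDD|"
edited = b"AAAAX|BBBB|CCCC|DDDD|"  # one byte inserted near the front

# Fixed blocks: the insertion shifts every later block, almost nothing matches.
shared_fixed = set(fixed_chunks(base)) & set(fixed_chunks(edited))
# Content-defined chunks: boundaries resynchronize right after the edit.
shared_cdc = set(cdc_chunks(base)) & set(cdc_chunks(edited))
print(len(shared_fixed), len(shared_cdc))  # 1 3
```

It is exactly this boundary drift that lets dynamic chunks deduplicate well across edits, and it is also why a list of changed fixed offsets from RBD cannot be translated into a list of changed dynamic chunks without re-reading and re-chunking the data.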
 
