Proxmox Backup Server (beta)

I assume a subscription is needed for each Proxmox Backup Server. The license is AGPL v3.



Subscription pricing will be available with the first stable release, and yes, a very interesting question.

Just looking to clarify. I assume that this means that a free version won't be available for use in our homelabs?
 
Can block-based backups be mounted through FUSE or NBD like file-based archives?
# pxar mount archive.pxar /mnt

We made a QEMU block-driver implementation for the backup server to access backed-up images read-only. One should be able to use that together with NBD (there's no BUSE (block device in user space) yet) to access the data. But this was done pretty recently, and a deeper integration into PVE still has to be done - but QEMU in version 5.0.0-9 should already include it.
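Not PBS-specific, but for reference, this is roughly how the NBD part of that would look with stock tools once an image is accessible to QEMU; device names and paths below are just examples:

Code:
# expose a disk image read-only through the kernel NBD driver
modprobe nbd max_part=8
qemu-nbd --read-only --connect=/dev/nbd0 /path/to/image.qcow2
mount -o ro /dev/nbd0p1 /mnt/restore
# and to tear it down again
umount /mnt/restore
qemu-nbd --disconnect /dev/nbd0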
 
Just looking to clarify. I assume that this means that a free version won't be available for use in our homelabs?

A free, full featured, no strings attached version will always be available.

The project is licensed under AGPLv3, just like Proxmox VE and the Mail Gateway, and that will stay that way. What Proxmox, the company, will offer is support subscriptions and access to a future enterprise repository (which will not have any extra "paid only" features; everything our product can possibly offer is available to all users. But it will be more stable, as it's longer and more extensively tested than our already well tested no-subscription repository).
 
@t.lamprecht, does it make any difference to use compression on jobs created on PVE when PBS is the target? Am I right to assume that enabling it would increase load on both client and server while lowering network bandwidth usage?
 
@t.lamprecht, does it make any difference to use compression on jobs created on PVE when PBS is the target?

For now the vzdump compression parameter has no effect if Proxmox Backup Server is the target; it's always compressed using zstd. But disabling compression would be supported by the server, so we may allow setting this. IMHO much of the time it's just better to use it, as zstd is efficient and fast after all.

Am I right to assume that enabling it would increase load on both client and server while lowering network bandwidth usage?

It would normally only increase the load on the client; the server knows whether compression is used, but it just takes the data chunk as-is in either case. With it enabled, network load and target storage space may be lowered.
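For what it's worth, this is where the parameter lives today; a small sketch ("my-pbs-store" is a made-up storage name, and with a PBS target the setting is currently ignored as described above):

Code:
# node-wide default in /etc/vzdump.conf (values: 0 | gzip | lzo | zstd)
compress: zstd

# or per job on the command line
vzdump 100 --storage my-pbs-store --compress zstd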
 
How do you mean that? The Proxmox Backup Server client doesn't use SSH anywhere. Or do you mean you tunnel the client <-> server connection through an SSH tunnel?
Ok, my mistake, you're right. I thought that you were using SSH, which was suggested by the fingerprint message. I apologize for the stupid suggestion :)
PS
When do you plan to add managing the clients to the PBS web UI? Is it on your roadmap at all? That would be a huge advantage.
 
When do you plan to add managing the clients to the PBS web UI? Is it on your roadmap at all? That would be a huge advantage.

Do you mean a graphical interface for the client? Yes, that's on the wishlist.
 
I mean something like in Avamar or NetBackup - a client management tab in the PBS UI. It could look like the VM or container view in PVE.

At least for now (for the stable release), something like this isn't actively planned. A daemon mode could make sense; it would allow a few things (like watching directories for changes, configuring a schedule, ...?), so we may think about that - but no promises here yet :)
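Until then, scheduling on the client side can be done with plain cron or a systemd timer; a rough sketch (repository, paths and the cron file name are made up, and authentication - e.g. via PBS_PASSWORD - still has to be provided for non-interactive runs):

Code:
# /etc/cron.d/pbs-client-backup
30 2 * * * root proxmox-backup-client backup etc.pxar:/etc home.pxar:/home --repository backup@pbs@pbs.example.com:store1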
 
Another little thing: it seems to hold a lock after the backup job is completed, according to the job progress output on Proxmox VE:

Code:
[…]/mnt# lsof /mnt/backup
COMMAND    PID   USER  FD TYPE DEVICE SIZE/OFF NODE NAME
proxmox-b 6497 backup 14u  REG   0,58        0  269 /mnt/backup/pbs/.lock

After booting this little Intel NUC today, there was no such process. So it seems to have been left over from the backup process.

Plugging in the USB disk was seamless:

- Before mounting: "?" on the PBS storage in PVE, and PBS showed an error about not finding the chunks. Both fair enough.
- After mounting: the PBS storage is shown with its contents just fine in both PVE and PBS.

So except for that little lock thing, it seems it can work well with removable storage. Last time I just sent the process a TERM signal; next time I may just stop the backup server service and see what happens - of course always only after all manually started backup jobs have completed.
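What I have in mind for next time, roughly (service names as the PBS packages seem to ship them, mount point as above - treat this as an untested sketch):

Code:
systemctl stop proxmox-backup-proxy.service proxmox-backup.service
umount /mnt/backup
# ...swap the USB disk, then:
mount /mnt/backup
systemctl start proxmox-backup.service proxmox-backup-proxy.service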

I understand that this is just a use case for lab setups, so not a major focus.
 
Found something interesting and I bet there is a good explanation for it:

1) When I re-run a backup job of running VMs directly after I ran it, the backup is almost immediate, as I expected.

2) When, however, the VMs are switched off, it transfers the whole VM image again, obviously checking for differences against the backup on the PBS side:

Code:
INFO: status: 3% (668.0 MiB of 20.0 GiB), duration 3, read: 222.7 MiB/s, write: 222.7 MiB/s
INFO: status: 7% (1.5 GiB of 20.0 GiB), duration 6, read: 290.7 MiB/s, write: 290.7 MiB/s
INFO: status: 10% (2.2 GiB of 20.0 GiB), duration 9, read: 233.3 MiB/s, write: 233.3 MiB/s
INFO: status: 14% (3.0 GiB of 20.0 GiB), duration 12, read: 264.0 MiB/s, write: 264.0 MiB/s
INFO: status: 18% (3.8 GiB of 20.0 GiB), duration 15, read: 284.0 MiB/s, write: 284.0 MiB/s
INFO: status: 22% (4.6 GiB of 20.0 GiB), duration 18, read: 261.3 MiB/s, write: 261.3 MiB/s
INFO: status: 27% (5.6 GiB of 20.0 GiB), duration 21, read: 345.3 MiB/s, write: 345.3 MiB/s

Ah, and I already learned about the background: that is the dirty bitmap thing in QEMU:

https://qemu.readthedocs.io/en/latest/interop/bitmaps.html

However, I am using qcow2 images, and there those dirty bitmaps are supposed to be persistent on close?

Supported Image Formats

QEMU supports all documented features below on the qcow2 image format.
However, qcow2 is only strictly necessary for the persistence feature, which writes bitmap data to disk upon close. If persistence is not required for a specific use case, all bitmap features excepting persistence are available for any arbitrary image format.

So wouldn't this work on switched off VMs as well? What am I missing here?
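One thing I might try is peeking at the bitmaps of a running VM over QMP; query-block lists them with a "persistent" flag (the socket path below is PVE's usual per-VMID location and an assumption on my part, and opening it may clash with PVE's own QMP connection, so lab use only):

Code:
socat - UNIX-CONNECT:/var/run/qemu-server/100.qmp <<'EOF'
{ "execute": "qmp_capabilities" }
{ "execute": "query-block" }
EOF
# each device entry carries a "dirty-bitmaps" list; a non-persistent bitmap
# is dropped when the VM powers off, which would match the full re-read above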
 
Theoretically, you could already copy over the client from a Proxmox VE host (or extract the client .deb); as it's a statically linked binary, it should run on all modern Linux amd64 based systems. But yes, we'll still try to get a more in-depth integration into more popular distributions. How exactly this will look isn't yet 100% clear.
Thomas, this seems not to be the case - I've tested with the .deb from http://download.proxmox.com/debian/pbs/dists/buster/pbstest/binary-amd64/ and it's dynamically linked.

Do you have any ETA for the static binary?
 
A free, full featured, no strings attached version will always be available.

The project is licensed under AGPLv3, just like Proxmox VE and the Mail Gateway, and that will stay that way. What Proxmox, the company, will offer is support subscriptions and access to a future enterprise repository (which has no extra features, just updates rolled out more slowly to ensure even better testing than our already tested no-subscription repository).

This is fantastic! Thank you to the whole team at Proxmox! You guys are rock stars!
 
Thomas, this seems not to be the case - I've tested with the .deb from http://download.proxmox.com/debian/pbs/dists/buster/pbstest/binary-amd64/ and it's dynamically linked.

So I may have promised a bit too much and too early here. While yes, the whole Rust code is statically linked and needs no extra dependency, the bindings to certain libraries like zstd, OpenSSL and libfuse3 are still dynamically linked.

For now, we found three extra libraries the client depends on even though none of their symbols are in use (libsystemd, libudev, libpam) - they are only used by the server. It seems the compiler gets confused here; we'll see if we can work around this so that those get dropped.
The remaining libraries should be available everywhere, even on non-systemd distros. I'll try to find some time to experiment with building a fully static binary with musl. Sorry for the confusion here; not sure how I could forget the bindings we used.
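For the curious, the musl experiment would start out roughly like this; only the Rust side is covered here, static musl builds of the C dependencies (zstd, OpenSSL, libfuse3, libacl, ...) are the actual hard part, and the --bin name is taken from the source tree:

Code:
rustup target add x86_64-unknown-linux-musl
cargo build --release --target x86_64-unknown-linux-musl --bin proxmox-backup-client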
 
Thanks, your help is much appreciated. We failed to build proxmox-backup from source (error[E0433]: failed to resolve: use of undeclared type or module), so the static binary is our last hope for testing PBS with our servers, which all run CentOS ;-)
 
Thanks, your help is much appreciated. We failed to build proxmox-backup from source (error[E0433]: failed to resolve: use of undeclared type or module)

You need to adapt the Cargo.toml a bit if you cannot install the build dependencies from our development package repository.
https://git.proxmox.com/?p=proxmox-...7;hb=37e53b4c072f207f17496bd225a462c79ee059e0

Orient yourself on the commented-out "#proxmox { git = ..." line; for the URL use "git://git.proxmox.com/git/proxmox.git" (and the respective URL for the other crates). Instead of a version you can also pin a commit with "rev = <sha-id>".
See the "rust" section at https://git.proxmox.com/?o=age for the available repositories; you'd need at least proxmox, pxar and pathpatterns.

so the static binary is our last hope for testing PBS with our servers, which all run CentOS ;-)

Hmm, but CentOS should have the following libraries available, so the binary should still work - at least if CentOS doesn't ship other so-versions of the libraries.
Code:
# readelf -d /usr/bin/proxmox-backup-client
  Tag        Type                         Name/Value
0x0000000000000001 (NEEDED)             Shared library: [libacl.so.1]
0x0000000000000001 (NEEDED)             Shared library: [libsystemd.so.0]
0x0000000000000001 (NEEDED)             Shared library: [librt.so.1]
0x0000000000000001 (NEEDED)             Shared library: [libcrypt.so.1]
0x0000000000000001 (NEEDED)             Shared library: [libpam.so.0]
0x0000000000000001 (NEEDED)             Shared library: [libzstd.so.1]
0x0000000000000001 (NEEDED)             Shared library: [libudev.so.1]
0x0000000000000001 (NEEDED)             Shared library: [libfuse3.so.3]
0x0000000000000001 (NEEDED)             Shared library: [libssl.so.1.1]
0x0000000000000001 (NEEDED)             Shared library: [libcrypto.so.1.1]
0x0000000000000001 (NEEDED)             Shared library: [libuuid.so.1]
0x0000000000000001 (NEEDED)             Shared library: [libdl.so.2]
0x0000000000000001 (NEEDED)             Shared library: [libpthread.so.0]
0x0000000000000001 (NEEDED)             Shared library: [libgcc_s.so.1]
0x0000000000000001 (NEEDED)             Shared library: [libc.so.6]
0x0000000000000001 (NEEDED)             Shared library: [ld-linux-x86-64.so.2]
0x0000000000000001 (NEEDED)             Shared library: [libm.so.6]
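A quick way to check that on the CentOS side could be the following (file globs are examples, and package availability differs between CentOS releases):

Code:
ldd ./proxmox-backup-client | grep 'not found'
yum provides '*/libfuse3.so.3' '*/libzstd.so.1'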
 
