Proxmox Backup Server (beta)

Yes, as of now all relevant packages are rolled out to all repositories, so initial addition and usage work.

Integration will still be improved and as the other poster said: it's still a beta :)
 
Not deeply tested, and not as comfortable as having a shell to copy the files, but check out the description @t.lamprecht sent to the pve-user list: https://lists.proxmox.com/pipermail/pve-user/2020-July/171883.html - I hope this helps!
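For file-level archives, the approach described there boils down to a FUSE mount via the client. A rough sketch - the snapshot path, archive name and mount point below are placeholders, PBS_REPOSITORY/PBS_PASSWORD must be exported first, and the command is skipped cleanly if the client isn't installed:

```shell
# Hedged sketch: FUSE-mount a file-level (.pxar) archive from a snapshot.
# "host/myhost/2020-07-01T00:00:00Z", "root.pxar" and /mnt/restore are
# placeholders, not values from this thread.
if command -v proxmox-backup-client >/dev/null 2>&1; then
    mkdir -p /mnt/restore
    proxmox-backup-client mount host/myhost/2020-07-01T00:00:00Z \
        root.pxar /mnt/restore
else
    echo "proxmox-backup-client not installed on this host"
fi
```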

I couldn't make this work. Accessing the NBD device freezes the shell. Here is what I did:

export PBS_REPOSITORY='root@pam@192.168.50.2:stor1'
export PBS_PASSWORD='my.password'
export PBS_FINGERPRINT='AA: (...) :18'
root@prx002:~# proxmox-backup-client list
root@prx002:~# modprobe nbd
root@prx002:~# qemu-nbd --connect=/dev/nbd0 -f raw -r pbs:repository=$PBS_REPOSITORY,snapshot=vm/111/2020-07-21T22:30:02Z,archive=drive-scsi0.img.fidx
root@prx002:~# lsblk /dev/nbd0
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nbd0 43:0 0 300G 1 disk
root@prx002:~# mkdir /mnt/test
root@prx002:~# mount /dev/nbd0 /mnt/test/


After the mount, the shell freezes and syslog shows errors; I had to reboot the server.

Jul 22 11:45:45 prx002 kernel: [429785.143486] block nbd0: Possible stuck request 00000000afd766fe: control (read@0,512B). Runtime 30 seconds
Jul 22 11:45:45 prx002 kernel: [429785.143490] block nbd0: Possible stuck request 0000000067764605: control (read@512,512B). Runtime 30 seconds
Jul 22 11:45:45 prx002 kernel: [429785.143492] block nbd0: Possible stuck request 000000004da4d20e: control (read@1024,512B). Runtime 30 seconds
Jul 22 11:45:45 prx002 kernel: [429785.143493] block nbd0: Possible stuck request 000000005be31650: control (read@1536,512B). Runtime 30 seconds
Jul 22 11:45:45 prx002 kernel: [429785.143495] block nbd0: Possible stuck request 00000000b26780b0: control (read@2048,512B). Runtime 30 seconds
Jul 22 11:45:45 prx002 kernel: [429785.143496] block nbd0: Possible stuck request 0000000040ab5c23: control (read@2560,512B). Runtime 30 seconds
Jul 22 11:45:45 prx002 kernel: [429785.143497] block nbd0: Possible stuck request 0000000020d96be6: control (read@3072,512B). Runtime 30 seconds
Jul 22 11:45:45 prx002 kernel: [429785.143499] block nbd0: Possible stuck request 00000000de74d789: control (read@3584,512B). Runtime 30 seconds
(...)
Jul 22 11:51:23 prx002 kernel: [430123.059725] block nbd0: Possible stuck request 00000000de74d789: control (read@3584,512B). Runtime 360 seconds
Jul 22 11:51:54 prx002 kernel: [430153.779397] block nbd0: Possible stuck request 00000000afd766fe: control (read@0,512B). Runtime 390 seconds
Jul 22 11:51:54 prx002 kernel: [430153.779404] block nbd0: Possible stuck request 0000000067764605: control (read@512,512B). Runtime 390 seconds
Jul 22 11:51:54 prx002 kernel: [430153.779407] block nbd0: Possible stuck request 000000004da4d20e: control (read@1024,512B). Runtime 390 seconds
Jul 22 11:51:54 prx002 kernel: [430153.779410] block nbd0: Possible stuck request 000000005be31650: control (read@1536,512B). Runtime 390 seconds
Jul 22 11:51:54 prx002 kernel: [430153.779412] block nbd0: Possible stuck request 00000000b26780b0: control (read@2048,512B). Runtime 390 seconds
Jul 22 11:51:54 prx002 kernel: [430153.779415] block nbd0: Possible stuck request 0000000040ab5c23: control (read@2560,512B). Runtime 390 seconds
Jul 22 11:51:54 prx002 kernel: [430153.779418] block nbd0: Possible stuck request 0000000020d96be6: control (read@3072,512B). Runtime 390 seconds
Jul 22 11:51:54 prx002 kernel: [430153.779420] block nbd0: Possible stuck request 00000000de74d789: control (read@3584,512B). Runtime 390 seconds


Any hints on where to start debugging this?
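(For anyone hitting the same hang, a hedged starting point, using the /dev/nbd0 and /mnt/test names from above: read the raw device first to see whether the NBD/PBS transport itself is stuck rather than the filesystem mount, and try tearing the export down from a second shell before resorting to a reboot. Both steps bail out cleanly if the device or tool is missing.)

```shell
# Sketch, not a fix. Run from a second shell while the first one hangs.
if [ -b /dev/nbd0 ]; then
    # If even a raw read stalls, the NBD/PBS transport is stuck,
    # not the filesystem layer.
    timeout 30 dd if=/dev/nbd0 of=/dev/null bs=1M count=4
fi
umount -l /mnt/test 2>/dev/null || true    # lazy unmount may unblock mount
if command -v qemu-nbd >/dev/null 2>&1; then
    qemu-nbd --disconnect /dev/nbd0 || true  # tear the export down
fi
```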
 
I installed the backup server for testing, but the web interface is not showing up on https://<IP>:8007 - what went wrong, and what can I do to check?
 
I installed the backup server for testing, but the web interface is not showing up on https://<IP>:8007 - what went wrong, and what can I do to check?
* are the services running? systemctl status proxmox-backup.service proxmox-backup-proxy.service
* is something listening on port 8007? ss -tlnp | grep 8007
 
Services are running and the port is listening. I may have to check the network configuration - it seems IPv4 has a problem, and I cannot access the server via SSH. /etc/network/interfaces lists only the static IP but no broadcast address (adding one fails), and the rest of 'ip a' shows only IPv6 addresses.
 
Hello, I am trying to restore a Linux container previously backed up to PBS. It restores terribly slowly, at 4 megabytes per second. The backup machine running PBS has a 10 GbE connection; the client has a 1 GbE connection.

Yesterday I backed up around 100 containers simultaneously from our 35-node cluster, and it was really fast. But restoring is really slow.
 
It can also be an issue with the storage you restore to... 1 GbE should give you around 118 MB/s of real throughput out of the theoretical line rate of 125 MB/s - that's your upper limit.
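The arithmetic behind those numbers, as a quick sanity check:

```shell
# 1 GbE line rate: 10^9 bits/s over 8 bits/byte = 125 MB/s raw.
raw=$((1000000000 / 8 / 1000000))
echo "raw line rate: ${raw} MB/s"
# Ethernet/IP/TCP framing (standard 1500-byte MTU) leaves roughly
# 941 Mbit/s of payload, i.e. about 117-118 MB/s of real throughput.
payload=$((941000000 / 8 / 1000000))
echo "practical ceiling: ~${payload} MB/s"
```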

Using the benchmark should give you an idea of the theoretical line speed possible:
https://forum.proxmox.com/threads/how-fast-is-your-backup-datastore-benchmark-tool.72750/

Besides the line speed, the backup server's read performance and the client's write performance are the next possible bottlenecks.
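For reference, the benchmark from that thread is run on the client, roughly like this - the repository string is a placeholder, and the command is skipped cleanly if the client isn't installed:

```shell
# Measures TLS, compression and chunking speed from this client.
REPO='root@pam@192.168.50.2:stor1'   # placeholder repository
if command -v proxmox-backup-client >/dev/null 2>&1; then
    proxmox-backup-client benchmark --repository "$REPO"
else
    echo "proxmox-backup-client not installed"
fi
```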
 
New versions are available on pvetest/pve-no-subscription and pbstest - thanks for all the feedback and bug reports so far!

Highlights include:
- various sync fixes
- better encryption support
- console + host update support

http://download.proxmox.com/debian/...est/binary-amd64/qemu-server_6.2-11.changelog
http://download.proxmox.com/debian/...d64/libproxmox-backup-qemu0_0.6.2-1.changelog
http://download.proxmox.com/debian/...amd64/proxmox-backup-client_0.8.9-1.changelog
http://download.proxmox.com/debian/...amd64/proxmox-backup-server_0.8.9-1.changelog

see https://bugzilla.proxmox.com/buglist.cgi?component=pbs for a list of known and/or resolved issues!
 
Not sure if this feature was mentioned yet: manual seeding of backups. It would be very beneficial to those with large datasets that need to sync offsite - sometimes it's faster to physically transport the data than to send it over a WAN.

Another use case to keep in mind might be rotation of backup media. Similar to tape, I know there are people who back up to removable disk media, take the media offsite for storage, and swap in a different set. Not something I do, but I thought I'd mention it.
 
Does this work as a VM? I'm running my current backup servers as VMs on XenServer with an iSCSI backbone and would like to keep the same infrastructure.
 
Does this work as a VM? I'm running my current backup servers as VMs on XenServer with an iSCSI backbone and would like to keep the same infrastructure.

Yes, it does. Performance will probably be better when running bare-metal, but that is not required.
 
I do agree that would be very useful, but from a security perspective this is not OK. To my knowledge, a good backup system (from a security perspective) must comply with these rules:

- Any backup task is initiated only from the backup host (so if the client is compromised, the client itself cannot access/delete/restore any backup file/image).
- Any restore task is also initiated only from the backup host.

Good luck /Bafta!
I agree with guletz that a pure pull backup, with no connections initiated by the PVE (the system to back up), is best from a security point of view. I would like to know a best-practice way to use PBS securely in this spirit.

For testing, I managed to set up a PBS in my LAN and configured a backup storage on the PVE (1-node cluster) in my LAN. It is then easy to create backups of my PVE containers, but they are pushed. That is not a big problem on my LAN, but I would also like to back up a remote server. In the worst case, Eve will not only corrupt my remote server but might also obtain access to my LAN if she manages to hack that remote server.

Is it possible to have the backup client on my LAN initiate the backup (perhaps on a third machine)? If not, should I set up a LAN PVE that pulls the remote containers via pve-zsync? I could then push the backups from the LAN PVE to the LAN PBS.
 
I agree with guletz that a pure pull backup, with no connections initiated by the PVE (the system to back up), is best from a security point of view. I would like to know a best-practice way to use PBS securely in this spirit.

For testing, I managed to set up a PBS in my LAN and configured a backup storage on the PVE (1-node cluster) in my LAN. It is then easy to create backups of my PVE containers, but they are pushed. That is not a big problem on my LAN, but I would also like to back up a remote server. In the worst case, Eve will not only corrupt my remote server but might also obtain access to my LAN if she manages to hack that remote server.

Is it possible to have the backup client on my LAN initiate the backup (perhaps on a third machine)? If not, should I set up a LAN PVE that pulls the remote containers via pve-zsync? I could then push the backups from the LAN PVE to the LAN PBS.
Create a local backup server and then sync from the remote server - all is OK... and secure...
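To sketch what that looks like on the LAN-side PBS (CLI as of the 0.8.x beta; the host, names, credentials and fingerprint below are all placeholders, and exact option names may differ between versions):

```shell
# Pull model: the LAN PBS fetches from the remote PBS, so the exposed
# remote box never needs credentials for the LAN datastore.
if command -v proxmox-backup-manager >/dev/null 2>&1; then
    # register the remote PBS instance (placeholder host/credentials)
    proxmox-backup-manager remote create offsite \
        --host 203.0.113.10 \
        --userid sync@pbs \
        --password 'changeme' \
        --fingerprint 'AA:...:18'
    # pull its datastore into the local one on a schedule
    proxmox-backup-manager sync-job create pull-offsite \
        --store stor1 --remote offsite --remote-store stor1 \
        --schedule daily
else
    echo "proxmox-backup-manager not installed"
fi
```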
 
Hi all. This might be a silly question, but I've been really enjoying using Proxmox and Proxmox Backup Server in my homelab for the last few days. However, one thing I can't seem to find an easy way to do is back up the actual Proxmox host configuration itself (e.g. networking/firewall rules/local storage/user accounts/plugins/scripts (e.g. UPS control)). In practical terms, everything except the VMs/containers/templates - whatever would need to be restored in the event of a catastrophic failure of a host, or if the host needed to be reinstalled for any reason.

While searching I found this forum thread and this wiki article outlining the directories which would need to be backed up (plus any custom configuration which falls outside those locations), but the procedure of doing the actual backup seems like it has to be done manually on the host, and the user has to manually export the configuration off the host (or to automate it using features of the underlying OS, which Proxmox/Proxmox Backup Server wouldn't be aware of).

Are there any plans to integrate an automated method for backing up the actual Proxmox host configurations to Proxmox Backup Server? I tried to search to see if anyone else had asked the same question, but I didn't get anywhere; sorry if this has been asked/answered before.

Thanks!
 
Are there any plans to integrate an automated method for backing up the actual Proxmox host configurations to Proxmox Backup Server? I tried to search to see if anyone else had asked the same question, but I didn't get anywhere; sorry if this has been asked/answered before.
yes it is on the roadmap: https://pbs.proxmox.com/wiki/index.php/Roadmap
 
performance will probably be better when running bare-metal
Not so sure - I've been testing PBS on two laptops with VirtualBox installed. On each laptop I had only one VM: one for a Debian client, one for PBS. Both laptops were connected to a small low-end gigabit switch, and the transfer between those two machines while backing up the client oscillated around 95 MB/s - a 6 GB machine was backed up in less than 2 minutes including deduplication (the final size of the backup was around 1.3 GB).
So virtualization is not necessarily the bottleneck ;)
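(Those numbers check out, for what it's worth: at ~95 MB/s a 6 GB backup needs about a minute, so the gigabit line, not the virtualization, was the limit.)

```shell
# 6 GB at ~95 MB/s over gigabit:
secs=$(( 6 * 1024 / 95 ))
echo "~${secs} s"   # about a minute, comfortably under the two minutes seen
```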
 
- Any backup task is initiated only from the backup host (so if the client is compromised, the client itself cannot access/delete/restore any backup file/image).
- Any restore task is also initiated only from the backup host.

This depends on the trust model, and is complete nonsense for our use case ...
 
