Proxmox Backup Server (beta)

tom

Proxmox Staff Member
Staff member
Aug 29, 2006
It would be nice if the base functions of your GUI were the same in every product.

Yes, we share as many functions as needed. As PBS is written in Rust, most of them have to be rewritten (and then shared again with our other products). You will see fast progress here.

CephFS is not the perfect backup storage if you need full speed. Besides that, you can already mount your CephFS on the PBS host and add that directory as a PBS datastore with a few clicks, and you will see the speed impact on backups, garbage collection, and deduplication as your backup store grows. We would love to get some real-life experience here, especially with growing datastores (e.g. bigger than 100 TB).
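For illustration, mounting a CephFS and registering it as a datastore is only a couple of commands. This is a rough sketch: the monitor address, the keyring path, and the datastore name "cephfs-store" are placeholders for your own setup.

```shell
# Mount the CephFS on the PBS host (assumes the Ceph client config and
# a keyring/secret file are already present under /etc/ceph)
mkdir -p /mnt/cephfs
mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret

# Register a directory on the mounted filesystem as a PBS datastore
proxmox-backup-manager datastore create cephfs-store /mnt/cephfs/pbs-store
```

After that, the datastore shows up in the PBS GUI like any local directory datastore.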

I don't know where you see your product, but I see it as a Veeam alternative, and Veeam can also back up to external storage.

Veeam cannot use QEMU dirty bitmaps, so it's not ideal for KVM. The PBS "Remotes" are much more powerful than simple external storage: you can encrypt and send your backup data to a remote PBS, all of this is integrated, and only deltas are copied over a secure transport layer. The PBS architecture is extremely flexible, and you can use basically any storage you can mount. Best practice: local ZFS (and several remotes if you need multiple copies on different sites/datacenters).
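As a sketch of how such a remote could be wired up from the CLI: the hostname, user, and job names below are made up, and exact option names may differ in the beta.

```shell
# Define the offsite PBS instance as a remote
proxmox-backup-manager remote create offsite-pbs \
    --host pbs.example.com --userid sync@pbs --password 'xxxxx'

# Create a sync job that pulls snapshots from the remote datastore
# into a local one; only missing (delta) chunks are transferred,
# over the TLS-secured PBS API
proxmox-backup-manager sync-job create offsite-sync \
    --remote offsite-pbs --remote-store store1 \
    --store local-zfs-store --schedule daily
```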

Besides that, I would never store my most important data with closed-source commercial backup software that is subject to US law. That is another story, but non-US users should really think about this.
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
South Tyrol/Italy
shop.proxmox.com
Question: This is especially interesting for the SME market. Are you going to provide Google Drive or MS OneDrive as a destination?

No, there's no native integration planned. These folder-syncing agents do not have an open interface and are normally rather slow at syncing. That said, if you really wanted this, you could just point a Proxmox Backup Server datastore at such a service's "mount point" directory.
 

sb-jw

Active Member
Jan 23, 2018
CephFS is not the perfect backup storage if you need full speed.
We have 20 VMs with around 490 GB stored on Ceph; if we count the provisioned disk sizes themselves, it should be over 1 TB. Currently we run a 5-node HCI cluster with 30 SSDs. We run parallel backups from all nodes to ONE single Ceph HDD node (mixed SATA/SAS drives with 1 - 3 TB) with 2x 1 GbE network. Only one node needs 43 minutes, because it has to back up a VM with a 350 GB and a 212 GB disk; all other nodes need around 23 minutes to complete the job. IMHO this is really fast and good, so from my point of view there are no downsides to using CephFS.
I don't think that local storage would be much faster here, nor would it provide the same effects and advantages as the Ceph storage does.

Besides that, you can already mount your CephFS on the PBS host and add that directory as a PBS datastore with a few clicks
Sure, I know, but I don't want to use workarounds; it's still better if you support it natively. We try to use the software as it was intended, to avoid problems with upgrades.

The PBS "Remotes" are much more powerful than simple external storage: you can encrypt and send your backup data to a remote PBS, all of this is integrated, and only deltas are copied over a secure transport layer. The PBS architecture is extremely flexible, and you can use basically any storage you can mount. Best practice: local ZFS (and several remotes if you need multiple copies on different sites/datacenters).
I didn't say PBS is only external storage, but Veeam itself isn't only external storage either - both can work as a director only. AFAIK all of this is supported by Veeam too: if you set up an SMB or NFS gateway, you can mount any storage to Veeam as well.
But it was NOT my intention to praise Veeam here; I see PBS as an alternative to Veeam because it has nearly the same feature set. So it's nothing negative; I apologize if this was misleading.
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
so from my point of view there are no downsides to using CephFS.

CephFS can be made speedy enough, and for some use cases, e.g. where not much data needs to be backed up, or where backup times don't matter that much, one doesn't even have to throw particularly good hardware at it.
IMO, the bigger concern isn't speed with CephFS or RBD as a backup target, it's the question: what happens if my cluster fails, partially or completely? Do I really want to have to fix up Ceph first before I can access my backups? Reducing the number of components in the critical chain to access backups is really important. It may not matter for day-to-day "bread and butter" backup access, but when things really hit the fan, this is not only a nuisance; it can mean you just lost all data and have to run for the latest offsite backup you made. IMO this isn't ideal for backup storage (not saying it has to be bad, just not the best possible).
One thing to look out for when using network-backed storage on PBS is that you send your data once to the Proxmox Backup Server, and then the server sends it again over the network to the "real storage". While incremental/deduplicated backups reduce the general impact, it still means that you effectively double your total backup-related network traffic. That can be OK, but it needs to be accounted for.

to ONE single Ceph HDD node (mixed SATA/SAS drives with 1 - 3 TB) with 2x 1 GbE network

A single Ceph node means here that you have a Ceph setup consisting of only one node? If that's the case, I really want to urge making offsite backups, if not already done, to reduce the risk of such a big single point of failure at least a bit.

In general, we'll surely look into whether we can improve the process of attaching network-attached storage technologies like CIFS, NFS, or CephFS to Proxmox Backup Server. But this is a bit of a lower priority for now; it can be worked around, and we'd like to address other issues and still-missing features first.

Thank you for your input!
 

sb-jw

Active Member
Jan 23, 2018
IMO, the bigger concern isn't speed with CephFS or RBD as a backup target, it's the question: what happens if my cluster fails, partially or completely? Do I really want to have to fix up Ceph first before I can access my backups?
If my productive Ceph cluster fails, I have my backup Ceph cluster, and vice versa. Our concept is running two independent Ceph storages.
But where is the difference between a single-node Ceph storage and a single-node PBS?

Reducing the number of components in the critical chain to access backups is really important. It may not matter for day-to-day "bread and butter" backup access, but when things really hit the fan, this is not only a nuisance; it can mean you just lost all data and have to run for the latest offsite backup you made.
Absolutely, and this is the reason why I use Ceph. I don't have a problem scaling my storage up, down, vertically, or horizontally; I can scale up to multiple TBs of data. Ceph itself takes care of data consistency; I don't have problems with fragmentation (like ZFS), and I don't have problems with HW controllers or anything else. I'm able to recover a whole Ceph storage from the OSDs themselves (e.g. if I lost all mons).

A single Ceph node means here that you have a Ceph setup consisting of only one node? If that's the case, I really want to urge making offsite backups, if not already done, to reduce the risk of such a big single point of failure at least a bit.
Yes, one single backup Ceph node (not productive!) with a replica of 2 based on OSDs (not hosts!). I don't really see any problem here, because it's not really different from a normal server with local storage and a ZFS, SW, or HW RAID. But if a disk fails, my Ceph will heal itself directly; I do not need to change a disk immediately to get back to a healthy state.
And if I need more space, I add a second server and scale up; I don't need to change the backup path or anything else. The target stays the same, but now with more storage.
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
But where is the difference between a single-node Ceph storage and a single-node PBS?

That wasn't my point. The point was that combining both is brittle at the times when accessing backups matters, especially for the normal use case of Ceph in a clustered, maybe even hyper-converged, multi-node setup. Your single-node setup could be improved by running PBS on the same server as Ceph, so that the network impact is reduced.

I don't have problems with HW controllers or anything else
Ceph doesn't like HW RAID controllers, just like ZFS: HBA mode is OK, everything else is not so ideal.

I'm able to recover a whole Ceph storage from the OSDs themselves (e.g. if I lost all mons).

Yeah, well, ZFS doesn't need two to three extra service daemons (monitor, manager, metadata) to provide a filesystem, so I'm not sure how that comparison works. And I don't even mean that this difference is bad; it's just that the use case of one is being a feature-full, scalable, (possibly) redundant single-host filesystem/storage, and the other is being a feature-full, scalable, (possibly) redundant clustered filesystem/storage. While one can be used for the other's use case by cutting some corners, it just isn't as excellent at it in general (although, as you argued correctly, it can still excel in some points there).

Note also that, given the way our content-addressable (CAS) deduplicated datastore stores its chunks, and that the dataset normally never sees any snapshotting anyway, fragmentation becomes rather a non-issue. For example, checking one of our PBS test setups (7 x 16 TB RAID-Z2 pool):
Bash:
zpool list
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
tank   102T  64.5T  37.6T        -         -     0%    63%  1.00x    ONLINE  -
(Note: this is after ten to a hundred thousand backups were made, with quite a few pruned again and garbage collection running.)

Thanks for showing that Ceph is not only capable of being spread over hundreds of nodes, but that it can also work with just one.
I'm glad the Ceph single-node setup works well for you, but we will still recommend the less complex ZFS as the backing datastore for Proxmox Backup Server.

Now, the point about using network-attached storage was made, and it was also already said that it will be evaluated; I think at the current beta stage there's nothing more to say here.
Let's please focus back on the Proxmox Backup Server, avoiding semi-related storage technology comparisons. :)
 

JMM

New Member
Aug 28, 2019
Hi, great job, congrats!

I'm already running some tests, but I'm getting an error.
CTs back up just fine, but VMs give:

ERROR: Backup of VM 102 failed - VM 102 qmp command 'backup' failed - backup register image failed: command error: HTTP Error 404 Not Found: Path not found.

Am I missing some config?
 

Sralityhe

Active Member
Jul 5, 2017
Hi,

I'd like to request a feature: it would be nice if we could restore into a new VM instead of overwriting the old one. Sometimes we just need partial backup data, and that way we can copy it over.

thanks!
 

tom

Proxmox Staff Member
Staff member
Aug 29, 2006
Hi,

I'd like to request a feature: it would be nice if we could restore into a new VM instead of overwriting the old one. Sometimes we just need partial backup data, and that way we can copy it over.

thanks!

That is already possible, just as it was before with vzdump. Simply browse your backup datastore and restore to a new VMID/CTID.
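On the CLI this looks roughly like the following; the storage name, snapshot timestamps, and VMIDs are examples, and the same can be done from the PVE GUI's restore dialog by entering an unused VMID:

```shell
# List the snapshots available on the PBS-backed storage
pvesm list pbs-store

# Restore the backup of VM 102 into a brand-new VM 200
# instead of overwriting VM 102
qmrestore 'pbs-store:backup/vm/102/2020-07-12T08:00:00Z' 200

# Containers work the same way via pct
pct restore 201 'pbs-store:backup/ct/103/2020-07-12T08:10:00Z'
```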
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
Hi, great job, congrats!

I'm already running some tests, but I'm getting an error.
CTs back up just fine, but VMs give:

ERROR: Backup of VM 102 failed - VM 102 qmp command 'backup' failed - backup register image failed: command error: HTTP Error 404 Not Found: Path not found.

Am I missing some config?

Can you please open a new thread and post the following information:
Bash:
pveversion -v

pvesm status

proxmox-backup-client version --repository <user>@<realm>@<host>:<datastore>
# for example, I'd use:
proxmox-backup-client version --repository tlamprecht@pbs@192.168.30.42:tlamprecht

Thank you.
 

Veeh

Active Member
Jul 2, 2017
This is awesome! Thanks for your great work.
I'll test this ASAP.

Cheers
 

ntimo

Member
Jun 20, 2020
It would be awesome if it were possible to manage PBS from the same datacenter view as the PVE servers, so you only have one management UI :)
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
It would be awesome if it were possible to manage PBS from the same datacenter view as the PVE servers, so you only have one management UI :)

No, this would make things crowded, and Proxmox Backup Server was explicitly designed so that it can be used standalone, possibly even without Proxmox VE - which is especially nice for offsite remotes and for possible future integrations into other systems. I mean, it really doesn't cost much to open a separate browser tab; that's about as much work as it would be to navigate to a Datacenter PBS panel. It just allows more flexibility this way.

Note that there's a "datacenter manager" on the roadmap. There it would make sense to be able to add multiple PVE nodes/clusters and multiple PBS nodes in a central view, as this can then be optimized as a UI for central access to all Proxmox infrastructure.
 

DerDanilo

Renowned Member
Jan 21, 2017
No, this would make things crowded, and Proxmox Backup Server was explicitly designed so that it can be used standalone, possibly even without Proxmox VE - which is especially nice for offsite remotes and for possible future integrations into other systems. I mean, it really doesn't cost much to open a separate browser tab; that's about as much work as it would be to navigate to a Datacenter PBS panel. It just allows more flexibility this way.

Note that there's a "datacenter manager" on the roadmap. There it would make sense to be able to add multiple PVE nodes/clusters and multiple PBS nodes in a central view, as this can then be optimized as a UI for central access to all Proxmox infrastructure.

Once all of that is up and running, it would be really nice to have a Proxmox "Manager" of some sort that could display multiple Proxmox clusters (PVE, PBS, maybe even PMG) in a single interface, running additionally on a dedicated node. This node would then connect to all clusters and allow single-interface management, which would even make talking to an API very simple: a single point for all API calls.
Just an idea. :)
 

niekbergboer

New Member
Jul 12, 2020
Switzerland
I made an account just to show my appreciation here: I installed the backup server and manager, and added a PBS storage type to my lab setup.
I am very impressed: this was exactly what I needed, because of limited upstream bandwidth.

Thank you very much! Having this integrated into Proxmox VE itself is a huge improvement over the collection of kludges that I built on top of BorgBackup.
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
Once all of that is up and running, it would be really nice to have a Proxmox "Manager" of some sort that could display multiple Proxmox clusters (PVE, PBS, maybe even PMG) in a single interface, running additionally on a dedicated node. This node would then connect to all clusters and allow single-interface management, which would even make talking to an API very simple: a single point for all API calls.
Just an idea. :)

I mean, did you read my note:
Note that there's a "datacenter manager" on the roadmap. There it would make sense to be able to add multiple PVE nodes/clusters and multiple PBS nodes in a central view, as this can then be optimized as a UI for central access to all Proxmox infrastructure.

The central view I described won't be just a "GUI which connects to multiple APIs" but rather something in the direction you described ;)
 

DerDanilo

Renowned Member
Jan 21, 2017
I mean, did you read my note:


The central view I described won't be just a "GUI which connects to multiple APIs" but rather something in the direction you described ;)
Awesome. I think I read that but must have forgotten to properly make the connection in my head. :) Looking forward to this!

It would also be nice if PMG had a single tracking center for all nodes.
 

ntimo

Member
Jun 20, 2020
Do you already have an estimate of when the options for backup encryption will be available in the PVE web interface? :)
 

patefoniq

Member
Jan 7, 2019
Łódź
syslink.pl
Note that there's a "datacenter manager" on the roadmap. There it would make sense to be able to add multiple PVE nodes/clusters and multiple PBS nodes in a central view, as this can then be optimized as a UI for central access to all Proxmox infrastructure.
Will the "Datacenter Manager" also include management of PMG? :)
 
  • Like
Reactions: DerDanilo
