Tuxis launches free Proxmox Backup Server BETA service

So something was wrong locally on the PBS, and at that time it was idle... right?
 
Oh nice!
What is the price point you are aiming at?
And do you offer the service for individuals as well?
 
So something was wrong locally on the PBS, and at that time it was idle... right?

Hello @TwiX and @tuxis ,

Just trying to understand the setup when the error occurred and the cluster went wonky while PBS had issues. @TwiX, do you have your own local PBS installed and sync the one you have at Tuxis to your PBS? Or do you only have the one PBS at the Tuxis site?

My setup would be to have my own local PBS connected to each of my Proxmox nodes. I would then look at an offsite PBS (i.e. Tuxis) and sync the Tuxis PBS to my PBS. My nodes don't necessarily need to have the Tuxis PBS connected to their storage. I'm wondering if the error TwiX encountered would still occur if the Proxmox nodes didn't have the remote (Tuxis) PBS connected.
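For reference, PBS currently does this via pull: the receiving PBS registers the other side as a remote and runs a sync job. Presumably it would look roughly like this (hostname, datastore names, auth ID and schedule are placeholders, and option names may differ slightly between PBS versions):

  # on the PBS that should receive the backups: register the other PBS as a remote
  proxmox-backup-manager remote create offsite-pbs --host pbs.example.com --auth-id 'sync@pbs' --password 'SECRET' --fingerprint '<server fingerprint>'

  # pull its datastore into a local datastore on a schedule
  proxmox-backup-manager sync-job create offsite-pull --remote offsite-pbs --remote-store store1 --store local-store --schedule daily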

Thank you.
 
Hi,

We only use the Tuxis PBS for now (for testing purposes only).

The Tuxis PBS has been down for me since 14:58 GMT+2 today.
 
I just tried the Tuxis PBS and ran into kind of the same issue: pvestatd "stops working" until a service restart.
I also had to kill all proxmox-backup-client processes to get my PVE instance working like it should again, because the backup job could not be stopped from the GUI.
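In case it helps anyone else, the workaround boiled down to roughly these two commands (standard service/process names on a stock PVE install; killing the client aborts any backup that is still running):

  # restart the stats daemon that stopped responding
  systemctl restart pvestatd

  # get rid of the backup client processes left over from the stuck job
  pkill -f proxmox-backup-client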

No VM was crashing or having other issues, though - it was just the UI getting messed up while the PBS storage was connected.
The PVE host is located in a datacenter with a 1 Gbit/s uplink. Transmission speed was ~10-15 MB/s (MB/s - not Mbit/s!).
The first 3 little LXCs ran through the backup successfully, though. :)

I'll set up my own PBS instance later to test whether that works "better" in terms of connectivity.
Then I'll check out whether I can use Tuxis' PBS as a remote without those issues.

Anyways, great offer @tuxis! And great work from the Proxmox Team!

Looking forward to testing it out further. :D
When PBS leaves the beta, we'll finally have a *real* Veeam alternative for PVE. :eek::D

EDIT:
My last successful backup ended at " INFO: End Time: 2020-09-28T15:02:37+02:00 " according to the backup job log - after that, the problems started. I initially thought it was because the VM was a few GB in size (~7GB) while the 3 LXCs before were all < 1GB.
The Tuxis PBS shows all 4 backups as valid, though.
 
The 500 GB is only while the product is in beta, right? So after the beta, everyone who helped get your product stable and to market will need to pay for the service?
 
I just tried the Tuxis PBS and ran into kind of the same issue: pvestatd "stops working" until a service restart.
I also had to kill all proxmox-backup-client processes to get my PVE instance working like it should again, because the backup job could not be stopped from the GUI.

Yes, this is an issue. Although backups and other tasks seem to be running fine, the pvestatd calls halt somewhere. Storage is not too fast, but metadata is on a 'special' device (and that one is fast). I also added a log device to the zpool yesterday, which doesn't seem to be used.

I'll add a cache disk too today, but I don't expect that to be very useful, since the data is not being read very often.

No VM was crashing or having other issues, though - it was just the UI getting messed up while the PBS storage was connected.
The PVE host is located in a datacenter with a 1 Gbit/s uplink. Transmission speed was ~10-15 MB/s (MB/s - not Mbit/s!).
The first 3 little LXCs ran through the backup successfully, though. :)

I'll set up my own PBS instance later to test whether that works "better" in terms of connectivity.
Then I'll check out whether I can use Tuxis' PBS as a remote without those issues.

Anyways, great offer @tuxis! And great work from the Proxmox Team!

I just created bug 3044 for syncing to a remote via push. I think that makes sense; not sure how Proxmox feels about that, though.

My last successful backup ended at " INFO: End Time: 2020-09-28T15:02:37+02:00 " according to the backup job log - after that, the problems started. I initially thought it was because the VM was a few GB in size (~7GB) while the 3 LXCs before were all < 1GB.
The Tuxis PBS shows all 4 backups as valid, though.

Yes, this task seems to have ended properly.
 
The 500 GB is only while the product is in beta, right? So after the beta, everyone who helped get your product stable and to market will need to pay for the service?

As with most commercial companies, we try to make money, so yes. Everybody who helped get the Proxmox product stable has also been able to use free backup services for a few months. Whether they feel it's worth their while is up to them.
 
As with most commercial companies, we try to make money, so yes. Everybody who helped get the Proxmox product stable has also been able to use free backup services for a few months. Whether they feel it's worth their while is up to them.
Will the pricing be the same after the beta?
I read something about €15/TB monthly, which sounds okay to me, since the incremental backups and data dedup would save lots of backup storage anyway.

I just created bug 3044 for syncing to a remote via push. I think that makes sense; not sure how Proxmox feels about that, though.
I think both ways should be possible.
Pull is great because the primary PBS does not need to know the credentials of the offsite PBS, for example, which would harden the offsite PBS against an attack should the primary PBS get compromised.
Push is great if you cannot or just don't want to trust the remote PBS with credentials for the other PBS instance, or if you cannot or don't want to expose your PBS instance to the public Internet.

I'll add a cache disk too today, but I don't expect that to be very useful, since the data is not being read very often.
We could test if it improves restore times. :D
 
Will the pricing be the same after the beta?
I read something about €15/TB monthly, which sounds okay to me, since the incremental backups and data dedup would save lots of backup storage anyway.

Yes, that would probably be the target price. Probably not with the free 500 GB.

I think both ways should be possible.
Pull is great because the primary PBS does not need to know the credentials of the offsite PBS, for example, which would harden the offsite PBS against an attack should the primary PBS get compromised.
Push is great if you cannot or just don't want to trust the remote PBS with credentials for the other PBS instance, or if you cannot or don't want to expose your PBS instance to the public Internet.

Yes, both would be perfect.

We could test if it improves restore times. :D

There is currently 1TB of backups in the pool. I don't think a cache disk will help much (it's there now, but not much reading from it).
 
Finally got time to set up my own PBS instance for further testing.
The results are amazing so far!
Nearly saturating the gigabit link while doing backups, and the PBS server doesn't even break a sweat.
Put ~100 GB into it and created a daily backup job to PBS on my PVE instance.

I also didn't experience those issues with pvestatd this time - the storage overview worked flawlessly as well; everything was shown and felt "snappy".
Will test further backups and of course some restores over the next days... :D

@tuxis
Sadly, I wasn't able to add my PBS as a remote to pull backups from my PBS to yours.
I haven't explored the whole permission set of PBS yet, but I assume this is intentional?

Oh, and would you mind sharing the hardware specs of your PBS instance?
I'm curious what kind of scale is required to keep up with the load of a public freebie campaign. :D
E.g. 50 testers x 500 GB = 25 TB of storage required... plus RAID penalty (parity disks) - let alone the CPU/RAM resources required to allow that many PVE servers to send backups (in parallel o_O) to the PBS.

If anyone is interested, I'm using the following hardware for my PBS to back up a single PVE instance right now:
Intel Xeon E3-1245 V2 @ 3.40GHz
16 GB ECC RAM
2 x 3 TB HDD (Seagate ST33000650NS)
1 Gbit/s Uplink
 
I also didn't experience those issues with pvestatd this time - the storage overview worked flawlessly as well; everything was shown and felt "snappy".
Will test further backups and of course some restores over the next days... :D

I created bug https://bugzilla.proxmox.com/show_bug.cgi?id=3047 for looking into performance, so I'm diving into that.

@tuxis
Sadly, I wasn't able to add my PBS as a remote to pull backups from my PBS to yours.
I haven't explored the whole permission set of PBS yet, but I assume this is intentional?

I think you can only grant access to /remote, which would allow every user to see everyone's remotes; that's not very cool. I created feature request https://bugzilla.proxmox.com/show_bug.cgi?id=3044 to allow pushing a local datastore to a remote box. That way, you do not need an admin on the remote side to create the pull job.
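For context, remote-related permissions in PBS hang off the /remote ACL path, so a grant ends up looking roughly like this (the user is a placeholder; exact role names and the --auth-id option may differ in the beta):

  # applies to all remotes - there is currently no per-remote path to scope this to
  proxmox-backup-manager acl update /remote RemoteSyncOperator --auth-id 'customer@pbs'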

Oh, and would you mind sharing the hardware specs of your PBS instance?
I'm curious what kind of scale is required to keep up with the load of a public freebie campaign. :D
E.g. 50 testers x 500 GB = 25 TB of storage required... plus RAID penalty (parity disks) - let alone the CPU/RAM resources required to allow that many PVE servers to send backups (in parallel o_O) to the PBS.

Currently, we run on a Proxmox VPS with the following specs:
4x KVM vCores (E5-2620 v3 @ 2.4 GHz underneath)
16 GB RAM
Zpool 'pbs':
- 2 TB Ceph RBD disk on daDup (spinning disks, 25 km away)
- 20 GB Ceph RBD disk on SSD as 'special' device
- 20 GB Ceph RBD disk on SSD as 'log' device (unused due to the way PBS issues writes)
- 100 GB Ceph RBD disk on SSD as 'cache' device (mostly useless, because the 1 TB of data present is only read periodically, so by the next time you need it, it has been pushed out of the cache)

I disabled autotrim on the zpool yesterday, which seems to have helped a lot in terms of zpool performance.
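For anyone curious, a pool with that layout would be assembled roughly like this (device names are made up; redundancy and ashift are left out for brevity):

  zpool create pbs /dev/vdb              # 2 TB data vdev
  zpool add pbs special /dev/vdc         # SSD 'special' vdev for metadata
  zpool add pbs log /dev/vdd             # SLOG, unused due to the way PBS issues writes
  zpool add pbs cache /dev/vde           # L2ARC read cache
  zpool set autotrim=off pbs             # the change mentioned above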
 
Are there plans to offer the service to individuals too? I'd like to test it, but I don't have a company, and the registration form on the website requires entering a VAT number (I'm located in Germany).
 
Within the European Economic Area, yes. Outside of that, the hassle with VAT is too much.

Feel free to request the beta, I'll set you up this afternoon.
 
- 2 TB Ceph RBD disk on daDup (spinning disks, 25 km away)
So your storage is not local, or at least not "nearby" like a SAN/NAS in the same rack or DC.
Is that storage connected over a WAN link, or do you have dark fiber? Or to ask differently: what's the bandwidth it's connected with?
I only did a quick read-up on your daDup service, so I don't really know what kind of performance to expect there; I'm just wondering if the latency of the remote storage is simply too much for an IO-intensive task like doing backups. o_O
It may fit for cold storage of already finished backups, but what about fresh backups being written live?

There is currently 1TB of backups in the pool.
Since there is not much storage in use yet, could you try adding some directly attached HDDs or some local iSCSI/NFS as a datastore to compare the performance?
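(If it helps, a second datastore on directly attached disks should be quick to add for such a comparison - the name and path here are just examples:)

  # assumes a filesystem is already mounted at /mnt/local-hdd
  proxmox-backup-manager datastore create local-hdd /mnt/local-hdd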

From what I've read in other posts so far, Proxmox seems to recommend local ZFS storage above anything else, while I also read that using NFS as a storage backend is "possible". Not sure if that's even a supported use case (or will be one after release), though.
 
So your storage is not local, or at least not "nearby" like a SAN/NAS in the same rack or DC.
Is that storage connected over a WAN link, or do you have dark fiber? Or to ask differently: what's the bandwidth it's connected with?
I only did a quick read-up on your daDup service, so I don't really know what kind of performance to expect there; I'm just wondering if the latency of the remote storage is simply too much for an IO-intensive task like doing backups. o_O
It may fit for cold storage of already finished backups, but what about fresh backups being written live?


Since there is not much storage in use yet, could you try adding some directly attached HDDs or some local iSCSI/NFS as a datastore to compare the performance?

From what I've read in other posts so far, Proxmox seems to recommend local ZFS storage above anything else, while I also read that using NFS as a storage backend is "possible". Not sure if that's even a supported use case (or will be one after release), though.

Correct. We have a 10 Gbit connection between the two datacenters. Although daDup uses spinning disks, there are about 70 of them, so combined (Ceph) the cluster is able to perform pretty well. Latency between the VPS and daDup is about 0.5 ms. When we're not in beta anymore, I think we would put the PBS server and the daDup cluster in the same datacenter, which would cut the latency in half.

We are using ZFS in the PBS box and have a 'local' SSD disk as a special device, so lots of the small metadata IOPS are handled by a fast disk. We are using the tiering options in ZFS. It would be even cooler if PBS itself understood tiering, but considering how the storage is handled (chunks, dedup, etc.), I'm not expecting tiering in PBS to be possible.
 
Strange, this actually sounds pretty beefy, imho.
I will give it another try later today with more/larger machines to back up. :D
 
