Tuxis launches free Proxmox Backup Server BETA service

Hello!

@tuxis , thanks again for providing a free plan for this service. Getting it up and running was, admittedly, a bit trickier than I expected, but that was entirely because it's been long enough that I kind of forgot all the steps to configure a new Proxmox Backup Server instance. ;)

The Tuxis web UI and the necessary steps there were dead simple and really intuitive. The only thing I would have had to look up, if I hadn't absorbed it through this thread, is that the replication/pull-based backup service is a paid feature.

One thing I was curious about: when I'm ready to upgrade to a paid plan, is my existing PBS instance upgraded, or do I have to set up a new instance (and migrate my data)? Or do I end up with two instances?

Sort of in the same vein: I noticed that you also offer a hosting option for a dedicated PBS server. Which customers would you recommend that for? I feel like the shared-hosting-based PBS is a great fit for my home/home-office setup.

Performance and Ping. There was discussion about this up-thread, some of which is a few years old at this point, so I wanted to provide some additional (non-scientifically collected) benchmark data.

I'm in Dallas, Texas, US, and am seeing a pretty consistent 120-125 ms ping to my assigned PBS server in NL, which seems excellent for a server in Amsterdam (~4902 km / 3046 miles away, across an ocean). I have 1 Gbps symmetrical fiber, and can consistently hit full-speed (940 Mbps both ways) to a well-equipped speed test server.

ETA: Client-side encryption is enabled.
ETA 2: The Proxmox host is an HP EliteMini G9 600 with an i5-12700T and 96 GiB of RAM, running the latest version of Proxmox 8.2.

Here's the log of my first backup, in case anyone is curious. Right now, I have it set to just back up a Debian 12 LXC where my Unifi controller lives. The LXC is running, and I'd never backed up anything, so I felt like this was a worst-case for an LXC on my end.

Code:
INFO: root.pxar: had to backup 2.535 GiB of 3.693 GiB (compressed 1.167 GiB) in 120.44 s (average 21.554 MiB/s)
INFO: root.pxar: backup was done incrementally, reused 1.158 GiB (31.4%)
INFO: Uploaded backup catalog (859.361 KiB)
INFO: Duration: 127.35s
INFO: End Time: Wed Dec 11 15:16:03 2024

Aside: the way PBS's default settings handle compression so seamlessly, and so well, is just awesome.

~20 MB/s (160 Mbps) for free from Dallas to Amsterdam is pretty spectacular. :)
Caveat: That's an average speed, not an exact number. I'm sure it fluctuates.

@tuxis , one thing I'm curious about: the log mentions reusing 1.158 GiB. I'm assuming that's because there are other datastores on the shared PBS instance that I can't see, and someone's running a similar enough Debian LXC that it could reuse some of it? If so, you might want to mention that somewhere in the docs for the free version, as an advantage. Deduping across user datastores looks like it maximizes that 150 GB. :)

Garbage collection, pruning, and verifying: Do you have recommended settings for these? I've used my settings from my own PBS just to have something set up, but I realize that might not be ideal for a remote instance.
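
In the meantime, here's the sort of thing I've been doing: previewing a retention policy with proxmox-backup-client. The keep values below are just example numbers (not necessarily what I actually run), the group and repository names are from my own setup (visible in the log further down), and --dry-run means nothing is deleted:

Code:
# Preview a retention policy against the Tuxis datastore (nothing is removed with --dry-run)
# Run from the PVE host; the token secret goes in the PBS_PASSWORD environment variable.
proxmox-backup-client prune ct/99901 \
    --keep-daily 7 --keep-weekly 4 --keep-monthly 6 \
    --dry-run \
    --repository 'DB2685@pbs!AndromedaClusterToken@pbs005.tuxis.nl:DB2685_AndromedaCluster150'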

Here's the full output of the backup, for anyone curious what it looks like:

Code:
INFO: starting new backup job: vzdump 99901 --all 0 --node andromeda2 --storage Tuxis150GB --fleecing 0 --notes-template '{{guestname}}' --mode snapshot
INFO: Starting Backup of VM 99901 (lxc)
INFO: Backup started at 2024-12-11 15:13:51
INFO: status = running
INFO: CT Name: subspace
INFO: including mount point rootfs ('/') in backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: create storage snapshot 'vzdump'
INFO: creating Proxmox Backup Server archive 'ct/99901/2024-12-11T21:13:51Z'
INFO: set max number of entries in memory for file-based backups to 1048576
INFO: run: lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client backup --crypt-mode=none pct.conf:/var/tmp/vzdumptmp2308863_99901/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found --exclude=/tmp/?* --exclude=/var/tmp/?* --exclude=/var/run/?*.pid --backup-type ct --backup-id 99901 --backup-time 1733951631 --entries-max 1048576 --repository DB2685@pbs!AndromedaClusterToken@pbs005.tuxis.nl:DB2685_AndromedaCluster150
INFO: Starting backup: ct/99901/2024-12-11T21:13:51Z
INFO: Client name: andromeda2
INFO: Starting backup protocol: Wed Dec 11 15:13:55 2024
INFO: No previous manifest available.
INFO: Upload config file '/var/tmp/vzdumptmp2308863_99901/etc/vzdump/pct.conf' to '$TARGET' as pct.conf.blob
INFO: Upload directory '/mnt/vzsnap0' to '$TARGET' as root.pxar.didx
INFO: root.pxar: had to backup 2.535 GiB of 3.693 GiB (compressed 1.167 GiB) in 120.44 s (average 21.554 MiB/s)
INFO: root.pxar: backup was done incrementally, reused 1.158 GiB (31.4%)
INFO: Uploaded backup catalog (859.361 KiB)
INFO: Duration: 127.35s
INFO: End Time: Wed Dec 11 15:16:03 2024
INFO: adding notes to backup
INFO: cleanup temporary 'vzdump' snapshot
INFO: Finished Backup of VM 99901 (00:02:13)
INFO: Backup finished at 2024-12-11 15:16:04
INFO: Backup job finished successfully
TASK OK

EDIT: VM backup performance seems to be better than LXC performance?
I decided to add a VM as well. This is my MariaDB VM, so it has a number of virtual SCSI disks (getting dedicated remote storage on my LAN for VM data like this is on the to-do list).

The backup for the VM seems to have gone much faster than for the LXC; I assume VM incremental backup/compression is just faster. I'm still pretty new at all this...
Code:
INFO: starting new backup job: vzdump 99901 99902 --notes-template '{{guestname}}' --storage Tuxis150GB --node andromeda2 --all 0 --fleecing 0 --mode snapshot
INFO: Starting Backup of VM 99901 (lxc)
INFO: Backup started at 2024-12-11 17:45:24
INFO: status = running
< . . . LXC backup not shown . . . >

INFO: Starting backup: ct/99901/2024-12-11T23:45:24Z
INFO: Client name: andromeda2
INFO: Starting backup protocol: Wed Dec 11 17:45:24 2024
INFO: Downloading previous manifest (Wed Dec 11 15:13:51 2024)
INFO: Upload config file '/var/tmp/vzdumptmp2392935_99901/etc/vzdump/pct.conf' to $TARGET as pct.conf.blob
INFO: Upload directory '/mnt/vzsnap0' to $TARGET as root.pxar.didx
INFO: root.pxar: had to backup 175.948 MiB of 3.694 GiB (compressed 35.632 MiB) in 8.45 s (average 20.834 MiB/s)
INFO: root.pxar: backup was done incrementally, reused 3.522 GiB (95.3%)
INFO: Uploaded backup catalog (859.361 KiB)
INFO: Duration: 10.85s
INFO: End Time: Wed Dec 11 17:45:35 2024
INFO: adding notes to backup
INFO: cleanup temporary 'vzdump' snapshot
INFO: Finished Backup of VM 99901 (00:00:13)
INFO: Backup finished at 2024-12-11 17:45:37
INFO: Starting Backup of VM 99902 (qemu)
INFO: Backup started at 2024-12-11 17:45:37
INFO: status = running
INFO: VM Name: memory-alpha2
INFO: include disk 'scsi0' 'vmStore64k:vm-99902-disk-1' 64G
INFO: include disk 'scsi1' 'vmStore16k:vm-99902-disk-0' 16G
INFO: include disk 'scsi2' 'vmStore64k:vm-99902-disk-2' 16G
INFO: include disk 'scsi3' 'vmStore64k:vm-99902-disk-3' 2G
INFO: include disk 'efidisk0' 'vmStore64k:vm-99902-disk-0' 1M
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: snapshots found (not included into backup)
INFO: creating Proxmox Backup Server archive 'vm/99902/2024-12-11T23:45:37Z'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task '716dd2c2-38cc-4c1e-82d2-e8d13b56611c'
INFO: resuming VM again
INFO: efidisk0: dirty-bitmap status: existing bitmap was invalid and has been cleared
INFO: scsi0: dirty-bitmap status: existing bitmap was invalid and has been cleared
INFO: scsi1: dirty-bitmap status: existing bitmap was invalid and has been cleared
INFO: scsi2: dirty-bitmap status: existing bitmap was invalid and has been cleared
INFO: scsi3: dirty-bitmap status: existing bitmap was invalid and has been cleared
INFO:   9% (8.9 GiB of 98.0 GiB) in 3s, read: 3.0 GiB/s, write: 80.0 MiB/s
INFO:  19% (19.4 GiB of 98.0 GiB) in 6s, read: 3.5 GiB/s, write: 24.0 MiB/s
INFO:  29% (29.2 GiB of 98.0 GiB) in 9s, read: 3.3 GiB/s, write: 36.0 MiB/s
INFO:  35% (34.4 GiB of 98.0 GiB) in 12s, read: 1.7 GiB/s, write: 30.7 MiB/s
INFO:  36% (35.3 GiB of 98.0 GiB) in 25s, read: 68.6 MiB/s, write: 54.5 MiB/s
INFO:  37% (36.7 GiB of 98.0 GiB) in 42s, read: 88.0 MiB/s, write: 36.5 MiB/s
INFO:  39% (38.6 GiB of 98.0 GiB) in 45s, read: 648.0 MiB/s, write: 61.3 MiB/s
INFO:  41% (40.4 GiB of 98.0 GiB) in 48s, read: 597.3 MiB/s, write: 42.7 MiB/s
INFO:  45% (44.9 GiB of 98.0 GiB) in 51s, read: 1.5 GiB/s, write: 54.7 MiB/s
INFO:  49% (48.9 GiB of 98.0 GiB) in 54s, read: 1.3 GiB/s, write: 14.7 MiB/s
INFO:  51% (50.9 GiB of 98.0 GiB) in 57s, read: 681.3 MiB/s, write: 124.0 MiB/s
INFO:  52% (51.1 GiB of 98.0 GiB) in 1m, read: 84.0 MiB/s, write: 60.0 MiB/s
INFO:  53% (52.8 GiB of 98.0 GiB) in 1m 5s, read: 338.4 MiB/s, write: 93.6 MiB/s
INFO:  56% (54.9 GiB of 98.0 GiB) in 1m 8s, read: 718.7 MiB/s, write: 42.7 MiB/s
INFO:  58% (56.9 GiB of 98.0 GiB) in 1m 11s, read: 674.7 MiB/s, write: 16.0 MiB/s
INFO:  66% (64.9 GiB of 98.0 GiB) in 1m 14s, read: 2.7 GiB/s, write: 85.3 MiB/s
INFO:  70% (69.5 GiB of 98.0 GiB) in 1m 17s, read: 1.5 GiB/s, write: 20.0 MiB/s
INFO:  80% (78.5 GiB of 98.0 GiB) in 1m 20s, read: 3.0 GiB/s, write: 58.7 MiB/s
INFO:  86% (84.9 GiB of 98.0 GiB) in 1m 23s, read: 2.1 GiB/s, write: 17.3 MiB/s
INFO:  93% (91.4 GiB of 98.0 GiB) in 1m 26s, read: 2.2 GiB/s, write: 45.3 MiB/s
INFO:  98% (96.1 GiB of 98.0 GiB) in 1m 29s, read: 1.6 GiB/s, write: 92.0 MiB/s
INFO: 100% (98.0 GiB of 98.0 GiB) in 1m 36s, read: 284.1 MiB/s, write: 50.9 MiB/s
INFO: backup is sparse: 93.19 GiB (95%) total zero data
INFO: backup was done incrementally, reused 93.25 GiB (95%)
INFO: transferred 98.00 GiB in 101 seconds (993.6 MiB/s)
INFO: adding notes to backup
INFO: Finished Backup of VM 99902 (00:01:45)
INFO: Backup finished at 2024-12-11 17:47:22
INFO: Backup job finished successfully
TASK OK

I have 1 Gbps upload, so this is approaching the theoretical max for me. That said, I think part of why it looks so good is that 95 percent of the total backup for that VM is empty, since it's a sparse backup of thinly provisioned VirtIO SCSI drives.
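
If anyone wants to sanity-check how "empty" a thin disk really is on the source side, this is what I looked at. Assumption on my part: the VM disks are ZFS zvols (the 64k/16k store names are mine); on a different storage backend the check would look different:

Code:
# Compare provisioned size vs. actually-allocated space for this VM's zvols
zfs list -t volume -o name,volsize,used,referenced | grep vm-99902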
 
Separately from the above, I've noticed that my instance is pruning once every hour. I assume that's intended, to keep the space usage as low as possible?

Also: I'm getting an email notification once an hour, for everything related to pruning (including prune success).
Is there any reason I shouldn't reduce that to the default errors-only setting? I don't mind the emails, but the popup notifications on my phone are a bit less than ideal. :)
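
For reference, on my own PBS this lives in the datastore's notification options (Datastore -> Options in the web UI). I'm not sure how much of it we can change ourselves on the hosted instance, and the option syntax below is from memory, so treat it as a sketch rather than a recipe:

Code:
# Sketch: only send mail for prune/GC/verify when something actually fails
# (datastore name is from my own setup; double-check the 'notify' format against your PBS version)
proxmox-backup-manager datastore update DB2685_AndromedaCluster150 \
    --notify "prune=error,gc=error,verify=error"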
 
@tuxis Just signed up for a free account, but creating a PBS failed because "In dit land kan er op dit moment geen gratis PBS afgenomen kan worden" ("A free PBS cannot currently be obtained in this country").
 
Just registered for the free version; although I do understand Dutch, I selected German as the UI language (top-right corner) and Germany as my country of origin.

I can confirm that creating a free PBS service in the Netherlands is currently not available, but I was able to select Germany as the hosting location instead, and that works.
 

I'm from Belgium; I think the offer is only available for the Netherlands and Germany?
 
I was finally ready to move to the paid service, but it seems unavailable. :confused:
I can only select the Netherlands, and it says "no pbs can be purchased in this country".

 
Currently, creating free PBS accounts is not possible, and sometimes paid ones aren't available either for non-German customers. We are running into some performance issues and have to expand storage space. I hope you understand that we have to make sure performance is good, even for free accounts.
We will keep you informed.

German companies can create accounts as usual.
 
Hello!

First of all, thank you for offering a free plan like this. For home server users it is certainly a very useful addition. But I have a problem and can't find a solution. I synchronize my backups to your cloud with a push sync job every day at 01:30. No backup, prune, GC, or other sync jobs are running at this time. Nevertheless, I get the following error message every time:

Code:
2024-12-18T01:45:53+01:00: Percentage done: 82.14% (11/14 groups, 1/2 snapshots in group #12)
2024-12-18T01:45:53+01:00: Percentage done: 85.71% (12/14 groups)
2024-12-18T01:45:53+01:00: Encountered errors: unable to acquire lock on backup group directory "/mnt/datastore/backup001/FL******/Cloud-Backup/ct/113" - another backup is already running
2024-12-18T01:45:53+01:00: Failed to push group ct/113 to remote!
2024-12-18T01:45:53+01:00: skipped: 12 snapshot(s) (2024-12-11T17:14:51Z .. 2024-12-16T23:13:30Z) - older than the newest snapshot present on sync target
2024-12-18T01:45:53+01:00: skipped: 2 snapshot(s) (2024-12-17T05:13:47Z .. 2024-12-17T11:13:35Z) - due to transfer-last

The next day it happens with a different CT. Is this because I am running 3.3.2 locally and you are running 3.3.0 in the cloud?

Here are the local tasks:

[screenshot: local PBS task schedule]

Here are the Tuxis cloud tasks:

[screenshot: Tuxis PBS task schedule]
I would appreciate any help.

Regards
Daniel
 
Hi Daniel,

At first glance it looks like a simple scheduling issue with overlapping jobs, given the 'another backup is already running' part.
We'd be happy to assist you in resolving this error; could you please send us an email at support@tuxis.nl?
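
In the meantime, one generic thing you can check on the local (pushing) side is whether anything else was actually touching that group around 01:30. This is just a quick sketch, nothing specific to our setup:

Code:
# List recent tasks on the local PBS and look for anything overlapping the push window
proxmox-backup-manager task list | grep -Ei 'backup|sync|prune|verif|garbage'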
 
We're currently running into two issues. One of them will hopefully be resolved tomorrow: a lack of incoming hardware. We've had some hardware on backorder (and some incorrect deliveries), which should come in this week.

The other is a performance issue. The service is very popular, and the number of accounts has been growing pretty fast over the last few months. It seems, not surprisingly, that more than 1000 datastores is somewhat of an issue for PBS. We are obviously deploying PBS differently than most PBS users, so we're working on improving PBS performance with a larger number of datastores.

For the time being, performance would improve if all users authenticated with the PBS using API keys. That authentication method is much 'cheaper' than username/password authentication (we're handling about 240 authentication requests per second on the busiest server).

I'll get someone to document the preferred method of setting that up. In the meantime, a little more patience as we're awaiting the courier with our fresh new set of hardware.



PS: I've decided to start posting on my own account, not the tuxis account.
 
I set mine up with an API token when I got my account a couple of months ago to match my home PBS config, and have been getting stable, fast connections with it. I think, overall, it's the way PBS itself prefers to work. Using user/pass authentication has always just seemed to take a bit more effort to get right.

The docs cover setting it up, but focus on using the CLI, so you kind of have to work backwards from that to use the GUI:
https://pbs.proxmox.com/docs/user-management.html#api-tokens

It's not hard once you've done it once, but it's definitely not intuitive the first time …
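
For anyone who can't wait for that guide, the rough CLI steps from that docs page look something like this. The user, token, and datastore names are from my own setup, so substitute yours; on the hosted instance you'd do the equivalent through the web UI (Access Control), and Tuxis may already handle the permissions part for you:

Code:
# 1) Create an API token for your PBS user (the secret is shown once; save it)
proxmox-backup-manager user generate-token DB2685@pbs AndromedaClusterToken

# 2) Give the token backup rights on your datastore
proxmox-backup-manager acl update /datastore/DB2685_AndromedaCluster150 DatastoreBackup \
    --auth-id 'DB2685@pbs!AndromedaClusterToken'

# 3) On the PVE side, use the token in the repository string, with the token secret as the password:
#    DB2685@pbs!AndromedaClusterToken@pbs005.tuxis.nl:DB2685_AndromedaCluster150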
Thanks for putting a guide together! :)
 

I fully understand that the holiday season and hardware shortages can cause delays. However, tomorrow will mark one month that I've been waiting for the paid service to become available.

I kindly ask if there is any update on the status of my request. If it's not possible to arrange the service soon, I may need to explore other solutions.
Thanks
 
