Proxmox Backup Server 4.0 BETA released!

Hello and thank you for everything.

I have a MinIO S3 server and I cannot get the endpoint to work via PBS; I get the error "Error: head object failed":

proxmox-backup-manager s3 check S3-Minio-Test test-pbs --store-prefix /

Bash:
0: client error (Connect)
1: error:0A0000C6:SSL routines:tls_get_more_records:packet length too long:../ssl/record/methods/tls_common.c:662:, error:0A000139:SSL routines::record layer failure:../ssl/record/rec_layer_s3.c:696:
2: error:0A0000C6:SSL routines:tls_get_more_records:packet length too long:../ssl/record/methods/tls_common.c:662:, error:0A000139:SSL routines::record layer failure:../ssl/record/rec_layer_s3.c:696

If I run a test from the PBS host with awscli, I can see the buckets:


Bash:
aws --endpoint-url http://minio02-dev.xxxxxx:9000 s3 ls

iso-images
test-pbs

The PBS s3.cfg configuration file:

Bash:
vi s3.cfg
s3-endpoint: S3-Minio-Test
    access-key adminuser
    endpoint minio01.plbs.info
    port 9000
    region us-east-1
    secret-key Password@Minio!

Do you have any ideas?
Thanks very much!
Patrick.
 
Does PBS 4 support backing up PVE node configs?
As in, can you use the proxmox-backup-client for doing that yourself? Yes, since always; and as you can also FUSE-mount backups, manual restoration is even relatively convenient.

But as in, is there some fully integrated method to nicely configure that and handle restore via API/UI just like Proxmox Mail Gateway has? No, not yet.

FWIW, PBS will probably not need to change at all (or much) here; the implementation for this will have to happen in PVE. But while it sounds relatively simple, people often mean very different things, from full 1:1 carbon-copy restoration to selective config restoration to an "undo my mess to just make it work again™", and especially the last two are hard to get right in such a way that the restore actually results in a valid setup again. Anyhow, not directly related to PBS, but FWIW a colleague recently started evaluating this problem a bit more actively; as mentioned, it's not a trivial thing, so do not expect results very soon.
 
It's great to hear this is being looked at, @t.lamprecht. It's something I've been wanting for a while. :)
As in, can you use the proxmox-backup-client for doing that yourself? Yes, since always; and as you can also FUSE-mount backups, manual restoration is even relatively convenient.
Is there a guide somewhere for doing this? I've never used the backup client outside of PVE, and have little experience with manually configuring FUSE.

But as in, is there some fully integrated method to nicely configure that and handle restore via API/UI just like Proxmox Mail Gateway has? No, not yet.

FWIW, PBS will probably not need to change at all (or much) here; the implementation for this will have to happen in PVE. But while it sounds relatively simple, people often mean very different things, from full 1:1 carbon-copy restoration
I should have been more explicit. This is what I was thinking of, covering the scenario where the easiest/fastest fix for an issue is just to clean install PVE and, during the install, connect to a PBS server to replicate a complete configuration.

I'm envisioning something similar to how OPNSense, for instance, asks if you want to import a full (optionally encrypted) configuration file during installation, and when installation is done, your old instance is just back like nothing ever happened.

The biggest difference versus, say, OPNSense, is that you'd need to authenticate to the PBS server somehow. OPNSense just uses an XML file on a thumb drive.

to selective config restoration
This sounds a bit like OPNSense/BSD boot environments. I think this is possible in Linux with ZFS snapshots, dependent on the user taking snapshots of PVE in whatever configured state they want to go back to?

Definitely a different problem than the above full system restore.

to an "undo my mess to just make it work again™", and especially the last two are hard to get right in such a way that it is likely that the restore actually results in a valid setup again. Anyhow, not directly related to PBS, but FWIW a colleague recently started to evaluating this problem a bit more actively, but as mentioned, not a trivial thing, so do not expect results very soon.

This sounds like magic. :P Unfortunately, it would also likely be constrained to fixing a short list of things like, e.g., restoring web GUI access or password reset, right?

We all do too much custom config to our PVE setups to make automatic repair an easy task.
 
Hey All,

Thanks for the release, been waiting for S3 backups for a while.

I successfully configured Wasabi and a sync job from local to Wasabi. However, do note that I had to symlink a storage folder inside my ZFS pool, where the local datastore is located, because my root drive doesn't have 2 TB to store a local copy; I noticed that if I didn't symlink it and just tried to add a datastore inside the ZFS pool, it complained that it doesn't like nested datastores.

Makes sense, but when I have no other option than to symlink to the nested datastore, because there's nowhere else to store a local copy of all the data, it works as a workaround; it obviously duplicates the statistics, though.

Code:
lrwxrwxrwx 1 backup backup 36 Jul 31 13:48 wasabi -> /mnt/datastore/backup-storage/wasabi

 
Is there a guide somewhere for doing this? I've never used the backup client outside of PVE, and have little experience with manually configuring FUSE.
No step-by-step guide, but the reference docs do contain the relevant bits.
For using the client to access a PBS in general see: https://pbs.proxmox.com/docs/backup-client.html#backup-repository-locations

For mounting an archive see: https://pbs.proxmox.com/docs/backup-client.html#mounting-of-archives-via-fuse
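A minimal sketch of both steps (the repository, snapshot name and paths are placeholders, not taken from this thread):

Bash:
# Back up the node's /etc as a file-level (pxar) archive.
proxmox-backup-client backup etc.pxar:/etc --repository 'user@pbs@pbs.example:store1'

# List snapshots, then FUSE-mount the archive for manual restoration.
proxmox-backup-client snapshot list --repository 'user@pbs@pbs.example:store1'
proxmox-backup-client mount host/mynode/2025-01-01T00:00:00Z etc.pxar /mnt/restore --repository 'user@pbs@pbs.example:store1'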
I should have been more explicit. This is what I was thinking of, covering the scenario where the easiest/fastest fix for an issue is just to clean install PVE and, during the install, connect to a PBS server to replicate a complete configuration.

I'm envisioning something similar to how OPNSense, for instance, asks if you want to import a full (optionally encrypted) configuration file during installation, and when installation is done, your old instance is just back like nothing ever happened.

The biggest difference versus, say, OPNSense, is that you'd need to authenticate to the PBS server somehow. OPNSense just uses an XML file on a thumb drive.
This will probably be the simplest (relatively speaking!) variant to implement. One idea is to adapt the installation ISO to allow providing a PBS server, credentials and a backup definition, either manually or through the auto-installer preparation, and then pull the image directly from there; but this is not fully fleshed out yet.
This sounds a bit like OPNSense/BSD boot environments. I think this is possible in Linux with ZFS snapshots, dependent on the user taking snapshots of PVE in whatever configured state they want to go back to?
Something a bit more integrated, like network config (needs the system to still have some basic network access to the PBS), or corosync config and RRD metrics, but yeah, this is much harder to do sanely. It might help for scenarios where the virtual guests' data is all on some external/separate storage and the host gets freshly reinstalled, after which storage, virtual guest and user/ACL configs should be recovered 1:1. But again, nothing is really thought out yet.
Unfortunately, it would also likely be constrained to fixing a short list of things like, e.g., restoring web GUI access or password reset, right?
Yeah, if at all; that one is probably the furthest away. Even if it can ever happen, I wouldn't hold my breath for it (at least for a real magic variant ;))
 
Makes sense, but when I have no other option than to symlink to the nested datastore, because there's nowhere else to store a local copy of all the data, it works as a workaround; it obviously duplicates the statistics, though.
A simple variant that would work on any FS would be to move the existing one into a subdir; but as ZFS can do a bit more than a simple FS, you could also create a new ZFS dataset backed by the same ZFS pool, but with a different mountpoint, instead.

Nested datastores are really dangerous. We have some checks to avoid the most obvious and worst things, but any missed edge case in those checks might throw off GC and remove stuff from the "inner" datastore; that's why we do not really support it.
 
A simple variant that would work on any FS would be to move the existing one into a subdir; but as ZFS can do a bit more than a simple FS, you could also create a new ZFS dataset backed by the same ZFS pool, but with a different mountpoint, instead.

Nested datastores are really dangerous. We have some checks to avoid the most obvious and worst things, but any missed edge case in those checks might throw off GC and remove stuff from the "inner" datastore; that's why we do not really support it.
If I move the existing Datastore into a subdirectory, how do I update the GUI to reflect that change?
 
Hi,
Makes sense, but when I have no other option than to symlink to the nested datastore, because there's nowhere else to store a local copy of all the data, it works as a workaround; it obviously duplicates the statistics, though.
just to clarify, the local cache for an S3-backed datastore does not need to have the full size of the backup snapshots to store, and most importantly it cannot be a pre-existing datastore! It is a cache and can be limited. I would recommend using either a dedicated small disk or partition, or, if you already use ZFS, a dedicated dataset with a quota (64G to 128G should already be plenty). I will send a patch for the documentation to make this clear.
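For the ZFS variant, a minimal sketch (pool name, mountpoint and quota size are placeholders):

Bash:
# Dedicated, quota-limited dataset on the existing pool to use as the S3 cache.
zfs create -o quota=128G -o mountpoint=/mnt/pbs-s3-cache rpool/pbs-s3-cache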

If I move the existing Datastore into a subdirectory, how do I update the GUI to reflect that change?
First off, you must set both datastores into the offline maintenance mode, so that no operations are performed on them. Then you can move the sub-directory to e.g. a new dataset with a dedicated mountpoint, and adapt the datastore's path in /etc/proxmox-backup/datastore.cfg to reflect this change. Once this is done, you can clear the maintenance mode for both datastores again.
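A rough sketch of those steps ("store1" stands in for your datastore names):

Bash:
# Set the datastore into the offline maintenance mode.
proxmox-backup-manager datastore update store1 --maintenance-mode offline

# ... move the data and adapt the path in /etc/proxmox-backup/datastore.cfg ...

# Clear the maintenance mode again.
proxmox-backup-manager datastore update store1 --delete maintenance-mode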
 
Hi,

just to clarify, the local cache for an S3-backed datastore does not need to have the full size of the backup snapshots to store, and most importantly it cannot be a pre-existing datastore! It is a cache and can be limited. I would recommend using either a dedicated small disk or partition, or, if you already use ZFS, a dedicated dataset with a quota (64G to 128G should already be plenty). I will send a patch for the documentation to make this clear.


First off, you must set both datastores into the offline maintenance mode, so that no operations are performed on them. Then you can move the sub-directory to e.g. a new dataset with a dedicated mountpoint, and adapt the datastore's path in /etc/proxmox-backup/datastore.cfg to reflect this change. Once this is done, you can clear the maintenance mode for both datastores again.
Thanks for the advice, I'll rectify this straight away.
 
My installation is as follows:

PVE(local)->PBS(local)->PBS(VPS)->S3

Could I now eliminate the PBS on the VPS if I connect the S3 storage directly to my local PBS? Would the PBS on the VPS then still be of any benefit, or am I missing something?
 
My installation is as follows:

PVE(local)->PBS(local)->PBS(VPS)->S3

Could I now eliminate the PBS on the VPS if I connect the S3 storage directly to my local PBS? Would the PBS on the VPS then still be of any benefit, or am I missing something?
An attacker who took over your local PBS could also access your S3 datastore. In contrast, given fitting permissions and firewall rules, it's possible to set up a remote PBS that is allowed to do pull syncs from your local PBS, without allowing the local PBS to connect to the remote one.
 
My installation is as follows:

PVE(local)->PBS(local)->PBS(VPS)->S3

Could I now eliminate the PBS on the VPS if I connect the S3 storage directly to my local PBS? Would the PBS on the VPS then still be of any benefit, or am I missing something?
I have the same question, but I think we cannot eliminate the VPS. I think the only way to make the backups inaccessible to an attacker is to have two PBS systems, where the remote PBS performs a pull of the backups from the local PBS.
 
I have the same question, but I think we cannot eliminate the VPS. I think the only way to make the backups inaccessible to an attacker is to have two PBS systems, where the remote PBS performs a pull of the backups from the local PBS.
Exactly. The idea is basically that the remote PBS is not accessible at all except over a management VPN that you only access from the admin's PC/notebook. Since the pull sync doesn't need the remote PBS to be accessible from the local infrastructure, this makes life quite hard for ransomware. Even more so if you set permissions such that only the prune jobs on the remote PBS or the admin can remove/alter the backups.
If you ever need to restore, you would create a read-only user/API token, temporarily create a VPN connection between your local network and the remote PBS, restore everything, and afterwards remove the access again.
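A rough sketch of such a setup on the remote PBS (host name, credentials, fingerprint and store names are placeholders):

Bash:
# Register the local PBS as a remote, using read-only credentials.
proxmox-backup-manager remote create local-pbs \
  --host pbs.local.example \
  --auth-id 'sync@pbs' \
  --password 'SECRET' \
  --fingerprint '64:d3:...'

# Pull backups from the local PBS into the remote datastore on a schedule.
proxmox-backup-manager sync-job create pull-local \
  --remote local-pbs \
  --remote-store store1 \
  --store offsite \
  --schedule 'daily'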