Same for me: removing the HAProxy config let me access the web GUI again. Weird behaviour. Why is this happening now, after the upgrade to 4? This was my problem too.
Quote: I am running Proxmox Backup 3.4. I want to install PVE 9 on a new server. Will I be able to restore backups coming from a PVE 8.4 environment using Backup 3.4? I would like to stay with 3.4 for a while but also want to have a PVE 9 test environment.
Yes, PVE 8.x is compatible as a client for PBS 4.0.
What exactly do you mean here? If it is co-installed, you cannot upgrade just PBS to 4.0 while staying on PVE 8.x. If PBS is running in a VM, then you can upgrade the PBS instance within the VM just fine.
Quote: I am running Proxmox Backup 3.4. I want to install PVE 9 on a new server. Will I be able to restore backups coming from a PVE 8.4 environment using Backup 3.4? I would like to stay with 3.4 for a while but also want to have a PVE 9 test environment.
Yes, PVE 9.0 is compatible with PBS 3.4, and restoring backups created on 8.4 works as well; see also the initial announcement:
Q: Is Proxmox Backup Server still compatible with older clients or Proxmox VE releases?
A: We are actively testing the compatibility of all the major versions currently supported, including the previous one. This means that you can safely back up from Proxmox VE 8 to Proxmox Backup Server 4, or from Proxmox VE 9 to Proxmox Backup Server 3. However, full compatibility with major client versions that are two or more releases apart, like for example Proxmox VE 7 based on Debian 11 Bullseye and Proxmox Backup Server 4 based on Debian 13 Trixie, is supported on a best-effort basis only.
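The policy above (adjacent major versions are actively tested, a gap of two or more is best-effort) can be sketched as a tiny helper. This is purely illustrative and not part of any Proxmox tooling; it assumes the contemporary release pairs named in the announcement (PVE 9 with PBS 4, so PVE 8 with PBS 3, PVE 7 with PBS 2):

```python
# Contemporary release pairs per the announcement: PVE 9 <-> PBS 4,
# PVE 8 <-> PBS 3, PVE 7 <-> PBS 2, i.e. an offset of 5 between majors.
PVE_PBS_OFFSET = 5

def compat_level(pve_major: int, pbs_major: int) -> str:
    """Classify a PVE-client / PBS-server pairing per the stated policy.

    The current and previous major releases are actively tested; two or
    more majors apart is supported on a best-effort basis only.
    """
    gap = abs(pve_major - (pbs_major + PVE_PBS_OFFSET))
    return "tested" if gap <= 1 else "best-effort"
```

Checking it against the announcement's own examples: PVE 8 against PBS 4 and PVE 9 against PBS 3 come out as "tested", while PVE 7 against PBS 4 comes out as "best-effort".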
Hey there.
First of all, great release of PBS 4 and also of the new S3 feature. I was able to connect it successfully to an AWS S3 bucket, and it currently works smoothly for me.
However, I can't get it running on Synology C2 storage, which is also an S3-compatible storage and much cheaper than AWS S3. Trying to connect the C2 storage, I run into the following errors:
Code:
<?xml version='1.0' encoding='UTF-8'?>
<Error><Code>NotImplemented</Code><Message>Conditional object PUTs are not supported.</Message><RequestId>tx3145a330181342cf80f12-006894a8da</RequestId></Error>
<?xml version='1.0' encoding='UTF-8'?>
<Error><Code>NotImplemented</Code><Message>Conditional object PUTs are not supported.</Message><RequestId>txdeb0db15f8ef4f9dad49d-006894a8db</RequestId></Error>
<?xml version='1.0' encoding='UTF-8'?>
<Error><Code>NotImplemented</Code><Message>Conditional object PUTs are not supported.</Message><RequestId>tx313cf7a43b8f4fcaaaba4-006894a8db</RequestId></Error>
TASK ERROR: access time safety check failed: failed to upload chunk to s3 backend: chunk upload failed: unexpected status code 501 Not Implemented
Error: task failed (status access time safety check failed: failed to upload chunk to s3 backend: chunk upload failed: unexpected status code 501 Not Implemented)
Any idea, or are there any plans to support Synology C2?
Kind regards,
Dirk

Please try with the provider quirks set to Skip If-None-Match header, which you can find in the advanced settings of the S3 endpoint create/edit window.

Quote: Looking up-thread to the posts on ha-proxy, it looks like there might be an issue if your proxy is doing the equivalent of whatever the "check" flag does in ha-proxy.
I'm experiencing the web UI crash too after upgrading to 4. My PBS is a VM virtualized in PVE behind a Traefik reverse proxy.
Quote: Looking up-thread to the posts on ha-proxy, it looks like there might be an issue if your proxy is doing the equivalent of whatever the "check" flag does in ha-proxy.
This should be fixed with proxmox-backup-server 4.0.12-1, which was just uploaded to the pbs-test repo (some CDN nodes are still syncing, so it might take a few more minutes). At least it fixes one reproducer here, and in any case an issue that could happen on connection acceptance and that was seemingly much harder to hit in PBS 3.
See: https://forum.proxmox.com/threads/proxmox-backup-server-4-0-released.169306/post-789268
I'm not familiar with ha-proxy or Traefik, but maybe try to figure out what that failing ha-proxy config option does and see if you can map it to your Traefik config?
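For readers unfamiliar with the ha-proxy "check" flag discussed above: it enables active health checks, where the proxy periodically opens its own probe connections to the backend server, which is exactly the kind of extra connection traffic that seemed to trip PBS here. A minimal illustrative backend stanza (server name and address are placeholders, not from this thread):

```
backend pbs
    mode tcp
    # 'check' makes ha-proxy periodically open health-check
    # connections to the server on its own
    server pbs1 192.0.2.10:8007 check
```

If you suspect the same interaction with another proxy, look for its equivalent of active backend health checks (Traefik, for example, has its own health-check options per service).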
Quote: This should be fixed with proxmox-backup-server 4.0.12-1, which was just uploaded to the pbs-test repo [...]
How long does it take from the pbs-test repo to the community repo?
That version also contains a fix for OIDC realm login and the HttpOnly cookies authentication flow.
Quote: How long does it take from the pbs-test repo to the community repo?
There is no fixed timeline, but as the changes between the last version and this one are small (they only include three targeted fixes), and as we already got quite a bit of positive feedback validating those fixes, we fast-tracked the package updates and moved these changes already.
Short question:
In the PBS 3 to 4 upgrade wiki it is stated that the minimum free space on "/" should be 10 GB.
Can I upgrade with only 6 GB free space, or will I run into problems?

It might (!), but we did not really test that; it might work, and it depends a bit on what (additional) packages you have installed, so we cannot tell you anything for certain, this being untested and also potentially setup-specific.

Thanks for the quick reply!
Quote: There is no fixed timeline, but as the changes between the last version and this one are small [...]
Just got the update on the community repo. PBS is now working on Tailscale!
Hey Chris.
Quote: Please try with the provider quirks set to Skip If-None-Match header, which you can find in the advanced settings of the S3 endpoint create/edit window. That should fix this provider-specific limitation.
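To illustrate what the quirk changes: a client can send `If-None-Match: *` on a PUT so the server only creates the object if no object exists under that key yet; providers without conditional-write support answer 501 NotImplemented, as in the errors above, and the quirk simply omits the header. A minimal sketch of that decision; the header name is standard HTTP, but the function and the quirk flag name are illustrative, not the actual PBS configuration keys:

```python
def put_headers(quirks: set[str]) -> dict[str, str]:
    """Build request headers for an S3 object PUT (illustrative sketch).

    'If-None-Match: *' asks the server to create the object only if the
    key does not exist yet. Providers that answer such PUTs with
    501 NotImplemented need the header skipped via a provider quirk.
    """
    headers = {"Content-Type": "application/octet-stream"}
    if "skip-if-none-match" not in quirks:
        headers["If-None-Match"] = "*"
    return headers
```

With no quirks set, the conditional header is sent; with the skip quirk enabled, the PUT goes out unconditionally and such providers accept it.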
Hi,
Some relevant logs for the PBS team:
Code:
backup01 proxmox-backup-api[xxxx]: authentication failure; rhost=[::ffff:xxx.xxx.xxx.xxx]:xxxxx user=xxxx@myopenidrealm msg=password authentication is not implemented for OpenID realms
backup01 proxmox-backup-api[xxxx]: POST /api2/json/access/ticket: 401 Unauthorized: [client [::ffff:1xxx.xxx.xxx.xxx]:xxxxx] permission check failed.
Quote: So now, PBS is taking DOUBLE the memory usage at idle after a nightly backup occurs. It would stay at around 46% usage; now I sit at 93%. You can see that before the update it was almost at 50%; after a reboot, a backup runs and it sits at 93%. After a reboot it clears again until another backup runs. The backups are all doing as they have done, and my datastore is, and has been, ZFS on PBS. Nothing has changed in my environment with the exception of updating PBS. top doesn't really give me anything; the two top services are the proxmox-backup-proxy and -api services.
And why is this a problem? Unused RAM is wasted RAM. But you can limit the ZFS ARC cache:
Quote: And why is this a problem? Unused RAM is wasted RAM. But you can limit the ZFS ARC cache:
I'm not necessarily saying it's a problem, but it's an irregularity compared to how things normally ran; updating and then seeing noticeable differences is worth mentioning, which is why I posted it. It could also potentially crash other PBS setups out there if they run out of memory because of this.
https://pbs.proxmox.com/docs/sysadmin.html#limit-zfs-memory-usage
Limit ZFS memory usage
It is good to use at most 50 percent (which is the default) of the system memory for ZFS ARC, to prevent performance degradation of the host. Use your preferred editor to change the configuration in /etc/modprobe.d/zfs.conf and insert:
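The snippet the quoted docs insert at this point is the ARC size cap in /etc/modprobe.d/zfs.conf. The value below (8 GiB, given in bytes) is only an example; size it to your own system rather than copying it verbatim:

```
# Cap the ZFS ARC at 8 GiB (value is in bytes)
options zfs zfs_arc_max=8589934592
```

If the root filesystem is ZFS, the setting also needs to reach the initramfs: run `update-initramfs -u` after editing the file and reboot for it to take effect at boot.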