I think originally the reasoning was that it's not useful for the client to store that information. that it doesn't render correctly for you seems strange and is likely a bug - for me it does render correctly.
what does "pvesh get /cluster/acme/account/test-LEv2 --output-format json-pretty" print?
the information is stripped when storing, yes. the contact field is not used by the ACME client, but by the CA. you can always get the account info from the CA using the key, which is stored.
@LKo could you try to trigger the issue with "proxmox-backup-client benchmark --repository ..." ? that will (after giving your CPU a little exercise) upload a stream of fake chunks for 5s and wait until they are all processed. if you can reproduce it using that (e.g., in a loop until it...
does your migration network use MTU 9000? are you on the (default) 6.17 kernel? if the answer to both is yes, could you try booting into the 6.14 kernel and see if the problem goes away in that case?
that sounds like you downloaded the archive keyring package to the wrong location before installing PVE (on top of Debian?).. you can safely delete the two files you mentioned in any case, they have no business being in that folder ;)
like I said - if you are worried about this scenario, you can give PDM read-only access and use it as a dashboard. if you want it to (actively) manage PVE systems, you need to give it the privileges to do so, and secure it/restrict access to it accordingly.
how did that second command error out? the error is not contained in the strace because of the string censoring ;) could you run strace on the first command as well? thanks!
I suspect the erroring out was caused by:
1175:ioctl(8, BLKZEROOUT, [2147479552, 3584]) = -1 EINVAL (Invalid argument)...
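not from the original report, but a quick way to sanity-check that theory: one common cause of EINVAL from BLKZEROOUT is an offset or length that is not aligned to the device's logical block size (another is the range running past the end of the device). assuming a 4096-byte logical sector size (an assumption - check yours with `blockdev --getss`), the length from the strace line above fails the alignment check:

```python
# hypothetical sanity check: BLKZEROOUT tends to return EINVAL if offset
# or length is not a multiple of the device's logical block size
def blkzeroout_aligned(offset: int, length: int, block_size: int) -> bool:
    return offset % block_size == 0 and length % block_size == 0

# values from the strace line above
offset, length = 2147479552, 3584

print(blkzeroout_aligned(offset, length, 512))   # True - fine on 512b sectors
print(blkzeroout_aligned(offset, length, 4096))  # False - EINVAL on 4k sectors
```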
if your PDM is compromised, and you login using your PVE admin credentials (under your proposed scheme), the same is true. if you are worried about this risk, don't use your PDM for administration at all, just use it as a dashboard.
to shed some further light on the internals of PBS: blobs never reference chunks. blobs and chunks are the same thing format-wise, just used for different purposes:
- chunk: blob stored in the chunk store, to be referenced by indices via its digest
- blob: blob stored in a metadata dir...
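nothing below is actual PBS code - just an illustrative sketch of the "referenced by indices via its digest" part: chunks are content-addressed by their SHA-256 digest, and an index is conceptually just an ordered list of such digests, which is also what makes deduplication fall out for free:

```python
import hashlib

def chunk_digest(data: bytes) -> str:
    # chunks are content-addressed by the SHA-256 of their contents
    return hashlib.sha256(data).hexdigest()

# conceptually, the chunk store maps digest -> chunk data,
# and an index is the ordered digests of a file's chunks
chunks = [b"first chunk", b"second chunk", b"first chunk"]
store = {chunk_digest(c): c for c in chunks}   # identical chunks stored once
index = [chunk_digest(c) for c in chunks]

print(len(store))   # 2 - the duplicate chunk deduplicates in the store
print(len(index))   # 3 - the index still references it twice
```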
please post a backup task log.
client-side deduplication can only happen if there is a previous snapshot on the backup target (datastore+namespace+group!) that is not in a verification-failed state. based on your description I suspect you have created new backup groups in a new empty...
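to make the condition explicit, here's a hypothetical sketch (names are mine, not from the PBS code) of when the client can deduplicate against a previous snapshot:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Snapshot:
    # None = not yet verified, otherwise "ok" or "failed"
    verify_state: Optional[str]

def can_dedup(previous: Optional[Snapshot]) -> bool:
    # client-side dedup needs a previous snapshot in the same
    # datastore+namespace+group whose verification has not failed
    return previous is not None and previous.verify_state != "failed"

print(can_dedup(None))                  # False - new/empty group
print(can_dedup(Snapshot("failed")))    # False - last verification failed
print(can_dedup(Snapshot("ok")))        # True - known chunks can be reused
```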
that would just be more complicated, but still require PDM to have all that access to PVE.. it would also make certain optimizations impossible, like collecting metrics/tasks/status/.. once and exposing it to different PDM users (with filtering).
sorry to hear that. if the VM is still around, you could run another backup, that might re-upload some of the missing chunks and thus "heal" some of the backups.
this would require a reproducer first. but yes, it is possible the issue is triggered by a certain network workload/traffic pattern/.., which might very well be specific to our code (or otherwise very rare). we can't make code more robust if we don't know what the actual cause of the problem is.