To expand on what Thomas already posted: you obviously need to point your clients (including PVE) at the namespace. To actually "move" (copy) the backup groups and snapshots, you can use a sync job (with the remote pointing to the same PBS instance) or a one-off "proxmox-backup-manager pull". When moving from datastore A's root namespace to datastore A's namespace "foo", only the metadata (manifest, indices, and so on) is copied and the chunks are re-used. When moving from datastore A to datastore B, the chunks not already contained in B will of course be copied as well, no matter whether namespaces are involved on either side.
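For reference, such a one-off pull within the same instance might look roughly like this; the remote name "Local", the namespace, and the option names are assumptions based on my setup, so check "proxmox-backup-manager pull --help" on your version:

```shell
# One-off pull within the same PBS instance (sketch): "Local" is a
# remote that points back at this very PBS, namespace is an example.
# Metadata is copied; chunks already in the datastore are re-used.
proxmox-backup-manager pull Local PBS_DS1 PBS_DS1 \
    --ns 'MainCluster/Weekly' --max-depth 0
```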
Did that and moved everything from datastore "PBS_DS1" namespace "Root" (stores my weekly/monthly/annually scheduled + protected manual stop-mode backups) to datastore "PBS_DS1" namespace "MainCluster/Weekly".
Also moved everything from datastore "PBS_DS2" namespace "Root" (stores my daily snapshot mode backups) to datastore "PBS_DS1" namespace "MainCluster/Daily".
Used a "local" sync job for that. If anyone else needs to do this:
1.) You need to create a "Remote" first. Just add the fingerprint, IP, and credentials of your local PBS as a new "Remote" and call it something like "Local". It would be nice if the "Add Sync Job" GUI already offered "Local" as a remote PBS without needing to create such a Remote first. I guess PBS itself should know its own IP, fingerprint, and so on (but maybe not the password?).
2.) Use a "max-depth" of 0 so that, when syncing within the same datastore, you only sync the groups of the root namespace and not also the child namespaces.
3.) Keep in mind that synced groups lose their comment/note and synced snapshots lose the protected flag.
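The steps above can also be done on the CLI; a rough sketch (host, fingerprint, token, and job ID below are placeholder values, adjust them to your setup):

```shell
# 1.) Create a "Remote" that points back at this very PBS instance.
#     Host, auth-id, password, and fingerprint are placeholders.
proxmox-backup-manager remote create Local \
    --host 192.0.2.10 \
    --auth-id 'sync@pbs!mytoken' \
    --password 'TOKEN_SECRET' \
    --fingerprint 'AA:BB:...:FF'

# 2.) Create a sync job pulling the root namespace of PBS_DS1 into the
#     "MainCluster/Weekly" namespace of the same datastore, with
#     max-depth 0 so child namespaces are not synced along.
proxmox-backup-manager sync-job create local-weekly \
    --store PBS_DS1 --remote Local --remote-store PBS_DS1 \
    --ns 'MainCluster/Weekly' --max-depth 0
```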
But now I have two questions:
1.) Now I have every group/snapshot twice: once in the root namespaces and once in my custom namespaces. I guess I can just delete all groups in the root namespaces now without losing any chunks, so that my groups/snapshots are moved rather than copied? (Except of course for the backups I synced from "PBS_DS2" to "PBS_DS1", where I don't want any chunks left in "PBS_DS2" at all so I can delete that datastore.)
2.) I also have a third namespace "MainCluster/Manual" where I want to store my manual backups. Right now they are mixed in with my automatic weekly backups in the "MainCluster/Weekly" namespace, because previously I stored them together with the weekly backups so I wouldn't have to create a third datastore for them (and waste even more space). The sync job has group filters, but filtering by group won't help much, as the same groups contain both "automatic weekly" and "manual" snapshots. Would it be possible to use the regex group filters to only sync specific snapshots like "ct/136/2022-02-23T19:59:59Z", or is it only possible to match groups (as the name "group filters" would imply)? All my manual snapshots have the protected flag (at least in the root namespace, where that flag wasn't removed), but I guess it's not possible to sync only (or exclude) snapshots that are protected.
I guess the easiest way would be to sync my (not yet deleted) snapshots again from datastore "PBS_DS1" namespace "Root" to "PBS_DS1" namespace "MainCluster/Manual", so that I have the same groups+snapshots in "MainCluster/Manual" and "MainCluster/Weekly", and then just manually delete all weekly backups from the "MainCluster/Manual" namespace and all manual backups from the "MainCluster/Weekly" namespace?
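For what it's worth, as far as I can tell from the docs the group-filter syntax only ever names a group, never a single snapshot; the three documented forms look like this (remote/datastore/namespace names are again just examples from my setup):

```shell
# The three group-filter forms -- all of them match whole backup
# groups (e.g. "ct/136"), not individual snapshots; multiple filters
# are OR'ed together:
proxmox-backup-manager pull Local PBS_DS1 PBS_DS1 \
    --ns 'MainCluster/Manual' \
    --group-filter 'type:ct' \
    --group-filter 'group:ct/136' \
    --group-filter 'regex:^ct/13[0-9]$'
```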
Edit:
Both sync jobs finished with "OK" and it looks like all groups and snapshots got synced, but the task log is full of errors like these:
Code:
2022-05-18T23:59:29+02:00: starting new backup reader datastore 'PBS_DS2': "/mnt/pbs2"
2022-05-18T23:59:29+02:00: protocol upgrade done
2022-05-18T23:59:29+02:00: GET /download
2022-05-18T23:59:29+02:00: download "/mnt/pbs2/vm/107/2022-05-16T03:34:40Z/index.json.blob"
2022-05-18T23:59:29+02:00: GET /download
2022-05-18T23:59:29+02:00: download "/mnt/pbs2/vm/107/2022-05-16T03:34:40Z/qemu-server.conf.blob"
2022-05-18T23:59:29+02:00: GET /download
2022-05-18T23:59:29+02:00: download "/mnt/pbs2/vm/107/2022-05-16T03:34:40Z/fw.conf.blob"
2022-05-18T23:59:29+02:00: GET /download
2022-05-18T23:59:29+02:00: download "/mnt/pbs2/vm/107/2022-05-16T03:34:40Z/drive-scsi1.img.fidx"
2022-05-18T23:59:29+02:00: register chunks in 'drive-scsi1.img.fidx' as downloadable.
2022-05-18T23:59:29+02:00: GET /download
2022-05-18T23:59:29+02:00: download "/mnt/pbs2/vm/107/2022-05-16T03:34:40Z/drive-scsi0.img.fidx"
2022-05-18T23:59:29+02:00: register chunks in 'drive-scsi0.img.fidx' as downloadable.
2022-05-18T23:59:29+02:00: GET /download
2022-05-18T23:59:29+02:00: download "/mnt/pbs2/vm/107/2022-05-16T03:34:40Z/client.log.blob"
2022-05-18T23:59:29+02:00: TASK ERROR: connection error: not connected
Code:
2022-05-18T23:33:54+02:00: starting new backup reader datastore 'PBS_DS1': "/mnt/pbs"
2022-05-18T23:33:54+02:00: protocol upgrade done
2022-05-18T23:33:54+02:00: GET /download
2022-05-18T23:33:54+02:00: download "/mnt/pbs/host/HypervisorBackup/2021-10-10T16:17:06Z/index.json.blob"
2022-05-18T23:33:54+02:00: GET /download
2022-05-18T23:33:54+02:00: download "/mnt/pbs/host/HypervisorBackup/2021-10-10T16:17:06Z/ata-INTEL_SSDSC2BA100G3_BTTV42250126100FGN.img.fidx"
2022-05-18T23:33:54+02:00: register chunks in 'ata-INTEL_SSDSC2BA100G3_BTTV42250126100FGN.img.fidx' as downloadable.
2022-05-18T23:34:14+02:00: GET /download
2022-05-18T23:34:14+02:00: download "/mnt/pbs/host/HypervisorBackup/2021-10-10T16:17:06Z/ata-INTEL_SSDSC2BA100G3_BTTV417303VG100FGN.img.fidx"
2022-05-18T23:34:14+02:00: register chunks in 'ata-INTEL_SSDSC2BA100G3_BTTV417303VG100FGN.img.fidx' as downloadable.
2022-05-18T23:34:14+02:00: GET /download
2022-05-18T23:34:14+02:00: download "/mnt/pbs/host/HypervisorBackup/2021-10-10T16:17:06Z/client.log.blob"
2022-05-18T23:34:14+02:00: GET /download: 404 Not Found: open file "/mnt/pbs/host/HypervisorBackup/2021-10-10T16:17:06Z/client.log.blob" failed - not found
2022-05-18T23:34:14+02:00: TASK ERROR: connection error: not connected
Looks like the sync can read all the chunks but fails at the client.log.blob for each snapshot.
I guess I have some permission problem? I'm doing the sync using a token that has the permissions "Datastore.Backup" and "Datastore.Prune" for "/", "/access", "/datastore", "/remote", and "/system". All groups are owned by that token, the "Local Owner" of the sync job is that token too, and I also used that token for the "Remote" credentials. All source snapshots were verified recently and are OK.
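To double-check what the token is actually allowed to do, the effective permissions can be dumped per path (the auth-id below is a placeholder for my token):

```shell
# Show the effective permissions of the sync token on the datastore
# path (replace the auth-id with your actual token).
proxmox-backup-manager user permissions 'sync@pbs!mytoken' \
    --path /datastore/PBS_DS1
```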
If there is no easy answer I will open a new thread for it, so this thread doesn't go too far off-topic.