Proxmox Backup Server 2.2 available

t.lamprecht

Proxmox Staff Member
Is there already a GUI option (or CLI command) to change the namespace of existing backups?
Note that namespaces are really just directories on the datastore, so to move all groups from the root into a new namespace named foo you'd need to do:
Bash:
# change pwd to datastore root dir, e.g.:
cd /mnt/datastore
# create the namespace "foo" manually (or via the GUI, which doesn't need the ns/ prefix)
mkdir -p ns/foo
chown -R backup:backup ns
# move groups of type vm, ct and host
mv vm ct host ns/foo
# or a deeper namespace "foo/bar"
mkdir -p ns/foo/ns/bar
chown -R backup:backup ns

Note: this only works within the same datastore. Doing this between different datastores will cause breakage and must not be done this way! Use a normal sync job instead.
 

fabian

Proxmox Staff Member
OK, so I was adding new PBS storages in PVE pointing to the namespaces I created there. For example, I created a namespace "MyNodeA/daily". Would I need to put "Root/MyNodeA/daily" in PVE's "namespace" field when creating a new PBS storage, if there is no namespace check yet?

no, just MyNodeA/daily
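For illustration, a matching PVE storage entry in /etc/pve/storage.cfg could then look roughly like this; the storage name, server address, datastore, and fingerprint are placeholders, not values from this thread:

```
pbs: mynodea-daily
        server 192.0.2.10
        datastore PBS_DS1
        namespace MyNodeA/daily
        username backup@pbs
        fingerprint aa:bb:cc:dd:...
        content backup
```

Note that the namespace value is given without any "Root/" prefix and without the on-disk "ns/" directory prefix.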
 

dcsapak

Proxmox Staff Member
Note that namespaces are really just directories on the datastore, so to move all groups from the root into a new namespace named foo you'd need to do:
Bash:
# change pwd to datastore root dir, e.g.:
cd /mnt/datastore
# create the namespace "foo" manually (or via the GUI, which doesn't need the ns/ prefix)
mkdir -p ns/foo
# move groups of type vm, ct and host
mv vm ct host ns/foo
# or a deeper namespace "foo/bar"
mkdir -p ns/foo/ns/bar

Note: this only works within the same datastore. Doing this between different datastores will cause breakage and must not be done this way! Use a normal sync job instead.
Just to add: make sure that those folders belong to the 'backup' user and group, e.g. with 'chown'.
 

Dunuin

Famous Member
no, just MyNodeA/daily
OK, that's also what I thought and entered. But then the default text "Root" is a bit confusing.
and prune jobs with namespace support are on the roadmap
So namespace-based prunes won't work right now? Not even when disabling retention in PBS for the datastore, adding each namespace as its own PBS storage to PVE, and setting the retention there for the namespaced PBS storage?
 

t.lamprecht

Proxmox Staff Member
OK, that's also what I thought and entered. But then the default text "Root" is a bit confusing.

So namespace-based prunes won't work right now? Not even when disabling retention in PBS for the datastore, adding each namespace as its own PBS storage to PVE, and setting the retention there for the namespaced PBS storage?
Currently, the scheduled prune job of a datastore only prunes the root namespace, but this will be fixed in the next released version to cover all namespaces as a start. Manual prunes are namespace-aware.
 

fabian

Proxmox Staff Member
OK, that's also what I thought and entered. But then the default text "Root" is a bit confusing.
leaving the namespace empty means using the Root namespace ;)
So namespace-based prunes won't work right now? Not even when disabling retention in PBS for the datastore, adding each namespace as its own PBS storage to PVE, and setting the retention there for the namespaced PBS storage?
Pruning works, both in the PBS GUI (when inside a namespace, that namespace will be pruned; when pruning a group, that group will be pruned in whatever namespace it is in) and on the PVE side (if a storage is configured to use a namespace, any pruning triggered by PVE will also be limited to that namespace). The only thing that is missing is adapting a datastore's scheduled prune job to be configurable per namespace. Prune jobs are currently configured in datastore.cfg, so there can only be one per datastore; they need to be moved to a regular job type like sync/verify, then they can be configured for single namespaces, for the whole datastore including all namespaces, or for some sub-tree of the namespaces ;)
 

Dunuin

Famous Member
Currently, the scheduled prune job of a datastore only prunes the root namespace, but this will be fixed in the next released version to cover all namespaces as a start. Manual prunes are namespace-aware.
leaving the namespace empty means using the Root namespace ;)

Pruning works, both in the PBS GUI (when inside a namespace, that namespace will be pruned; when pruning a group, that group will be pruned in whatever namespace it is in) and on the PVE side (if a storage is configured to use a namespace, any pruning triggered by PVE will also be limited to that namespace). The only thing that is missing is adapting a datastore's scheduled prune job to be configurable per namespace. Prune jobs are currently configured in datastore.cfg, so there can only be one per datastore; they need to be moved to a regular job type like sync/verify, then they can be configured for single namespaces, for the whole datastore including all namespaces, or for some sub-tree of the namespaces ;)
So to summarize:
Manual prunes in PVE and PBS take namespaces into account, and so does backup retention configured in PVE if that PBS storage uses a namespace. Only the datastore's scheduled backup retention in PBS isn't working yet, because it only prunes the root namespace?

Just want to make sure my short-term daily backup job retention isn't pruning my long-term weekly backups again.
 

t.lamprecht

Proxmox Staff Member
Manual prunes in PVE and PBS take namespaces into account, and so does backup retention configured in PVE if that PBS storage uses a namespace. Only the datastore's scheduled backup retention in PBS isn't working yet, because it only prunes the root namespace?
Yes.
 

Dunuin

Famous Member
To expand on what Thomas already posted: you obviously need to point your clients (including PVE) at the namespace. To actually "move" (copy) the backup groups and snapshots, you can use a sync job (with the remote pointing to the same PBS instance) or a one-off proxmox-backup-manager pull. When moving from datastore A's root namespace to datastore A's namespace foo, only the metadata (manifest, indices and so on) will be copied; the chunks will be re-used. When moving from datastore A to datastore B, the chunks not already contained in B will of course be copied as well, no matter whether namespaces are involved on either side ;).
Did that and moved everything from datastore "PBS_DS1" namespace "Root" (stores my weekly/monthly/annual + protected manual stop-mode backups) to datastore "PBS_DS1" namespace "MainCluster/Weekly".

Also moved everything from datastore "PBS_DS2" namespace "Root" (stores my daily snapshot-mode backups) to datastore "PBS_DS1" namespace "MainCluster/Daily".

Used a "local" sync job for that. If anyone else needs to do this:
1.) You need to create a "Remote" first. Just add the fingerprint, IP, and credentials of your local PBS as a new "Remote" and call it something like "Local". It would be nice if the "add sync job" GUI already offered "Local" as a remote PBS without needing to create such a Remote first. I guess PBS itself should know its own IP, fingerprint and so on (but maybe not the password?).
2.) Use a "max-depth" of 0 so you only sync the groups of the root namespace and not the child namespaces too, if you want to sync within the same datastore.
3.) Keep in mind that synced groups lose their comment/note and snapshots lose the protected flag when doing a sync.
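For anyone preferring the CLI, the same setup can be sketched with proxmox-backup-manager. This is only a rough sketch: the host, token, job name, and schedule are placeholders, and the exact option names should be double-checked against the installed version's built-in help:

```shell
# 1.) add the local PBS itself as a "Remote"
#     (host, fingerprint, and token are placeholders)
proxmox-backup-manager remote create Local \
    --host 192.0.2.10 \
    --auth-id 'sync@pbs!mytoken' \
    --password 'secret' \
    --fingerprint 'aa:bb:cc:dd:...'

# 2.) pull the root namespace (max-depth 0, so no child namespaces)
#     into the target namespace of the same datastore
proxmox-backup-manager sync-job create move-weekly \
    --store PBS_DS1 --ns 'MainCluster/Weekly' \
    --remote Local --remote-store PBS_DS1 \
    --max-depth 0
```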

But now I've got two questions:
1.) Now I've got every group/snapshot twice: once in the root namespaces and once in my custom namespaces. I guess I can just delete all groups in the root namespaces now without losing any chunks, so that my groups/snapshots are moved and not copied (except of course for the backups that I synced from "PBS_DS2" to "PBS_DS1", where I don't want any chunks in "PBS_DS2" anymore so I can delete that datastore)?
2.) I've also got a third namespace "MainCluster/Manual" where I want to store my manual backups. Right now they are mixed with my automatic weekly backups in the MainCluster/Weekly namespace, because previously I stored them together with the weekly backups since I didn't want to create a third datastore for them (so as not to waste even more space). The sync job has group filters, but filtering by groups won't help much, as the same groups contain "automatic weekly" and "manual" snapshots. Would it be possible to use the regex group filters to only sync specific snapshots like "ct/136/2022-02-23T19:59:59Z", or is it only possible to match groups (as the name "group filters" would imply)? All my manual snapshots have the protected flag (at least in the root namespace, where that flag wasn't removed), but I guess it's not possible to sync only (or exclude) snapshots that are protected.
I guess the easiest way would be to sync my (not-yet-deleted) snapshots again from datastore "PBS_DS1" namespace "Root" to "PBS_DS1" namespace "MainCluster/Manual", so that I get the same groups+snapshots in "MainCluster/Manual" and "MainCluster/Weekly", and then just manually delete all weekly backups from the "MainCluster/Manual" namespace and all manual backups from the "MainCluster/Weekly" namespace?

Edit:
Both sync jobs finished with "ok" and it looks like all groups and snapshots got synced, but the task log is full of errors like these:
Code:
2022-05-18T23:59:29+02:00: starting new backup reader datastore 'PBS_DS2': "/mnt/pbs2"
2022-05-18T23:59:29+02:00: protocol upgrade done
2022-05-18T23:59:29+02:00: GET /download
2022-05-18T23:59:29+02:00: download "/mnt/pbs2/vm/107/2022-05-16T03:34:40Z/index.json.blob"
2022-05-18T23:59:29+02:00: GET /download
2022-05-18T23:59:29+02:00: download "/mnt/pbs2/vm/107/2022-05-16T03:34:40Z/qemu-server.conf.blob"
2022-05-18T23:59:29+02:00: GET /download
2022-05-18T23:59:29+02:00: download "/mnt/pbs2/vm/107/2022-05-16T03:34:40Z/fw.conf.blob"
2022-05-18T23:59:29+02:00: GET /download
2022-05-18T23:59:29+02:00: download "/mnt/pbs2/vm/107/2022-05-16T03:34:40Z/drive-scsi1.img.fidx"
2022-05-18T23:59:29+02:00: register chunks in 'drive-scsi1.img.fidx' as downloadable.
2022-05-18T23:59:29+02:00: GET /download
2022-05-18T23:59:29+02:00: download "/mnt/pbs2/vm/107/2022-05-16T03:34:40Z/drive-scsi0.img.fidx"
2022-05-18T23:59:29+02:00: register chunks in 'drive-scsi0.img.fidx' as downloadable.
2022-05-18T23:59:29+02:00: GET /download
2022-05-18T23:59:29+02:00: download "/mnt/pbs2/vm/107/2022-05-16T03:34:40Z/client.log.blob"
2022-05-18T23:59:29+02:00: TASK ERROR: connection error: not connected
Code:
2022-05-18T23:33:54+02:00: starting new backup reader datastore 'PBS_DS1': "/mnt/pbs"
2022-05-18T23:33:54+02:00: protocol upgrade done
2022-05-18T23:33:54+02:00: GET /download
2022-05-18T23:33:54+02:00: download "/mnt/pbs/host/HypervisorBackup/2021-10-10T16:17:06Z/index.json.blob"
2022-05-18T23:33:54+02:00: GET /download
2022-05-18T23:33:54+02:00: download "/mnt/pbs/host/HypervisorBackup/2021-10-10T16:17:06Z/ata-INTEL_SSDSC2BA100G3_BTTV42250126100FGN.img.fidx"
2022-05-18T23:33:54+02:00: register chunks in 'ata-INTEL_SSDSC2BA100G3_BTTV42250126100FGN.img.fidx' as downloadable.
2022-05-18T23:34:14+02:00: GET /download
2022-05-18T23:34:14+02:00: download "/mnt/pbs/host/HypervisorBackup/2021-10-10T16:17:06Z/ata-INTEL_SSDSC2BA100G3_BTTV417303VG100FGN.img.fidx"
2022-05-18T23:34:14+02:00: register chunks in 'ata-INTEL_SSDSC2BA100G3_BTTV417303VG100FGN.img.fidx' as downloadable.
2022-05-18T23:34:14+02:00: GET /download
2022-05-18T23:34:14+02:00: download "/mnt/pbs/host/HypervisorBackup/2021-10-10T16:17:06Z/client.log.blob"
2022-05-18T23:34:14+02:00: GET /download: 404 Not Found: open file "/mnt/pbs/host/HypervisorBackup/2021-10-10T16:17:06Z/client.log.blob" failed - not found
2022-05-18T23:34:14+02:00: TASK ERROR: connection error: not connected
Looks like the sync can read all the chunks but fails at the client.log.blob for each snapshot.
I guess I've got some permission problem? I'm doing the sync using a token that has the permissions "Datastore.Backup" and "Datastore.Prune" for "/", "/access", "/datastore", "/remote", "/system". All groups are owned by that token, the "Local Owner" of the sync job is that token too, and that token was also used for the "Remote" credentials. All source snapshots were verified recently and are OK.
If there is no easy answer to fix this, I will open a new thread for it so this thread doesn't go too much off-topic.
 
Hello,

I moved my backups to a namespace (move the folders). But when I run a backup, we get the following error:
Code:
ERROR: VM 121 qmp command 'backup' failed - Parameter 'backup-ns' is unexpected

pveversion:
Code:
PVE-Manager-Version pve-manager/7.2-4/ca9d43cc

proxmox-ve: 7.2-1 (running kernel: 5.13.19-2-pve)
pve-manager: 7.2-4 (running version: 7.2-4/ca9d43cc)
pve-kernel-5.15: 7.2-3
pve-kernel-helper: 7.2-3
pve-kernel-5.13: 7.1-9
pve-kernel-5.15.35-1-pve: 5.15.35-3
pve-kernel-5.15.30-2-pve: 5.15.30-3
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-4.13.13-5-pve: 4.13.13-38
pve-kernel-4.13.13-4-pve: 4.13.13-35
pve-kernel-4.10.17-2-pve: 4.10.17-20
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: not correctly installed
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.1-8
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-1
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-2
libpve-storage-perl: 7.2-4
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.12-1
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.1-1
proxmox-backup-file-restore: 2.2.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-1
pve-container: 4.2-1
pve-docs: 7.2-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.4-2
pve-ha-manager: 3.3-4
pve-i18n: 2.7-2
pve-qemu-kvm: 6.2.0-7
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.4-pve1
 

t.lamprecht

Proxmox Staff Member
ERROR: VM 121 qmp command 'backup' failed - Parameter 'backup-ns' is unexpected
see:
Exactly, and for VMs you need to either do a shutdown/start cycle (or reboot via the PVE web interface) or migrate them to an updated host, so that they run with a QEMU and libproxmox-backup-qemu that support namespaces.
 

fabian

Proxmox Staff Member
But now I've got two questions:
1.) Now I've got every group/snapshot twice: once in the root namespaces and once in my custom namespaces. I guess I can just delete all groups in the root namespaces now without losing any chunks, so that my groups/snapshots are moved and not copied (except of course for the backups that I synced from "PBS_DS2" to "PBS_DS1", where I don't want any chunks in "PBS_DS2" anymore so I can delete that datastore)?
Yes. As always, deleting/pruning/forgetting a snapshot only removes the metadata/indices; the chunks will only be removed by GC if there are no more references (across all namespaces).
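In other words, after forgetting the duplicated groups in the root namespace, the space is only reclaimed once garbage collection runs, which can also be triggered manually on the CLI (the datastore name here is a placeholder):

```shell
# start GC on the datastore; unreferenced chunks are only deleted
# once they are older than the grace period
proxmox-backup-manager garbage-collection start PBS_DS1
```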
2.) I've also got a third namespace "MainCluster/Manual" where I want to store my manual backups. Right now they are mixed with my automatic weekly backups in the MainCluster/Weekly namespace, because previously I stored them together with the weekly backups since I didn't want to create a third datastore for them (so as not to waste even more space). The sync job has group filters, but filtering by groups won't help much, as the same groups contain "automatic weekly" and "manual" snapshots. Would it be possible to use the regex group filters to only sync specific snapshots like "ct/136/2022-02-23T19:59:59Z", or is it only possible to match groups (as the name "group filters" would imply)? All my manual snapshots have the protected flag (at least in the root namespace, where that flag wasn't removed), but I guess it's not possible to sync only (or exclude) snapshots that are protected.
I guess the easiest way would be to sync my (not-yet-deleted) snapshots again from datastore "PBS_DS1" namespace "Root" to "PBS_DS1" namespace "MainCluster/Manual", so that I get the same groups+snapshots in "MainCluster/Manual" and "MainCluster/Weekly", and then just manually delete all weekly backups from the "MainCluster/Manual" namespace and all manual backups from the "MainCluster/Weekly" namespace?
group filters only operate on the group level so far
Edit:
Both sync jobs finished with "ok" and it looks like all groups and snapshots got synced, but the task log is full of errors like these:
Code:
2022-05-18T23:59:29+02:00: starting new backup reader datastore 'PBS_DS2': "/mnt/pbs2"
2022-05-18T23:59:29+02:00: protocol upgrade done
2022-05-18T23:59:29+02:00: GET /download
2022-05-18T23:59:29+02:00: download "/mnt/pbs2/vm/107/2022-05-16T03:34:40Z/index.json.blob"
2022-05-18T23:59:29+02:00: GET /download
2022-05-18T23:59:29+02:00: download "/mnt/pbs2/vm/107/2022-05-16T03:34:40Z/qemu-server.conf.blob"
2022-05-18T23:59:29+02:00: GET /download
2022-05-18T23:59:29+02:00: download "/mnt/pbs2/vm/107/2022-05-16T03:34:40Z/fw.conf.blob"
2022-05-18T23:59:29+02:00: GET /download
2022-05-18T23:59:29+02:00: download "/mnt/pbs2/vm/107/2022-05-16T03:34:40Z/drive-scsi1.img.fidx"
2022-05-18T23:59:29+02:00: register chunks in 'drive-scsi1.img.fidx' as downloadable.
2022-05-18T23:59:29+02:00: GET /download
2022-05-18T23:59:29+02:00: download "/mnt/pbs2/vm/107/2022-05-16T03:34:40Z/drive-scsi0.img.fidx"
2022-05-18T23:59:29+02:00: register chunks in 'drive-scsi0.img.fidx' as downloadable.
2022-05-18T23:59:29+02:00: GET /download
2022-05-18T23:59:29+02:00: download "/mnt/pbs2/vm/107/2022-05-16T03:34:40Z/client.log.blob"
2022-05-18T23:59:29+02:00: TASK ERROR: connection error: not connected
Code:
2022-05-18T23:33:54+02:00: starting new backup reader datastore 'PBS_DS1': "/mnt/pbs"
2022-05-18T23:33:54+02:00: protocol upgrade done
2022-05-18T23:33:54+02:00: GET /download
2022-05-18T23:33:54+02:00: download "/mnt/pbs/host/HypervisorBackup/2021-10-10T16:17:06Z/index.json.blob"
2022-05-18T23:33:54+02:00: GET /download
2022-05-18T23:33:54+02:00: download "/mnt/pbs/host/HypervisorBackup/2021-10-10T16:17:06Z/ata-INTEL_SSDSC2BA100G3_BTTV42250126100FGN.img.fidx"
2022-05-18T23:33:54+02:00: register chunks in 'ata-INTEL_SSDSC2BA100G3_BTTV42250126100FGN.img.fidx' as downloadable.
2022-05-18T23:34:14+02:00: GET /download
2022-05-18T23:34:14+02:00: download "/mnt/pbs/host/HypervisorBackup/2021-10-10T16:17:06Z/ata-INTEL_SSDSC2BA100G3_BTTV417303VG100FGN.img.fidx"
2022-05-18T23:34:14+02:00: register chunks in 'ata-INTEL_SSDSC2BA100G3_BTTV417303VG100FGN.img.fidx' as downloadable.
2022-05-18T23:34:14+02:00: GET /download
2022-05-18T23:34:14+02:00: download "/mnt/pbs/host/HypervisorBackup/2021-10-10T16:17:06Z/client.log.blob"
2022-05-18T23:34:14+02:00: GET /download: 404 Not Found: open file "/mnt/pbs/host/HypervisorBackup/2021-10-10T16:17:06Z/client.log.blob" failed - not found
2022-05-18T23:34:14+02:00: TASK ERROR: connection error: not connected
Looks like the sync can read all the chunks but fails at the client.log.blob for each snapshot.
These are benign. The connection error occurs because the sync client drops the connection and the endpoint doesn't expect that yet (should be improved). Downloading the log is just best effort; it doesn't have to be there at all (or yet), and unless your host backup scripts upload a log (after the backup) there won't be one ;)
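For host backups made with proxmox-backup-client, such a log can be attached after the backup with the upload-log command; the snapshot path, log file name, and repository below are hypothetical:

```shell
# best effort: attach a client log to an already finished snapshot
proxmox-backup-client upload-log host/myhost/2022-05-18T21:00:00Z backup.log \
    --repository 'user@pbs@192.0.2.10:PBS_DS1'
```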
I guess I got some permission problem? I'm doing the sync using a token that got the permissions "Datastore.Backup" and "Datastore.Prune" for "/", "/access", "/datastore", "/remote", "/system". All groups are owned by that token. "Local Owner" at the sync job is that token too. Also used that token for the "Remote" credentials. All source snapshots are verified recently and ok.
If there is no easy answer to fix this I will open a new thread for it so this threads doesn't go too much offtopic.
Yeah, it might be a good idea to open a new thread if you have more in-depth questions about namespaces, syncing, etc. that are specific to your setup :)
 

Dunuin

Famous Member
Moving snapshots/backup groups between namespaces of the same datastore is a bit overcomplicated (setting up sync tasks and so on), and changing owners of backup groups is quite tedious because you can't do that in bulk and need to change the owner of each group individually.
It would be nice to see two new "action" buttons in the datastore's content tab at the namespace row in the future:
1.) change the owner of all groups of a namespace
2.) move/copy all groups of a namespace to another namespace of the same datastore
 

fabian

Proxmox Staff Member
Moving snapshots/backup groups between namespaces of the same datastore is a bit overcomplicated (setting up sync tasks and so on), and changing owners of backup groups is quite tedious because you can't do that in bulk and need to change the owner of each group individually.
It would be nice to see two new buttons in the datastore's content tab at the namespace row in the future:
1.) change the owner of all groups of a namespace
2.) move/copy all groups of a namespace to another namespace

You can set the 'new' owner as part of the sync job, that way you can do it all in one go. But yeah, moving a group from one namespace to another should be pretty straight-forward (like @t.lamprecht indicated earlier, this is basically a mv TYPE/ID ns/NAMESPACE/TYPE/ID plus locking to do it properly). Bulk editing the owner would be a nice feature; it's trivial to work around if you have shell access (find /path/to/ns -type f -name owner -exec ...), but it would be nice together with other "bulk actions" in the GUI: https://bugzilla.proxmox.com/show_bug.cgi?id=2863
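The find workaround hinted at above could be fleshed out like this. It is only a sketch that demonstrates the idea on a throwaway directory tree; on a real datastore you would point NS_DIR at something like /mnt/datastore/ns/foo, run it as root, and make sure no backup or sync tasks are touching those groups while you do it:

```shell
#!/bin/sh
# Demo: bulk-rewrite the "owner" marker file of every backup group below
# a namespace. NS_DIR and NEW_OWNER are placeholders; a fake tree is
# built here so the effect can be shown without a real datastore.
NS_DIR=$(mktemp -d)/ns/foo
NEW_OWNER='dunuin@pbs!mytoken'

# fake two backup groups, each with an "owner" file as PBS lays them out
mkdir -p "$NS_DIR/vm/100" "$NS_DIR/ct/136"
echo 'old@pbs' > "$NS_DIR/vm/100/owner"
echo 'old@pbs' > "$NS_DIR/ct/136/owner"

# the actual workaround: rewrite every "owner" file below the namespace
# ($0 inside the inner sh is the new owner, passed as the first argument)
find "$NS_DIR" -type f -name owner -exec sh -c '
    for f in "$@"; do printf "%s\n" "$0" > "$f"; done
' "$NEW_OWNER" {} +

# on a real datastore you would also restore file ownership afterwards, e.g.:
# chown backup:backup "$NS_DIR"/*/*/owner

cat "$NS_DIR/vm/100/owner"   # -> dunuin@pbs!mytoken
```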
 

Jesus Blanco

Member
Hello,

I have two PBS nodes in two different locations and every night I sync from one to the other.
I have updated both servers to 2.2.1 and tonight the sync jobs failed; this is the error:

2022-05-20T10:28:34+02:00: Starting datastore sync job 'PBS01:ds01:DS01::s-665551eb-83cd'
2022-05-20T10:28:34+02:00: sync datastore 'DS01' from 'PBS01/ds01'
2022-05-20T10:28:34+02:00: ----
2022-05-20T10:28:34+02:00: Syncing datastore ds01, root namespace into datastore DS01, root namespace
2022-05-20T10:28:34+02:00: Cannot sync datastore ds01, root namespace into datastore DS01, root namespace - sync namespace datastore DS01, root namespace failed - no permission to modify parent/datastore.
2022-05-20T10:28:34+02:00: TASK ERROR: sync failed with some errors.

Is it mandatory to create namespaces for sync to work? I can't update my PVE nodes to 7.2 right now; I have two clusters on versions 7.1-4 and 6.4-13. I read in this thread that on PVE versions older than 7.2 all backups will go to the root namespace, so without updating the PVE nodes, even if you create namespaces, they will stay empty and new backups will not be synced.

How can I repair my sync without using namespaces? Will a downgrade be necessary?

Thanks in advance!
 

t.lamprecht

Proxmox Staff Member
Is it mandatory to create namespaces for sync to work?
It's mandatory to create the top-level target namespace you want to sync to, as that one won't get auto-created. The namespaces below it will then get created automatically during the sync.
 
