I would like to enable replication on a second node (repurposed thin client) I've added to my cluster.
On PVE1, my VMs are on the storage "primary-zfs", which sits on a pool called "prime-pool" on a separate disk from the boot disk. The default "local-zfs" storage on "rpool" is available but unused at the moment.
On PVE2, I have the default setup on a single disk: "local-zfs" on "rpool". This is the repurposed thin client, which has only one SSD.
I know that replication requires the same storage and underlying pool names on both nodes.
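For reference, I assume the relevant entry in /etc/pve/storage.cfg on PVE1 looks roughly like the sketch below (the content types are a guess; the part that matters is the storage-ID-to-pool mapping and the nodes restriction):

```
zfspool: primary-zfs
        pool prime-pool
        content images,rootdir
        nodes pve1
```

My understanding is that once PVE2 has a pool named "prime-pool", extending the storage should just be a matter of adding pve2 to that nodes line (or ticking PVE2 in the GUI).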
I've been able to remove "local-zfs" from PVE2 in the GUI by editing the "local-zfs" storage under Datacenter and excluding PVE2 from its target nodes. But how do I rename "rpool", or delete and recreate the pool as "prime-pool", so that I can extend "primary-zfs" to the PVE2 node?
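For completeness, here is the CLI equivalent of what I've done so far and what I expect the final step to look like (the node names pve1/pve2 stand in for my actual node names):

```
# What I already did via the GUI: restrict the default storage to PVE1 only
pvesm set local-zfs --nodes pve1

# Verify the pool names on each node (PVE2 currently only has "rpool")
zpool list

# What I expect the last step to be, once PVE2 has a pool named "prime-pool"
pvesm set primary-zfs --nodes pve1,pve2
```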