Storage Replication with two ZFS Datasets

Hi,

I just created a cluster and I want to replicate the storage for basic HA functionality.
How can I set the destination storage? I have a setup with 2 SSDs (ZFS mirror) as system disks and 2 more SSDs which I also set up as a ZFS mirror.

The problem is that the replication ends up on the small system disks. How can I change that?

Thank you!
 
For replication the zpool has to have the same name on both nodes. Is this the case for your setup?
Please provide the output of `pveversion -v` and `cat /etc/pve/storage.cfg`.
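You can compare the pool names with `zpool list` on each node:

Code:
# run on both nodes; the pool name must be identical
zpool list

A replication-capable setup would then have a single zfspool storage definition in /etc/pve/storage.cfg that is valid on both nodes (no `nodes` restriction) and points at that shared pool name. Rough sketch only, the names `tank` and `tank-vmdata` are placeholders, not taken from your setup:

Code:
zfspool: tank-vmdata
        pool tank
        content rootdir,images
        sparse 1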
 
No, that is exactly the problem.
Node 1 has just 2 disks and therefore only the local-zfs storage.

Is there a way to change the name of the ZFS pool on node B without reinstalling?

Thank you!

Code:
root@server40:/home/max# pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.124-1-pve)
pve-manager: 6.4-11 (running version: 6.4-11/28d576c2)
pve-kernel-5.4: 6.4-4
pve-kernel-helper: 6.4-4
pve-kernel-5.4.124-1-pve: 5.4.124-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-3
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.10-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-1
pve-cluster: 6.4-1
pve-container: 3.3-5
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.2-4
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1

storage.cfg
Code:
cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup,vztmpl,iso

zfspool: local-zfs
        pool rpool/data
        content rootdir,images
        sparse 1

pbs: ded-pbs
        datastore ded-backup
        server 192.168.22.1
        content backup
        encryption-key ec:d8:3f:18:66:ef:de:40:2b:fe:78:e0:09:79:13:6c:b2:3b:c6:44:8f:a5:58:99:d2:a6:5b:cc:e3:fc:bc:46
        fingerprint d4:be:82:50:21:de:68:eb:81:43:6b:fa:49:1a:d2:63:52:74:a3:65:cb:f8:10:e1:fd:c5:e1:b3:99:22:f0:02
        nodes server20
        prune-backups keep-all=1
        username dedbackup@pbs

zfspool: server40-storage-zfs
        pool server40-storage-zfs
        content rootdir,images
        mountpoint /server40-storage-zfs
        nodes server40
 
You shouldn't call this HA, since ZFS replication doesn't provide that. It is VM/CT replication and nothing more.
 
As it is the root pool, it's rather difficult to rename it.
Typically you rename a pool by exporting it with `zpool export <pool>` and then importing it under a different name with `zpool import <pool> <new-name>`. For this to work, the pool can't be in use.
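A rough sketch of the procedure for a non-root pool (the pool names are placeholders; this will not work on `rpool` while the node is booted from it):

Code:
# make sure nothing is using the pool (stop/migrate guests, disable the storage), then:
zpool export <pool>
# import it again under the new name
zpool import <pool> <new-name>
# afterwards adjust the "pool" (and, if set, "mountpoint") line of the
# matching zfspool entry in /etc/pve/storage.cfg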
 
Thank you! So I guess the easiest way to handle this is to reinstall the node with the 2 large disks as the main pool.
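Once both nodes provide a zfspool storage backed by an identically named pool, the replication job itself can be created in the GUI or with `pvesr`; a rough example, where VM 100, the job ID and the target node are placeholders:

Code:
# replicate VM 100 to the target node every 15 minutes
pvesr create-local-job 100-0 <target-node> --schedule "*/15"
# list the configured jobs and check their status
pvesr list
pvesr status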
 
