Dominik, thanks for the hint
I've already tried to do so, but ran into errors and warnings in syslog related to rbd-mirror.
From my perspective, the RBD mirroring solution from the PVE Wiki is only suitable for journal-based mirroring and not for the image (snapshot) mode.
It would be extremely useful if the Proxmox team would extend...
Are there any plans to integrate Ceph replication (RBD mirroring) functionality into the GUI, with both snapshot and journaling modes?
The current wiki tutorial (https://pve.proxmox.com/wiki/Ceph_RBD_Mirroring) covers only the journaling mode and is not fully suitable for the recent Pacific Ceph release.
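For reference, this is roughly the snapshot-based setup I'd expect to need on Pacific (a sketch only, assuming a pool named rbd and an image named vm-100-disk-0; adjust to your own naming):

# enable per-image mirroring on the pool (run on both clusters)
rbd mirror pool enable rbd image
# switch a specific image to snapshot-based mirroring
rbd mirror image enable rbd/vm-100-disk-0 snapshot
# create a mirror snapshot manually (or rely on a schedule instead)
rbd mirror image snapshot rbd/vm-100-disk-0
# optional: schedule automatic mirror snapshots every 15 minutes
rbd mirror snapshot schedule add --pool rbd 15m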
In the PVE Wiki (https://pve.proxmox.com/wiki/Ceph_RBD_Mirroring) it is written:
Could anyone advise how to extend one-way mirroring to two-way with respect to the original PVE Wiki howto?
Is it enough to install rbd-mirror on the master (source)? If so, is it enough to install it on one node in the source Ceph...
Do I understand you correctly that two-way mirroring requires installing the rbd-mirror daemon on both sides (master and backup cluster)?
However, in the PVE Wiki it is clearly written:
rbd-mirror installed on the backup cluster ONLY (apt install rbd-mirror).
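For what it's worth, my understanding is that two-way mirroring does need an rbd-mirror daemon on each cluster, and the peers can then be bootstrapped in both directions. A rough sketch, assuming site names pve-main / pve-backup and a pool named rbd (this follows the Ceph Octopus/Pacific docs, not the wiki's one-way procedure):

# on both clusters
apt install rbd-mirror

# on the main (source) cluster: create a bootstrap token and copy the file to the backup cluster
rbd mirror pool peer bootstrap create --site-name pve-main rbd > /root/bootstrap_token

# on the backup cluster: import the token with rx-tx to get two-way replication
rbd mirror pool peer bootstrap import --site-name pve-backup --direction rx-tx rbd /root/bootstrap_token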
With PVE 6.4 I still get
Yep, I made some progress indeed.
Unfortunately, I didn't manage to find out what caused the "Device busy" error - my assumption is that it is somehow related to the ZFS import (scan?) procedure that occurs on PVE (OS) startup (all the disks were part of another ZFS pool from a different storage without...
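In case it helps anyone hitting the same thing: since the disks had previously belonged to another pool, my working theory was stale ZFS labels being picked up by the import scan at boot. A rough way to check and clear them (destructive - double-check the device names first; /dev/sdX is just a placeholder):

# show any leftover filesystem/ZFS signatures on the disk
wipefs /dev/sdX
# inspect old ZFS labels on the partition that held the vdev
zdb -l /dev/sdX1
# clear the old ZFS label (destroys the old pool metadata on that device!)
zpool labelclear -f /dev/sdX1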
I'm facing an issue with creating a ZFS pool on dm-mapper devices (clean PVE 6.3).
I have an HP Gen8 server with a dual-port HBA connected with two SAS cables to an HP D3700 enclosure and dual-port SAS SSD disks (SAMSUNG 1649a).
I've installed multipath-tools and changed multipath.conf accordingly ...
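For completeness, this is roughly the kind of multipath.conf and pool-creation step I mean (a sketch with example mapper names, not my exact config):

# /etc/multipath.conf (excerpt)
defaults {
    user_friendly_names yes
    find_multipaths     yes
}

# after 'systemctl restart multipathd', the aggregated paths show up under /dev/mapper
multipath -ll

# create the pool on the multipath devices instead of the raw /dev/sdX paths
zpool create -o ashift=12 tank mirror /dev/mapper/mpatha /dev/mapper/mpathb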
That's actually not correct!
If you set zfs_arc_min to the value you intended for zfs_arc_max, it does not use zfs_arc_min as zfs_arc_max!
It only sets zfs_arc_min to the desired value and ignores zfs_arc_max (so zfs_arc_max is kept at its default - half of RAM).
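In other words, if you want to cap the ARC you still have to set zfs_arc_max itself. A sketch for an 8 GiB cap (values in bytes; the exact sizes are just an example):

# /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=8589934592
options zfs zfs_arc_min=2147483648

# apply to the running system (optional) and rebuild the initramfs so it also applies at early boot
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
update-initramfs -u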
Unfortunately, all the affected VMs were from production environments and had to be fixed ASAP.
Fortunately, I've managed to reproduce this issue on one of our clients' test systems, and here is the information I've collected so far:
root@pve:~# dumpe2fs $(mount | grep 'on \/ ' | awk...
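(For context, the intent of that truncated line was roughly the following - a guess only, assuming an ext4 root filesystem; the exact awk field may differ on your system:)

# dump superblock/feature info of the block device mounted at /
dumpe2fs -h $(mount | grep 'on / ' | awk '{print $1}')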