Do I understand you correctly that two-way mirroring requires installing the rbd-mirror daemon on both sides (master and backup cluster)?
However, the PVE wiki clearly states:
rbd-mirror installed on the backup cluster ONLY (apt install rbd-mirror).
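For what it's worth, my understanding of the one-way case from the wiki, in command form (the pool and peer names here are placeholders, not the wiki's exact ones):

# backup cluster only - the daemon that pulls images from the master
apt install rbd-mirror
# mirroring itself is enabled on the pool on both clusters
rbd mirror pool enable <pool> pool
# and the backup cluster gets the master registered as a peer
rbd mirror pool peer add <pool> client.rbd-mirror-peer@master
# for two-way mirroring I would expect the same daemon + peer setup repeated on the master side as well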
With PVE 6.4 I still get
health: WARNING...
Yep, I did make some progress indeed
Unfortunately, I didn't manage to find out what caused the "Device busy" - my assumption is that it is somehow related to the ZFS import (scan?) procedure that occurs on PVE (OS) startup (all the disks were a part of another ZFS pool from different storage without...
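In case it helps anyone hitting the same thing, what I would try is checking for and clearing leftover ZFS labels on the disks before creating the new pool (device names below are examples only, and the labelclear/wipefs steps are destructive):

# check whether the disks still carry labels from the old pool
zpool import                # lists importable (foreign) pools found on attached disks
zdb -l /dev/sdX             # dump any ZFS label still present on the device
# clear the stale labels/signatures on disks you intend to reuse
zpool labelclear -f /dev/sdX
wipefs -a /dev/sdX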
I'm facing an issue with creating a ZFS pool on dm-mapper devices (clean PVE 6.3)
I have an HP Gen8 server with a dual-port HBA connected with two SAS cables to an HP D3700, and dual-port SAS SSD disks (SAMSUNG 1649a)
I've installed multipath-tools and changed multipath.conf accordingly ...
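In case it matters, the relevant bits of my setup look roughly like this (the WWID and the aliases are placeholders, not the real ones):

# /etc/multipath.conf (simplified)
defaults {
    user_friendly_names yes
    find_multipaths yes
}
multipaths {
    multipath {
        wwid  3600508b400105e210000900000490000   # placeholder WWID
        alias ssd01
    }
}

# reload multipath and create the pool on the mapper devices
systemctl restart multipathd
multipath -ll
zpool create -o ashift=12 tank mirror /dev/mapper/ssd01 /dev/mapper/ssd02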
It's not actually correct!
If you set zfs_arc_min equal to zfs_arc_max, it does not use zfs_arc_min as zfs_arc_max!
It sets zfs_arc_min to the desired value and ignores the value for zfs_arc_max (so it's kept at the default - half of RAM)
I can confirm that setting zfs_arc_min equal to zfs_arc_max breaks the old behavior and resets the upper limit to the default of half of RAM
Very painful - this was my default setup for years(
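For anyone who wants to check this on their own host, the setup in question is the usual modprobe options (the 8 GiB value is just an example):

# /etc/modprobe.d/zfs.conf
options zfs zfs_arc_min=8589934592
options zfs zfs_arc_max=8589934592

# after update-initramfs -u and a reboot, see what the module actually applied
cat /sys/module/zfs/parameters/zfs_arc_max
grep -E 'c_min|c_max' /proc/spl/kstat/zfs/arcstats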
It seems you ignored the quoted text and cut my response to your colleague from the context. I was replying to him, saying that I was not able to check his suggestion and workaround at that moment
Unfortunately, all the affected VMs were from production environments and had to be fixed ASAP.
Fortunately, I've managed to reproduce this issue on one of our client's test systems, and here is the information I've collected so far:
PVE host:
root@pve:~# dumpe2fs $(mount | grep 'on \/ ' | awk...
Same story on 2 different PVE hosts with Windows Server 2016 Standard and Windows Server 2019 Standard
Windows Server 2016 Standard
root@pve-node2:~# cat /etc/pve/qemu-server/204.conf
agent: 1
boot: c
bootdisk: scsi0
cores: 12
cpu: host,flags=+pcid;+spec-ctrl;+pdpe1gb;+hv-tlbflush
ide2...
We have been facing strange behavior with VLAN-tagged connections inside VMs with the ifupdown2 package installed on the host. After removing this package from PVE and rebooting the host, everything started working as before (and as expected)
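For reference, a minimal sketch of the kind of host config involved, assuming a VLAN-aware bridge named vmbr0 on eno1 (the names and addresses are placeholders):

# /etc/network/interfaces (excerpt)
auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

The VMs simply carry a "tag=<vlan>" on their netX lines; with ifupdown2 installed the tagged traffic inside the guests behaved oddly, and it went back to normal once the package was removed.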
Shouldn't the following wiki tutorial be updated with respect to Ceph 15.x? After upgrading the main and backup PVE and Ceph clusters to 6.3/15.2.6, mirroring stopped working(
https://pve.proxmox.com/wiki/Ceph_RBD_Mirroring
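For what it's worth, these are the checks I've been running on the backup cluster to see where mirroring stands after the upgrade (the pool name is a placeholder):

rbd mirror pool info <pool>
rbd mirror pool status <pool> --verbose
systemctl status 'ceph-rbd-mirror@*'
journalctl -u 'ceph-rbd-mirror@*' --since today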