Hi everybody,
I am not sure where to report this problem, but I thought it might be an interesting issue/bug.
I installed 3 PVE nodes as a test cluster, but due to hardware constraints (a missing SATA cable) I used a RAID1 software RAID on node 1 for the OS partition, and single ext4 disks for the OS partitions of the other two cluster nodes. (The Ceph storage is on different HDDs.)
After setting up the corosync cluster (initiated on node 1, the one with the software RAID), the web console showed the following.
The cluster seems to assume that every host has a btrfs RAID as OS storage, independent of which host in the cluster is used to display the web console.
As a result, the storages of the other two nodes could not be used via the web UI.
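In case it is relevant: as far as I understand, each storage definition in /etc/pve/storage.cfg can be restricted to the nodes where it actually exists, which might be a workaround until this is clarified. A rough sketch (storage and node names are placeholders from my setup, not verified syntax for every option):

```
# /etc/pve/storage.cfg -- sketch only, names are placeholders
# Restrict the btrfs storage to the one node that really has it:
btrfs: local-btrfs
        path /var/lib/pve/local-btrfs
        content images,rootdir
        nodes node1

# The ext4-backed directory storage only on the other nodes:
dir: local-dir
        path /var/lib/vz
        content images,rootdir
        nodes node2,node3
```

I believe the same restriction can also be set from the CLI with something like `pvesm set local-btrfs --nodes node1`, but I have not tested whether that fixes the web UI display here.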
This could be somewhat limiting for clusters with different hardware setups for the OS partition,
e.g. when a cluster is supposed to benefit from new btrfs features along a hardware upgrade path, as new hosts are added to an old cluster.
But I am not sure if this is a bug, or just not supported.
BR, Lucas