But as VM/CT disk storage? Or do you have the installation ISOs there? If they are still mounted in the VMs, then the target side needs that storage as well. Migration should still work for those VMs/CTs that do not use the cephfs storage.
Can you post the storage.cfg?
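For reference, a cephfs entry in /etc/pve/storage.cfg usually looks roughly like this (storage ID and content types here are only an example):
```
cephfs: cephfs
        path /mnt/pve/cephfs
        content iso,vztmpl,backup
```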
The point is not HA or shared storage as such, it is combining them on the same network interface. Whatever you do, especially for an HA cluster, you will need to guarantee low and stable latency on the Corosync links. You will not (or only with great difficulty) achieve this with shared NIC ports.
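As a rough sketch (addresses and node name are made up), a dedicated second Corosync link shows up per node in /etc/pve/corosync.conf like this, so the cluster traffic stays off the storage/VM network:
```
nodelist {
  node {
    name: nodeA
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
    ring1_addr: 10.10.20.1
  }
}
```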
If that is the case, then running a kernel prior to 5.3 would fix the issue. But I don't believe that's it ATM.
I don't think it is that, since I tried with older server versions (jessie, stretch, DMS6.2) and it still worked.
But I found a commit that could give a hint. There is an initial UDP packet for...
That message is from a running MON and doesn't prevent it from joining the other MONs.
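To verify whether the MON actually joined quorum, the standard Ceph status commands should tell you:
```
ceph mon stat
ceph quorum_status --format json-pretty
```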
Yes, even though they are the same for the MONs, they are different for the other services like MGR, OSD, MDS or clients. But anyway, you will not need to copy those, since they are created by the MON on bootstrapping.
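If you want to compare them, the keys can be listed on any node with a working admin keyring (assuming a default setup):
```
ceph auth ls          # all keys and caps known to the cluster
ceph auth get mon.    # the key shared by the MONs
```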
Well, it's simply the wrong hardware for a hyper-converged setup. And I don't recommend blades for this kind of setup.
But with that said, run VLANs on the single interfaces and bond the VLANs together. This way you can use active-backup with a bond-primary to separate the traffic onto the...
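A rough /etc/network/interfaces sketch of that idea (interface names, VLAN ID and address are made up, not a tested config):
```
auto eno1.50
iface eno1.50 inet manual

auto eno2.50
iface eno2.50 inet manual

auto bond50
iface bond50 inet static
        address 10.10.50.1/24
        bond-slaves eno1.50 eno2.50
        bond-mode active-backup
        bond-primary eno1.50
```
With a different bond-primary per VLAN bond, each traffic type prefers a different physical port and only shares it on failover.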
A mount on its own is not the issue reported. The showmount doesn't seem to work properly, and with that, usually rpcinfo isn't able to connect either. The mount will try to negotiate the NFS version from 4 -> 3 -> 2 until it finds one that works or gives up. Once it finds one, the mount proceeds. Just check with mount...
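To narrow it down, these standard tools can be run from the PVE node (replace <server> and the export path with yours):
```
rpcinfo -p <server>        # should list the registered portmapper/mountd/nfs services
showmount -e <server>      # should list the exports
mount -t nfs -o vers=3 <server>:/export /mnt/test   # force a specific version for testing
```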
Is the container storage a different pool?
No, it's a one-liner. Since rbd ls works, you get its output. Sort it and run rbd info <image> in a loop over the sorted list. The command will hang when it encounters the faulty image.
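Something along these lines (pool name is just a placeholder):
```
POOL=rbd   # placeholder, use your actual pool name
for img in $(rbd ls "$POOL" | sort); do
    echo "checking ${img}"
    rbd info "${POOL}/${img}"
done
# the image printed last before the loop hangs is the faulty one
```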
node005 exists in the ceph.conf, but wasn't registered by the other MONs. The easiest is to try a pveceph destroy of the MON on node005 and afterwards a pveceph create. Then hopefully the new MON starts working. If not, the log file /var/log/ceph/ceph-mon.node005.log should give some clues.
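Roughly like this (the exact subcommand naming depends on the PVE version, shown here for current releases; the create has to run on node005 itself):
```
pveceph mon destroy node005
pveceph mon create
tail -f /var/log/ceph/ceph-mon.node005.log   # watch the new MON come up
```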