Since you have created a new cluster (id 9c9daac0-736e-4dc1-8380-e6a3fa7d2c23) and the OSDs on disk still belong to the old cluster (id c3c25528-cbda-4f9b-a805-583d16b93e8f), you cannot just add them.
Have you read through the procedure to recover the MON db from the copies stored on the OSDs...
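If it helps, here is a minimal Python sketch of how you can compare the cluster fsid stored on each OSD against the fsid of the currently running cluster. It assumes the OSD data directories are activated under /var/lib/ceph/osd/ceph-*/ (where the small ceph_fsid file lives) and that /etc/ceph/ceph.conf carries the new cluster's fsid; adjust paths to your setup.

```python
#!/usr/bin/env python3
"""Check whether the cluster fsid stored on each OSD matches the running cluster.

Minimal sketch, assuming activated OSDs under /var/lib/ceph/osd/ceph-*/
and the current cluster's fsid in /etc/ceph/ceph.conf.
"""
import configparser
import glob
import pathlib

# fsid of the cluster this node currently belongs to
conf = configparser.ConfigParser()
conf.read("/etc/ceph/ceph.conf")
cluster_fsid = conf["global"]["fsid"].strip()

for osd_dir in sorted(glob.glob("/var/lib/ceph/osd/ceph-*")):
    fsid_file = pathlib.Path(osd_dir) / "ceph_fsid"
    if not fsid_file.exists():
        continue
    osd_fsid = fsid_file.read_text().strip()
    state = "OK" if osd_fsid == cluster_fsid else "MISMATCH (old cluster?)"
    print(f"{osd_dir}: {osd_fsid} -> {state}")
```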
Either the VM uses a DHCP client, in which case the clone will get a different IP configuration from the DHCP server because it has a different MAC address.
Or you build a VM template that includes cloud-init and clone your VMs from that. Then you can set a static IP configuration from the Proxmox...
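For the cloud-init route, here is a rough sketch of the clone-and-configure steps driven from Python via the qm CLI. The template ID 9000, the new VMID 123 and the 192.0.2.x addresses are made-up example values; it assumes the template already has a cloud-init drive attached.

```python
#!/usr/bin/env python3
"""Clone a cloud-init enabled template and give the clone a static IP.

Sketch around the Proxmox 'qm' CLI; IDs and addresses are example values.
"""
import subprocess

TEMPLATE_ID = "9000"   # cloud-init enabled template (assumption)
NEW_VMID = "123"
NAME = "clone-with-static-ip"

# Full clone of the template
subprocess.run(["qm", "clone", TEMPLATE_ID, NEW_VMID,
                "--name", NAME, "--full", "1"],
               check=True)

# Static IP configuration handed to cloud-init on first boot
subprocess.run(["qm", "set", NEW_VMID,
                "--ipconfig0", "ip=192.0.2.50/24,gw=192.0.2.1",
                "--nameserver", "192.0.2.1"],
               check=True)

subprocess.run(["qm", "start", NEW_VMID], check=True)
```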
The root password field in /etc/shadow should never be empty.
Put an x or an exclamation mark (just not a valid hash value) in it to disable password-based logins for the account.
This is nothing new; a blank password field has allowed login with any password since the dawn of Unix.
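A quick read-only sketch that flags such accounts (run it as root, since /etc/shadow is not world-readable):

```python
#!/usr/bin/env python3
"""Flag /etc/shadow accounts that would accept any password.

Minimal read-only sketch; needs root to read /etc/shadow.
"""

with open("/etc/shadow") as shadow:
    for line in shadow:
        if not line.strip():
            continue
        fields = line.rstrip("\n").split(":")
        user, pw_field = fields[0], fields[1]
        if pw_field == "":
            print(f"WARNING: {user} has an EMPTY password field (any password works)")
        elif pw_field[0] in ("!", "*", "x"):
            print(f"{user}: password login disabled")
```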
My 2c:
Do not use the 100G network for the Ceph cluster network but instead for the public network. No need for a separate cluster network here (see the ceph.conf sketch after this list).
Use 2x 10G for Proxmox management and VM migration.
Use the other 2x 10G for VM guest traffic.
Use the remaining 1G ports for additional corosync...
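To illustrate the Ceph part of that layout, a small Python sketch of the network section of ceph.conf; the 10.10.10.0/24 subnet for the 100G link is a made-up example. The point is that only public_network is set: without a cluster_network entry, Ceph carries OSD replication traffic over the public network as well.

```python
#!/usr/bin/env python3
"""Sketch of the network-related ceph.conf fragment for the layout above.

Assumption: the 100G link carries 10.10.10.0/24 (example subnet).
"""
import configparser
import sys

ceph_conf = configparser.ConfigParser()
ceph_conf["global"] = {
    # all Ceph traffic (clients + OSD replication) over the 100G network
    "public_network": "10.10.10.0/24",
    # intentionally no "cluster_network" entry
}
ceph_conf.write(sys.stdout)
```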
It really depends. If the cluster is large enough to spread over several racks, with the rack being the failure zone, you could argue that you do not need redundant top-of-rack switches; the loss of a whole rack can then be easily mitigated.
But yes, in small clusters network redundancy is a must.
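Making the rack the failure zone means using a CRUSH rule with rack as the failure domain, so each replica lands in a different rack. A sketch via the stock Ceph CLI; the rule and pool names are made-up examples, and the hosts must of course already sit under rack buckets in the CRUSH map.

```python
#!/usr/bin/env python3
"""Use 'rack' as the CRUSH failure domain so replicas spread across racks.

Sketch using the Ceph CLI via subprocess; rule and pool names are examples.
"""
import subprocess

def ceph(*args):
    subprocess.run(["ceph", *args], check=True)

# Replicated CRUSH rule: root 'default', one replica per rack
ceph("osd", "crush", "rule", "create-replicated", "replicated-rack", "default", "rack")

# Point an existing pool at the new rule
ceph("osd", "pool", "set", "vm-pool", "crush_rule", "replicated-rack")
```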
No, the three-fold replication refers to the pool and the objects in it.
A Ceph cluster can have thousands of OSDs; the object copies are distributed evenly across them.
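For illustration, a small sketch showing that the replica count is a property of the pool, driven through the Ceph CLI ("vm-pool" is a made-up pool name):

```python
#!/usr/bin/env python3
"""The replica count is a per-pool setting, not a per-OSD one.

Sketch using the Ceph CLI; 'vm-pool' is an example pool name.
"""
import subprocess

POOL = "vm-pool"

# Show the current replica count of the pool
subprocess.run(["ceph", "osd", "pool", "get", POOL, "size"], check=True)

# 3 copies of every object; at least 2 must be available for I/O to proceed
subprocess.run(["ceph", "osd", "pool", "set", POOL, "size", "3"], check=True)
subprocess.run(["ceph", "osd", "pool", "set", POOL, "min_size", "2"], check=True)
```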
Very low latency is hard to achieve with Ceph, especially with only 3 nodes.
Why not roll out OSDs on all 9 nodes?
Snapshots are no problem at all.