I have been working on a few STH articles on Proxmox VE 4.0 (e.g. http://www.servethehome.com/add-raid-1-mirrored-zpool-to-proxmox-ve/ and http://www.servethehome.com/proxmox...ceph-osd-option-being-unavailable-grayed-out/). Great job on 4.0. It is awesome how well the cluster is performing.
I did run into a minor issue with the test cluster. The 4-node cluster has 3x Intel Xeon D-1540 nodes and 1x Intel Xeon E5 V3 node (fmt-pve-01), all four running Ceph. The "big" fmt-pve-01 node had both of its 240GB Kingston V200 SSDs fail within 72 hours, which took out the ZFS mirrored boot volume.
That leaves the other three nodes, which can still maintain quorum. I do have two more nodes ready to join, but I do not want to proceed and mess up the cluster further. With a non-Ceph cluster I would normally just remove the PVE node from the cluster, install new boot drives, and re-join the node; that is not too hard. What I am wondering/worried about is how Ceph changes the picture.
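For reference, the non-Ceph recovery workflow I have in mind is roughly the following (just a sketch; the IP is a placeholder for any surviving cluster member):

    # On a surviving node that still has quorum, drop the dead node:
    pvecm delnode fmt-pve-01

    # After reinstalling Proxmox VE on fresh boot drives, run this on the
    # rebuilt node to re-join it (192.168.1.10 stands in for an existing
    # cluster member's IP):
    pvecm add 192.168.1.10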
My questions are:
1. Do I need to do something to remove the node/ OSDs from the Ceph config before removing the node from the cluster, or does Proxmox take care of the Ceph config when I run pvecm delnode fmt-pve-01? (My guess at the manual Ceph cleanup is sketched after this list.)
2. I do have two more nodes ready to join, with additional disks. Would it be best to add these nodes to the Proxmox/ Ceph cluster before removing the failed node?
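If manual cleanup is needed for question 1, I assume it would follow the standard Ceph node-removal steps, something like the commands below (osd.3 is a hypothetical OSD ID; I have not run any of this yet, and since the node is dead there are no daemons left to stop first):

    # Repeat for each OSD that lived on fmt-pve-01 (osd.3 is a placeholder):
    ceph osd out osd.3            # mark the OSD out so Ceph rebalances away from it
    ceph osd crush remove osd.3   # drop it from the CRUSH map
    ceph auth del osd.3           # delete its cephx key
    ceph osd rm osd.3             # remove the OSD entry from the cluster

    # If fmt-pve-01 was also running a monitor:
    ceph mon remove fmt-pve-01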
Any tips would be appreciated! Thank you again.
Patrick