Hi all,
I have a 3-node Ceph cluster consisting of HPE Gen9 servers. It's been running well since I set it up, and I really enjoy the "no single point of failure" aspect.
During the installation I used some S3700 100GB drives for the ZFS boot mirrors, but for one of the hosts I only had one drive at the time. Now I want to add a second drive and convert that host's ZFS pool to a mirror. I got it all working, but wanted to reboot the host to check that it can also boot from the second drive. The server boots fine, but after 2-3 minutes at the Proxmox login console, all my Ceph services go down.
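For context, the mirror conversion itself was straightforward, roughly along these lines (device names are placeholders, and the partition numbers assume the default Proxmox boot-disk layout):

    # copy the partition table from the existing boot drive to the new one,
    # then randomize the new drive's GUIDs
    sgdisk /dev/sdX -R /dev/sdY
    sgdisk -G /dev/sdY

    # attach the new drive's ZFS partition to turn the single-disk pool into a mirror
    zpool attach rpool /dev/sdX3 /dev/sdY3

    # set up the bootloader on the new drive so the host can boot from it
    proxmox-boot-tool format /dev/sdY2
    proxmox-boot-tool init /dev/sdY2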
The only way to get it back up is to either power off the host or reboot it without the newly added drive. The new drive goes into port 2, whereas the Ceph drives are plugged into ports 3-8. I suspect that the new drive shifts the drive ordering (sda, sdb, sdc, etc.) and that Ceph then falsely believes one of the shifted devices is a Ceph drive. Obviously it isn't, and Ceph grinds to a halt. The data is secure, I had no issues there, but it is a little worrying. I tried to google around but couldn't really find anything.
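One thing I figured I could check is whether the kernel really does renumber the drives, by comparing the stable identifiers against the sdX names with and without the new drive plugged in:

    # stable identifiers survive reordering; sdX names do not
    ls -l /dev/disk/by-id/
    lsblk -o NAME,SERIAL,WWN,MOUNTPOINT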
Has anyone run into this before? Where would I have to change the Ceph config? I imagine each OSD maps to a /dev/sdX somewhere, and I suppose I need to find that file and change it.
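If it helps, this is where I've been looking for that mapping so far (I'm not sure it's the right place; as far as I can tell, ceph-volume-created OSDs reference their LVs by UUID rather than /dev/sdX, so maybe the mapping isn't even the problem):

    # list OSDs and the devices/LVs they were created on
    ceph-volume lvm list

    # the block symlink under each OSD's data dir shows what it actually opens
    ls -l /var/lib/ceph/osd/ceph-*/block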