Hi there
I have three identical nodes running a Proxmox cluster; each has 4 OSDs used for Ceph storage.
PRX01, PRX02 and PRX03
When an update comes in, I sometimes have to reboot a node, especially when a kernel update is involved.
So I set the node into maintenance mode to let the VMs migrate to another node first.
After all VMs have been migrated, I reboot that node.
Once the node is up again, I disable maintenance mode for it and wait until the VMs have migrated back before proceeding with the other nodes one by one.
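For reference, this is roughly the workflow I follow on the CLI (I believe the node-maintenance command is available since Proxmox VE 7.3; node name is just an example):

```shell
# Put the node into maintenance mode so the HA manager migrates its VMs away
ha-manager crm-command node-maintenance enable PRX01

# ... wait until all VMs have been migrated off, then reboot the node ...

# After the node is back up, take it out of maintenance mode again
ha-manager crm-command node-maintenance disable PRX01
```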
So far everything went fine.
Only when I need to reboot the node PRX01 does something go terribly wrong:
The whole Ceph cluster becomes unavailable until the reboot has finished.
Does anyone have an idea why?
What info from my configuration do you need to help me?
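If it helps, I can post the output of these commands (run on one of the nodes) — I'm assuming this is the kind of configuration detail you'd want to see:

```shell
# Overall Ceph health and which daemons run where
ceph status

# Monitor quorum membership
ceph mon stat

# OSD distribution across the three nodes
ceph osd tree

# Pool replication settings (size / min_size)
ceph osd pool ls detail
```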