Hello!
I have an 8-node cluster running PVE 6.0.4 with Ceph Nautilus.
There is 1 OSD on each node.
Ceph cluster usage is about 48%.
The Ceph config is:
Code:
[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
cluster network = 10.200.201.0/22
fsid = ede0d6ae-81ec-4137-a918-5daf79ae0ff2
mon allow pool delete = true
osd journal size = 5120
osd pool default min size = 2
osd pool default size = 3
public network = 10.200.201.0/22
mon_host = 10.200.201.73 10.200.201.74 10.200.201.76
[client]
keyring = /etc/pve/priv/$cluster.$name.keyring
I added 2 OSDs and removed 1 at the same time on one node.
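Roughly the commands I used on that node (from memory; the device names and OSD ID below are just placeholders):
Code:
# add the two new OSDs (device names are examples)
pveceph osd create /dev/sdb
pveceph osd create /dev/sdc

# take the old OSD out and destroy it right away (OSD ID 7 is a placeholder)
ceph osd out 7
systemctl stop ceph-osd@7
pveceph osd destroy 7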
67 PGs became inactive, all VMs are now unreachable from the outside, the console does not work for some of them, and no migration is possible.
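For reference, these are the kind of commands I am using to check the PG state (nothing special, just the stock Ceph CLI):
Code:
ceph -s                      # overall health, shows the inactive PG count
ceph health detail           # lists the stuck/inactive PGs
ceph pg dump_stuck inactive  # which PGs are inactive and on which OSDs
ceph osd df tree             # per-OSD usage and weights per node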
What went wrong, and how can I avoid this kind of issue in the future?