While testing my HA-Ceph installation, I received this error when trying to migrate to my 3rd node. It migrated but then stopped. I was then able to migrate back to one of the other two nodes and restart it. Be gentle, I am very new to HA-Ceph. lol
Any insight you can provide would be great.
Also, there are no problems migrating from the 1st node to the 2nd and back... just anything going to the 3rd. No error appeared until I tried migrating from node 1 to 3 or from node 2 to 3.
Thanks in advance to anyone who gives their time to respond.
I do understand that for the time being I could create an HA group and restrict migration to nodes 1 and 2, but that defeats the purpose of a fully HA cluster.
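For reference, if I did go that route, I assume it would look something like the following (the group name "limited" and VMID 100 are just placeholders for my setup):

# hypothetical sketch: create an HA group restricted to bass and daygo,
# then pin a VM's HA resource to that group
ha-manager groupadd limited --nodes bass,daygo --restricted 1
ha-manager add vm:100 --group limited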
root@bass:~# ceph osd tree
ID  CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF
-1        4.06088 root default
-3        2.00130     host bass
 0  nvme  2.00130         osd.0      up  1.00000 1.00000
-7        1.81940     host daygo
 1  ssd   1.81940         osd.1      up  1.00000 1.00000
-10       0.24019     host york
 2  ssd   0.24019         osd.2      up  1.00000 1.00000
root@bass:~# ceph -s
  cluster:
    id:     241c8887-31d8-44f4-b252-c6e4eb5a14ed
    health: HEALTH_WARN
            Degraded data redundancy: 39/3354 objects degraded (1.163%), 4 pgs degraded, 4 pgs undersized

  services:
    mon: 3 daemons, quorum bass,daygo,york (age 19m)
    mgr: bass(active, since 3h), standbys: daygo, york
    osd: 3 osds: 3 up (since 2h), 3 in (since 2h); 1 remapped pgs

  data:
    pools:   2 pools, 129 pgs
    objects: 1.12k objects, 4.2 GiB
    usage:   220 GiB used, 3.8 TiB / 4.1 TiB avail
    pgs:     39/3354 objects degraded (1.163%)
             7/3354 objects misplaced (0.209%)
             124 active+clean
             4   active+undersized+degraded
             1   active+clean+remapped

  io:
    client: 0 B/s rd, 85 B/s wr, 0 op/s rd, 0 op/s wr