Hi,
I've done some major upgrades on a three node proxmox hyperconverged setup
- VM backups were made beforehand.
- Proxmox 5.x to 6.x (following the step-by-step guide from the wiki). The upgrade was successful, all nodes were rebooted afterwards, and the status was perfect.
- Upgrade from Ceph Luminous to Ceph Nautilus (following the step-by-step guide from the wiki). All nodes were rebooted afterwards and the status was perfect.
- Upgrade from Ceph Nautilus to Ceph Octopus (following the step-by-step guide from the wiki; roughly the steps sketched after this list). Since then Ceph has no longer been functioning.
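For reference, the Nautilus to Octopus procedure from the wiki guide is roughly the following. This is only a sketch from memory, so the exact repository line and the order of the steps may differ a bit from what I actually ran; as the status output below shows, noout is still set on my cluster.

Code:
# protect against rebalancing while OSDs restart during the upgrade
ceph osd set noout

# on every node: switch the Ceph repository from nautilus to octopus and upgrade
sed -i 's/nautilus/octopus/' /etc/apt/sources.list.d/ceph.list
apt update && apt full-upgrade

# restart the daemons node by node: monitors and managers first, then the OSDs
systemctl restart ceph-mon.target
systemctl restart ceph-mgr.target
systemctl restart ceph-osd.target

# final steps once everything is upgraded (I have not unset noout yet)
ceph osd require-osd-release octopus
ceph osd unset noout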
Is there a way to get Ceph working again as it is? If not, what would be the best way to get back to a working state?
Here is what I can say about the current situation, in addition to the detailed information below:
- "ceph osd status" command is no longer working. If I enter it, the command is stuck and can only be interrupted with Ctrl+C
- I restarted all osds on all nodes at the same time. I assume this was not good.
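If it helps with the diagnosis, I could run commands like the following and post the output. As far as I understand, "ceph osd status" is answered by the manager, while these go to the monitors and the local services directly; osd.7 is just the OSD that reports slow ops in the status output further down.

Code:
# which OSDs the cluster currently considers up/down, grouped per host
ceph osd tree

# OSD map header, including the flags (noout) that are set
ceph osd dump | head -n 20

# state and recent log of one of the affected OSD services (osd.7 as an example)
systemctl status ceph-osd@7
journalctl -u ceph-osd@7 --since "1 hour ago" | tail -n 50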
Current status information
Versions
https://nopaste.debianforum.de/41274
proxmox status
https://nopaste.debianforum.de/41275
ceph status
Code:
ceph status
  cluster:
    id:     bcfe05fc-6690-4743-850e-ff80837b7cdc
    health: HEALTH_WARN
            noout flag(s) set
            4 osds down
            2 hosts (8 osds) down
            Reduced data availability: 129 pgs inactive
            Degraded data redundancy: 179886/269829 objects degraded (66.667%), 129 pgs degraded, 129 pgs undersized
            1 slow requests are blocked > 32 sec
            1 slow ops, oldest one blocked for 1249 sec, osd.7 has slow ops

  services:
    mon: 3 daemons, quorum kvm10,kvm11,kvm12 (age 21m)
    mgr: kvm10(active, since 20m), standbys: kvm11, kvm12
    osd: 12 osds: 4 up, 8 in
         flags noout

  data:
    pools:   2 pools, 129 pgs
    objects: 89.94k objects, 335 GiB
    usage:   1006 GiB used, 20 TiB / 21 TiB avail
    pgs:     100.000% pgs not active
             179886/269829 objects degraded (66.667%)
             129 undersized+degraded+peered
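If I read the numbers right, the 66.667% fits exactly two of three replicas being unavailable, which matches the two hosts (8 OSDs) being down; this assumes the pools are replicated with size 3, which I believe is the case here.

Code:
# 89,943 objects x 3 copies = 269,829 copies in total
# 179,886 degraded copies = exactly 2 x 89,943, i.e. 2 of the 3 copies per object
echo $((89943 * 3)) $((179886 / 89943))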
ceph health detail
https://nopaste.debianforum.de/41276