Search results

  1.

    pveceph osd create /dev/sda fails

    Hi all. Prologue: I had an unresponsive node (let's call it #6) which I could ping; the node's OSD was up and in; however, I could not SSH into it (err: "broken pipe" directly after entering the password). So I turned it off, then on. It booted, but its OSD did not start. Next I updated all...
  2.

    [SOLVED] OSDs fail on one node / cannot re-create

    Alwin, thank you so much for assisting on this; I have my OSDs up and running again. So the only thing I needed to do was run find /usr -name '*.pyc' -delete. At first I tried to hunt down the specific __init__.pyc files all over the paths, until I felt confident enough to dispose of them all...
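The cleanup quoted above can be sketched as a short shell session. It is shown here against a scratch directory rather than /usr, since deleting bytecode system-wide should only be done deliberately; the scratch path and file names are illustrative, and on the node the actual command was find /usr -name '*.pyc' -delete.

```shell
# Sketch of the stale-bytecode cleanup from the post above, run against a
# scratch directory instead of /usr. Deleting stale *.pyc files forces
# Python to recompile them from the .py sources on next import.
scratch=$(mktemp -d)
touch "$scratch/__init__.pyc" "$scratch/module.pyc" "$scratch/keep.py"

# Delete only compiled bytecode; plain .py sources are left in place.
find "$scratch" -name '*.pyc' -delete

ls "$scratch"    # only keep.py remains
```

After the real run on /usr, re-running ceph-volume recompiles the bytecode from the (now consistent) installed sources, which is why the ValueError disappeared.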
  3.

    [SOLVED] OSDs fail on one node / cannot re-create

    Sorry, I need to keep nagging you about this... I did stumble over the linked Stack Overflow thread; however, I don't quite understand how to fix it. Recap: even a simple 'ceph-volume' (without arguments) results in the same "ValueError..." whereas on the other nodes I get the "Available...
  4.

    [SOLVED] OSDs fail on one node / cannot re-create

    Well, I did, and it said "1 new installed" (i.e. ceph). However: still the same issue
  5.

    [SOLVED] OSDs fail on one node / cannot re-create

    Oops, just noticed: all nodes report ceph: 14.2.9-pve1, whereas the affected node has no such line in pveversion -v. [EDIT] However, in the UI under Ceph/OSD the node shows version 14.2.9 (as all the others do)
  6.

    [SOLVED] OSDs fail on one node / cannot re-create

    That's at first glance identical to the other nodes: proxmox-ve: 6.2-1 (running kernel: 5.4.44-2-pve) pve-manager: 6.2-6 (running version: 6.2-6/ee1d7754) pve-kernel-5.4: 6.2-4 pve-kernel-helper: 6.2-4 pve-kernel-5.3: 6.1-6 pve-kernel-5.0: 6.0-11 pve-kernel-5.4.44-2-pve: 5.4.44-2...
  7.

    [SOLVED] OSDs fail on one node / cannot re-create

    OK, tried apt remove ceph-osd, then apt install ceph-osd, and rebooted; same error :-(
  8.

    [SOLVED] OSDs fail on one node / cannot re-create

    Hi Alwin, ...appreciate your involvement; all packages report as up-to-date (no update action after 'apt update' or 'apt dist-upgrade'). PVE and Ceph versions are identical to the remaining nodes (see initial post), and python --version gives 2.7.16 on all nodes. pveceph osd create...
  9.

    [SOLVED] OSDs fail on one node / cannot re-create

    Pravednik, sure, I rebooted numerous times :-( [EDIT]: I also rebooted the other nodes, just in case it would make a difference...
  10.

    [SOLVED] OSDs fail on one node / cannot re-create

    Hi Pravednik, I tried creating a new GPT on each drive, then removed all partition tables from the drives; same issue. Following up on Alwin's hint, vgdisplay shows one VG named 'pve' (no others); is that the one I need to remove?
  11.

    [SOLVED] OSDs fail on one node / cannot re-create

    Hi all, esp. Alwin: LV means logical volume, right? Checking with ceph-volume lvm list returns basically the same error: Traceback (most recent call last): File "/usr/sbin/ceph-volume", line 6, in <module> from pkg_resources import load_entry_point File...
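The traceback quoted above dies inside ceph-volume's entry-point stub, at from pkg_resources import load_entry_point, before any Ceph code runs. A minimal sketch of isolating the failure (the print text is illustrative; on PVE 6 the interpreter was Python 2.7, but the same check works under Python 3):

```python
# Minimal reproduction of the import that /usr/sbin/ceph-volume performs
# on line 6 per the traceback above. If this import alone raises, the
# problem lies in the Python/setuptools installation (e.g. stale .pyc
# bytecode), not in Ceph itself.
from pkg_resources import load_entry_point

print("pkg_resources imports cleanly:", callable(load_entry_point))
```

Running this on the broken node reproduced the ValueError without involving ceph-volume at all, which points the blame at the Python environment rather than the Ceph packages.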
  12.

    [SOLVED] OSDs fail on one node / cannot re-create

    Hi all, my cluster consists of 6 nodes with 3 OSDs each (18 OSDs total), PVE 6.2-6 and Ceph 14.2.9. BTW, it's been up and running fine for 7 months now and has gone through all updates flawlessly so far. However, after rebooting the nodes one after the other while updating to 6.2-6, the 3 OSDs on...