Search results

  1. [SOLVED] OSDs fail on one node / cannot re-create

    Well, I did, and it said "1 new installed" (i.e. ceph) - however, still the same issue.
  2. [SOLVED] OSDs fail on one node / cannot re-create

    Oops, just noticed: all nodes report ceph: 14.2.9-pve1, whereas the affected node has no such line in pveversion -v. [EDIT] However, under UI/Ceph/OSD the node shows version 14.2.9 (like all the others) - see the version-check sketch after this list.
  3. [SOLVED] OSDs fail on one node / cannot re-create

    That's at first glance identical to the other nodes:
    proxmox-ve: 6.2-1 (running kernel: 5.4.44-2-pve)
    pve-manager: 6.2-6 (running version: 6.2-6/ee1d7754)
    pve-kernel-5.4: 6.2-4
    pve-kernel-helper: 6.2-4
    pve-kernel-5.3: 6.1-6
    pve-kernel-5.0: 6.0-11
    pve-kernel-5.4.44-2-pve: 5.4.44-2...
  4. [SOLVED] OSDs fail on one node / cannot re-create

    OK, tried apt remove ceph-osd, then apt install ceph-osd, and rebooted - same error :-( (see the reinstall sketch after this list)
  5. [SOLVED] OSDs fail on one node / cannot re-create

    Hi Alwin ...appreciate your involvement; all packages report to be up-to-date (no update action after 'apt update' or 'apt dist-upgrade'). PVE and Ceph versions are identical to the remaining nodes (see initial post), and python --version gives 2.7.16 on all nodes. pveceph osd create... - see the OSD-create sketch after this list.
  6. [SOLVED] OSDs fail on one node / cannot re-create

    Pravednik, sure, I rebooted numerous times :-( [EDIT]: I also rebooted the other nodes - just in case it would make a difference...
  7. [SOLVED] OSDs fail on one node / cannot re-create

    Hi Pravednik, I tried creating a new GPT on each drive, then removed all partition tables from the drives ...same issue. Following up on Alwin's hint, vgdisplay shows one VG named 'pve' (no others) - is that the one I need to remove? (See the disk-wipe/VG sketch after this list.)
  8. [SOLVED] OSDs fail on one node / cannot re-create

    Hi all, esp. Alwin: LV means logical volume, right? Checking with ceph-volume lvm list returns basically the same error (see the pkg_resources sketch after this list):
    Traceback (most recent call last):
      File "/usr/sbin/ceph-volume", line 6, in <module>
        from pkg_resources import load_entry_point
      File...
  9. [SOLVED] OSDs fail on one node / cannot re-create

    Hi all, my cluster consists of 6 nodes with 3 OSDs each (18 OSDs total), PVE 6.2-6 and Ceph 14.2.9. BTW, it's been up and running fine for 7 months now and has gone through all updates flawlessly so far. However, after rebooting the nodes one after the other upon updating to 6.2-6, the 3 OSDs on... (see the status-check sketch below)
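
Version-check sketch (re result 2): a quick way to compare the Ceph package state across nodes is to query pveversion and dpkg on each node and diff the output. This only restates the checks discussed in the thread; it is not taken verbatim from it.

    pveversion -v | grep -i ceph    # the affected node reportedly lacks the "ceph: 14.2.9-pve1" line
    dpkg -l | grep ceph             # installed Ceph packages and their versions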
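
Reinstall sketch (re result 4): the remove/install cycle described there can also be done in one step; a minimal sketch, assuming ceph-osd is the package to refresh (--reinstall keeps existing configuration files).

    apt-get update
    apt-get install --reinstall ceph-osd    # re-unpacks the package files without purging configs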
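
OSD-create sketch (re result 5): the snippet truncates the pveceph osd create command; on PVE 6 / Ceph Nautilus the general form is the one below, with /dev/sdX as a placeholder rather than a device taken from the thread.

    pveceph osd create /dev/sdX    # /dev/sdX is a placeholder device name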
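
Disk-wipe/VG sketch (re result 7): the 'pve' VG reported by vgdisplay is normally the node's system volume group (root, swap, local-lvm) and should not be removed; leftover OSD volume groups would usually be named 'ceph-<uuid>'. Below is a hedged cleanup sketch with /dev/sdX as a placeholder device.

    # Check for leftover LVM metadata first; OSD VGs normally appear as 'ceph-<uuid>':
    pvs
    vgs
    lvs
    # Full wipe of a former OSD disk (double-check the device name before running):
    ceph-volume lvm zap /dev/sdX --destroy
    # If ceph-volume itself is broken (see result 8), sgdisk can clear the partition table instead:
    sgdisk --zap-all /dev/sdX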
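
pkg_resources sketch (re result 8): the traceback fails before any Ceph code runs, so it points at a broken Python setuptools/pkg_resources installation on that node rather than at the disks. The package names below are the standard Debian Buster ones and are an assumption, not something stated in the thread.

    # Which Python interpreter does ceph-volume use?
    head -n1 /usr/sbin/ceph-volume
    # Can that interpreter import pkg_resources at all?
    python2 -c 'import pkg_resources; print(pkg_resources.__file__)'
    python3 -c 'import pkg_resources; print(pkg_resources.__file__)'
    # If the import fails, reinstalling the matching setuptools packages may repair it:
    apt-get install --reinstall python-pkg-resources python-setuptools      # for Python 2
    apt-get install --reinstall python3-pkg-resources python3-setuptools    # for Python 3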
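
Status-check sketch (re result 9, the initial post): a minimal way to see how the cluster itself reports the three failed OSDs; the OSD ID used with systemctl/journalctl is a placeholder.

    ceph -s                               # overall cluster health
    ceph osd tree                         # which OSDs are down, and on which host
    systemctl status ceph-osd@0.service   # per-OSD service state; the ID 0 is a placeholder
    journalctl -b -u ceph-osd@0.service   # log of that OSD service since the last boot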