Search results

  1. Ceph upgrade to Nautilus - error mount point and no "uuid"

    root@node002:/dev/disk/by-partuuid# cat /etc/ceph/osd/2-9fef792d-e0fd-4d9f-9b99-3040e636cf16.json { "active": "ok", "block": { "path": "/dev/disk/by-partuuid/8755dd67-fee5-46f2-b0eb-e9fd75725722", "uuid": "8755dd67-fee5-46f2-b0eb-e9fd75725722" }, "block_uuid"...
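    The JSON above is the metadata file that "ceph-volume simple scan" writes to /etc/ceph/osd/ for osd.2. As a rough sketch (assuming the scan completed and the file shown is intact), such an OSD is normally brought back up from that metadata with "ceph-volume simple activate":

      # Activate osd.2 from the scan metadata; id and fsid are taken from the filename above
      ceph-volume simple activate 2 9fef792d-e0fd-4d9f-9b99-3040e636cf16

      # Or point directly at the JSON file
      ceph-volume simple activate --file /etc/ceph/osd/2-9fef792d-e0fd-4d9f-9b99-3040e636cf16.json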
  2. Ceph upgrade to Nautilus - error mount point and no "uuid"

    Dear all, I am busy upgrading Ceph to Nautilus (https://pve.proxmox.com/wiki/Ceph_Luminous_to_Nautilus), but I get this error when running the ceph-volume simple scan: root@node002:/dev/disk/by-partuuid# ceph-volume simple scan /dev/sdc1 Running command: /sbin/cryptsetup status /dev/sdc1...
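    For context, the linked Luminous-to-Nautilus guide converts old ceph-disk OSDs with the "simple" scan/activate workflow; the cryptsetup line in the output is the scan probing whether the partition is encrypted. A sketch of the documented sequence (the exact devices on a given node will differ):

      # Scan all running ceph-disk OSDs and write their metadata to /etc/ceph/osd/
      ceph-volume simple scan

      # Or scan a single data partition, as in the post above
      ceph-volume simple scan /dev/sdc1

      # Re-enable the OSDs via systemd from the scanned metadata
      ceph-volume simple activate --all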
  3. Failed to start Ceph disk activation: /dev/sd* and OSD's down after Proxmox upgrade to v6

    OK, I found the exact problem: after the upgrade to Proxmox 6, Debian renamed my network interfaces. I had to change the old names to the new ones (ens2f0, ens2f1) in /etc/network/interfaces, restart the network interfaces, and everything is up and running on the latest 5.x kernel.
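    For reference, this is the kind of change meant here: after the Proxmox 6 / Debian Buster upgrade the NICs come up under new predictable names, so the old names in /etc/network/interfaces must be replaced. A hedged sketch of a bond stanza using the new names; only the interface names and the 10.0.1.2/24 address come from this thread, the bond mode and options are assumptions:

      auto bond0
      iface bond0 inet static
              address 10.0.1.2/24
              bond-slaves ens2f0 ens2f1
              bond-mode active-backup
              bond-miimon 100

      # Then apply the change, e.g.:
      systemctl restart networking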
  4. Failed to start Ceph disk activation: /dev/sd* and OSD's down after Proxmox upgrade to v6

    It was the kernel :) I loaded a different kernel and now it seems to work. Thanks a lot for all your help, Alwin.
  5. Failed to start Ceph disk activation: /dev/sd* and OSD's down after Proxmox upgrade to v6

    Now the question is why these are down after the upgrade and how I can fix them :) But I will dive into this and let you know more soon...
  6. Failed to start Ceph disk activation: /dev/sd* and OSD's down after Proxmox upgrade to v6

    I MAY have found something: this bond looks DOWN: bond0: <NO-CARRIER,BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000 link/ether d2:6e:67:5f:24:71 brd ff:ff:ff:ff:ff:ff inet 10.0.1.2/24 brd 10.0.1.255 scope global bond0
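    NO-CARRIER with "state DOWN" on the bond usually means none of its member links are up, which fits the renamed-interface fix reported in result 3. Two quick checks (a sketch, assuming the members are the ens2f0/ens2f1 ports mentioned in that result):

      # Bond state and which slaves it currently has
      cat /proc/net/bonding/bond0

      # Link state of the (renamed) member ports
      ip -br link show ens2f0
      ip -br link show ens2f1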
  7. Failed to start Ceph disk activation: /dev/sd* and OSD's down after Proxmox upgrade to v6

    Yes, the keyring(s) exist. cat ceph.conf [global] auth client required = cephx auth cluster required = cephx auth service required = cephx cluster network = 10.0.1.0/24 fsid = 09935360-cfe7-48d4-ac76-c02e0fdd95de mon allow pool delete = true...
  8. Failed to start Ceph disk activation: /dev/sd* and OSD's down after Proxmox upgrade to v6

    Here are some errors from ceph-osd.0.log: 2019-10-25 15:16:32.695452 7f9063aa4e80 0 _get_class not permitted to load sdk 2019-10-25 15:16:32.695624 7f9063aa4e80 0 <cls> /root/sources/pve/ceph/ceph-12.2.12/src/cls/cephfs/cls_cephfs.cc:197: loading cephfs 2019-10-25 15:16:32.695764...
  9. Failed to start Ceph disk activation: /dev/sd* and OSD's down after Proxmox upgrade to v6

    Yes, look at this: ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 15.71759 root default -3 2.61960 host node002 0 ssd 0.43660 osd.0 down 0 1.00000 1 ssd 0.43660 osd.1 down 0 1.00000 2 ssd 0.43660 osd.2 down 0 1.00000 3 ssd 0.43660 osd.3 down 0 1.00000 4 ssd 0.43660 osd.4 down...
  10. Failed to start Ceph disk activation: /dev/sd* and OSD's down after Proxmox upgrade to v6

    journalctl -u ceph-osd@0.service -- Logs begin at Fri 2019-10-25 15:16:09 CEST, end at Fri 2019-10-25 15:59:42 CEST. -- Oct 25 15:16:31 node002 systemd[1]: Starting Ceph object storage daemon osd.0... Oct 25 15:16:31 node002 systemd[1]: Started Ceph object storage daemon osd.0. Oct 25 15:16:31...
  11. Failed to start Ceph disk activation: /dev/sd* and OSD's down after Proxmox upgrade to v6

    ceph-disk list mount: /var/lib/ceph/tmp/mnt.tsdm0D: /dev/sdc1 already mounted or mount point busy. mount: /var/lib/ceph/tmp/mnt.Z4VfLh: /dev/sdd1 already mounted or mount point busy. mount: /var/lib/ceph/tmp/mnt.Qk3ToO: /dev/sde1 already mounted or mount point busy. mount...
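    The "already mounted or mount point busy" lines from ceph-disk list suggest the data partitions are already mounted somewhere, or held busy. A quick way to see where before retrying activation (a generic sketch, not taken from the thread):

      # Anything Ceph-related that is currently mounted
      findmnt | grep ceph

      # Partitions, filesystems and mountpoints per disk
      lsblk -o NAME,FSTYPE,MOUNTPOINT /dev/sdc /dev/sdd /dev/sde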
  12. Failed to start Ceph disk activation: /dev/sd* and OSD's down after Proxmox upgrade to v6

    ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 15.71759 root default -3 2.61960 host node002 0 ssd 0.43660 osd.0 down 0 1.00000 1 ssd 0.43660 osd.1...
  13. Failed to start Ceph disk activation: /dev/sd* and OSD's down after Proxmox upgrade to v6

    root@node002:~# sudo systemctl start ceph-osd@osd.0 Job for ceph-osd@osd.0.service failed because the control process exited with error code. See "systemctl status ceph-osd@osd.0.service" and "journalctl -xe" for details. root@node002:~# systemctl status ceph-osd@osd.0.service ●...
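    One detail worth noting in the command above: the ceph-osd systemd template takes the bare OSD id as its instance name, so the unit for osd.0 is ceph-osd@0.service (as in the journalctl output in result 10), not ceph-osd@osd.0.service. A sketch of the corrected calls:

      systemctl start ceph-osd@0.service
      systemctl status ceph-osd@0.service
      journalctl -u ceph-osd@0.service -b --no-pager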
  14. Failed to start Ceph disk activation: /dev/sd* and OSD's down after Proxmox upgrade to v6

    No output with debsums -s. I added another monitor on another node (node002 is the problem node): ceph -s cluster: id: 09935360-cfe7-48d4-ac76-c02e0fdd95de health: HEALTH_OK services: mon: 2 daemons, quorum node003,node004 mgr: node003(active), standbys: node004...
  15. Failed to start Ceph disk activation: /dev/sd* and OSD's down after Proxmox upgrade to v6

    ceph -s cluster: id: 09935360-cfe7-48d4-ac76-c02e0fdd95de health: HEALTH_OK services: mon: 2 daemons, quorum node003,node004 mgr: node003(active), standbys: node004, node006 osd: 36 osds: 30 up, 30 in data: pools: 1 pools, 1024 pgs objects: 941.03k...
  16. Failed to start Ceph disk activation: /dev/sd* and OSD's down after Proxmox upgrade to v6

    ceph versions { "mon": { "ceph version 12.2.12 (39cfebf25a7011204a9876d2950e4b28aba66d11) luminous (stable)": 2 }, "mgr": { "ceph version 12.2.12 (39cfebf25a7011204a9876d2950e4b28aba66d11) luminous (stable)": 3 }, "osd": { "ceph version 12.2.11...
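    The snippet shows the mons and mgrs on 12.2.12 while at least some OSDs report 12.2.11, so the installed packages per node are worth comparing. Even on the node where "ceph versions" gives no output (result 19), the local version can still be checked without talking to the cluster; a sketch:

      # Version of the locally installed OSD binary (no cluster connection needed)
      ceph-osd --version

      # Installed Ceph packages on this Proxmox/Debian node
      dpkg -l | grep ceph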
  17. Failed to start Ceph disk activation: /dev/sd* and OSD's down after Proxmox upgrade to v6

    Is there also an option to force a reinstall of Proxmox?
  18. Failed to start Ceph disk activation: /dev/sd* and OSD's down after Proxmox upgrade to v6

    The only thing that went wrong was that I had installed kernelcare; you have to remove it before you start the upgrade, so I had to restart the upgrade, but it finished.
  19. Failed to start Ceph disk activation: /dev/sd* and OSD's down after Proxmox upgrade to v6

    I can't run 'ceph versions' on this node with the failed OSDs. It doesn't give any output.