[SOLVED] from zfs to ceph, problems creating osds

dirks

Member
Feb 6, 2016
Hi,

I ran into a problem yesterday when adding OSDs to a Ceph cluster using pveceph createosd. pveceph reported success, but ceph-deploy disk list showed the disks as unprepared, the ceph-osd services were not running, and /var/lib/ceph/osd was empty.
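For reference, these are roughly the checks I used (the node name pve1 is just a placeholder):

# list the disks as ceph-deploy sees them on the node
ceph-deploy disk list pve1
# check whether any osd daemons are actually running
systemctl status 'ceph-osd@*'
# prepared osds would show up as mounted directories in here
ls /var/lib/ceph/osd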

ceph-disk zap, a simple wipefs, zeroing the first 1024 blocks, and creating a new partition table (whether msdos or gpt) did not change the result. After a lot of further debugging I noticed the devices still had the zfs_member flag set, since the disks in that node had been used with zfs beforehand. A wipefs --all on each of the devices cleared the flag. That was followed by a ceph-disk zap (just to be sure) and a pveceph createosd, which finally resulted in working osds.
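In case it helps someone, the sequence that finally produced working osds looked roughly like this (/dev/sdX stands for each disk that was previously part of a zfs pool, adjust for your devices):

# show remaining signatures; the zfs_member label was still present here
wipefs /dev/sdX
# erase all signatures from the device, including zfs_member
wipefs --all /dev/sdX
# zap the disk once more, just to be sure
ceph-disk zap /dev/sdX
# create the osd again via proxmox
pveceph createosd /dev/sdX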

Just thought I'd share that experience, as the solution given at https://pve.proxmox.com/wiki/Ceph_Server did not work for me, and both ceph and zfs are probably quite common storage solutions among proxmox users.
 
I saw the same with early 5.0, but it was fixed with a later 5.0 release via
pveceph purge ; apt-get update ; apt-get upgrade