[SOLVED] from zfs to ceph, problems creating osds

dirks

Member
Feb 6, 2016
Hi,

I ran into a problem yesterday when adding OSDs to a Ceph cluster using pveceph createosd. pveceph reported success, but ceph-deploy disk list showed the disks as unprepared, the ceph-osd services were not running, and /var/lib/ceph/osd was empty.
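For anyone wanting to check for the same symptoms, these are roughly the commands involved ('node1' and the paths are placeholders, not my actual setup):

Code:
# from the admin node: the disks show up as unprepared
ceph-deploy disk list node1
# on the affected node: no osd units are running
systemctl list-units 'ceph-osd@*'
# and the osd data directory is empty
ls -l /var/lib/ceph/osd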

ceph-disk zap, a simple wipefs, zeroing the first 1024 blocks, and creating a new partition table (regardless of whether msdos or gpt) did not change the result. After a lot of further debugging I noticed that the devices had the zfs_member flag set, as the disks in that node had been used with ZFS beforehand. A wipefs --all on each of the devices cleared the flag. That was followed by a ceph-disk zap (to be sure) and another pveceph createosd, which finally resulted in working OSDs.
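In case it helps someone, this is a sketch of the sequence that finally worked for me; /dev/sdX stands in for each of the affected disks:

Code:
# list remaining signatures; a former zfs pool member still reports zfs_member here
wipefs /dev/sdX
# remove all detected signatures, including the zfs_member one
wipefs --all /dev/sdX
# zap the disk again, just to be sure
ceph-disk zap /dev/sdX
# recreate the osd
pveceph createosd /dev/sdX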

Just thought I'd share this experience, as the solution given at https://pve.proxmox.com/wiki/Ceph_Server did not work for me, and both Ceph and ZFS are probably quite common storage solutions among Proxmox users.
 
I saw the same with early 5.0, but it was fixed with a later 5.0 via:

Code:
pveceph purge; apt-get update; apt-get upgrade
 
