New Ceph 12.1.1 packages for testing, GUI for creating bluestore OSDs

martin

Proxmox Staff Member
We uploaded the latest test packages to our test repositories, including Ceph Luminous 12.1.1 (still RC status) and a lot of bug fixes and smaller improvements to the Ceph GUI, including the ability to optionally create BlueStore OSDs via the GUI.

[Screenshot: Screen-Create-Bluestore-OSD.png]

Creating a Ceph monitor via the GUI now also sets up a Ceph Manager daemon (ceph-mgr). If you only want to create a Ceph Manager daemon, just run "pveceph createmgr" on the CLI on the node where you want to create it.
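For reference, a minimal CLI sketch of both steps (a sketch only; the exact output and option handling may differ on your nodes):

Code:
# create a monitor on this node; with these packages a manager daemon is created along with it
pveceph createmon
# or create just a manager daemon on this node
pveceph createmgr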

If you want to learn more about the details, take a deeper look into the changelogs of the packages.

In order to get these testing packages, you have to use the following repositories and run, as always, "apt-get update && apt-get dist-upgrade":

Proxmox VE pvetest
deb http://download.proxmox.com/debian/pve stretch pvetest

Proxmox Ceph test
deb http://download.proxmox.com/debian/ceph-luminous stretch test
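
If you prefer to keep these in separate files, a minimal sketch (the file names are just examples):

Code:
echo "deb http://download.proxmox.com/debian/pve stretch pvetest" > /etc/apt/sources.list.d/pvetest.list
echo "deb http://download.proxmox.com/debian/ceph-luminous stretch test" > /etc/apt/sources.list.d/ceph-test.list
apt-get update && apt-get dist-upgrade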

Please test and report back!
__________________
Best regards,

Martin Maurer
Proxmox VE project leader
 
I'm more familiar with Red Hat, so apologies if this is obvious, but I'm trying to switch to this test release. However, when I do an apt-get update, it's still trying to update the "InRelease" branch. How do I force the test branch? I updated the no-subscription and standard Ceph branches in the sources.list.d files, and I'm currently updated to PVE 5.0-29.

root@atlc1n1:/etc/apt# apt-get update
Ign:1 xxxxxxn.org/debian stretch InRelease
Hit:2 xxxpxxxbian.org stretch/updates InRelease
Hit:3 xxxxs.debian.org/debian stretch Release
Hit:5 xxxx.proxmox.com/debian/ceph-luminous stretch InRelease
Hit:6 xxxxom/debian/pve stretch InRelease

URLs removed
 
however when I do an apt-get update, it's still trying to update the "InRelease" branch
InRelease is just a special file in a Debian repository which apt tries to download; it has nothing to do with which repositories apt accesses.
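
If you want to verify which repositories are actually configured and where a given package would come from, something like this works (pve-manager is just an example package):

Code:
grep -rn "^deb" /etc/apt/sources.list /etc/apt/sources.list.d/
apt-cache policy pve-manager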
 
Hi!

Will it be possible to convert existing OSDs and pools into Bluestore format?

Best regards,
Gosha
 
Hi!

Will it be possible to convert existing OSDs and pools into Bluestore format?

Best regards,
Gosha

No, but you can replace the existing OSDs one by one with BlueStore ones if you want (note that this effectively means rebalancing all the data, so it will cause significant load on the cluster or take a very long time).
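
A rough sketch of that per-OSD cycle, with osd.4 and /dev/sdb purely as placeholders (the --bluestore option name for pveceph is an assumption; check "pveceph help createosd" on your node):

Code:
ceph osd out 4                              # let the cluster drain the data off this OSD
ceph -s                                     # wait until the rebalance has finished
systemctl stop ceph-osd@4                   # stop the daemon
ceph osd purge 4 --yes-i-really-mean-it     # remove it from the CRUSH/auth/osd maps (Luminous)
ceph-disk zap /dev/sdb                      # wipe the disk
pveceph createosd /dev/sdb --bluestore 1    # recreate it as a BlueStore OSD (option name is an assumption)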
 
No, but you can replace the existing OSDs one by one with BlueStore ones if you want (note that this effectively means rebalancing all the data, so it will cause significant load on the cluster or take a very long time).
Thanks.
 
I think the GUI should offer "filestore" as the optional choice instead of "bluestore", as filestore will be removed in the future and BlueStore is now the default for Luminous (and Proxmox 5.0 doesn't support Ceph servers older than Luminous).
 
FYI, just updated to 12.1.2 with the latest update, and Ceph shows a warning because you now have to enable an application on each pool.

You need to run this command in a shell:
ceph osd pool application enable pveceph rbd
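
For anyone else hitting this, the warning itself tells you which pools are affected; a hedged sketch (pveceph is the pool name from this setup, yours may differ):

Code:
ceph health detail                                # shows which pools have no application enabled
ceph osd pool application enable <poolname> rbd   # enable the rbd application on each affected pool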
 
FYI, just updated to 12.1.2 with the latest update, and Ceph shows a warning because you now have to enable an application on each pool.

You need to run this command in a shell:
ceph osd pool application enable pveceph rbd

We are aware; patches are already on the pve-devel list and will be included in one of the next pve-manager updates.
 
When destroying an OSD and recreating it with BlueStore, it is not added to Ceph.
I also tried it on the command line:

Code:
ceph-disk zap /dev/sdb                              # wipe the disk
ceph osd purge 4 --yes-i-really-mean-it             # remove osd.4 from the cluster
ceph-disk prepare --bluestore /dev/sdb --osd-id 4   # recreate it as a BlueStore OSD

The commands run, but nothing useful happens; the OSD is added to the tree,

Code:
-4       0.90869     host nod1
 3       0.90869         osd.3     up        0 1.00000
 4             0 osd.4           down        0 1.00000

But it is not shown in the Proxmox GUI and it can't be controlled.

It also doesn't work when adding

Code:
         enable experimental unrecoverable data corrupting features = bluestore rocksdb
         osd objectstore = bluestore

to the [global] section.
 
Trying to start the OSD gives the following:

Code:
Aug 11 14:28:04 nod1 systemd[1]: Starting Ceph object storage daemon osd.4...
Aug 11 14:28:04 nod1 ceph-osd-prestart.sh[2558]: OSD data directory /var/lib/ceph/osd/ceph-4 does not exist; bailing out.
Aug 11 14:28:04 nod1 systemd[1]: ceph-osd@4.service: Control process exited, code=exited status=1
Aug 11 14:28:04 nod1 systemd[1]: Failed to start Ceph object storage daemon osd.4.
Aug 11 14:28:04 nod1 systemd[1]: ceph-osd@4.service: Unit entered failed state.
Aug 11 14:28:04 nod1 systemd[1]: ceph-osd@4.service: Failed with result 'exit-code'.

Trying to activate it gives the following:

Code:
root@nod1:/var/lib/ceph/osd# ceph-disk activate-all
got monmap epoch 8
*** Caught signal (Illegal instruction) **
 in thread 7f9217c37e00 thread_name:ceph-osd
 ceph version 12.1.2 (cd7bc3b11cdbe6fa94324b7322fb2a4716a052a7) luminous (rc)
 1: (()+0xa4cc84) [0x55c8f0aedc84]
 2: (()+0x110c0) [0x7f92154520c0]
 3: (rocksdb::VersionBuilder::SaveTo(rocksdb::VersionStorageInfo*)+0x871) [0x55c8f0f61161]
 4: (rocksdb::VersionSet::Recover(std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool)+0x26bc) [0x55c8f0e452fc]
 5: (rocksdb::DBImpl::Recover(std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool, bool, bool)+0x11f) [0x55c8f0e0c71f]
 6: (rocksdb::DB::Open(rocksdb::DBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnFamilyHandle*, std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::DB**)+0xe40) [0x55c8f0e0e190]
 7: (rocksdb::DB::Open(rocksdb::Options const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, rocksdb::DB**)+0x698) [0x55c8f0e0f9f8]
 8: (RocksDBStore::do_open(std::ostream&, bool)+0x908) [0x55c8f0a32938]
 9: (RocksDBStore::create_and_open(std::ostream&)+0xd7) [0x55c8f0a34377]
 10: (BlueStore::_open_db(bool)+0x326) [0x55c8f09bd3d6]
 11: (BlueStore::mkfs()+0x856) [0x55c8f09ee1d6]
 12: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, uuid_d, int)+0x346) [0x55c8f052a2b6]
 13: (main()+0xe48) [0x55c8f042f4f8]
 14: (__libc_start_main()+0xf1) [0x7f92144072b1]
 15: (_start()+0x2a) [0x55c8f050606a]
2017-08-11 15:28:15.619749 7f9217c37e00 -1 *** Caught signal (Illegal instruction) **
 in thread 7f9217c37e00 thread_name:ceph-osd
 [the same stack trace is repeated twice more in the crash log dump]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

mount_activate: Failed to activate
ceph-disk: Command '['/usr/bin/ceph-osd', '--cluster', 'ceph', '--mkfs', '-i', u'4', '--monmap', '/var/lib/ceph/tmp/mnt.7mO3oP/activate.monmap', '--osd-data', '/var/lib/ceph/tmp/mnt.7mO3oP', '--osd-uuid', u'5f51ee6d-2398-40de-91e8-f099f1c011a4', '--setuser', 'ceph', '--setgroup', 'ceph']' returned non-zero exit status -4
Removed /etc/systemd/system/ceph-osd.target.wants/ceph-osd@3.service.
Created symlink /etc/systemd/system/ceph-osd.target.wants/ceph-osd@3.service -> /lib/systemd/system/ceph-osd@.service.
ceph-disk: Error: One or more partitions failed to activate
 
We uploaded the latest Ceph Luminous 12.1.3 (probably the last RC) to our Ceph test repository.
(deb http://download.proxmox.com/debian/ceph-luminous stretch test)

As there were (again) some changes/reverts in the Ceph sources, there are some minor GUI glitches; the CLI works.
These will be fixed together with the final 12.2.0 packages.
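
As before, pulling the new packages in is just the usual update/upgrade, assuming the Ceph test repository from the first post is already configured:

Code:
apt-get update && apt-get dist-upgrade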
 
We uploaded the latest Ceph Luminous 12.1.3 (probably the last RC) to our Ceph test repository.
(deb http://download.proxmox.com/debian/ceph-luminous stretch test)

As there were (again) some changes/reverts in the Ceph sources, there are some minor GUI glitches; the CLI works.
These will be fixed together with the final 12.2.0 packages.

Still not ready for older Xeon processors.
 
Yes, here is the bug tracker link - http://tracker.ceph.com/issues/20529

Your Xeon E5335U CPUs are about 10 years old (released more than 10 years ago), so I really wonder why you use such old and outdated hardware for a high-availability setup and/or a virtualization host - or is it just for testing?
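
For anyone unsure whether their CPU is affected, the tracker issue points at the bundled rocksdb being built for newer instruction sets; a quick way to see which SIMD extensions a CPU advertises (which extension the build actually requires is an assumption, see the tracker):

Code:
grep -Eo 'sse4_1|sse4_2|avx2?' /proc/cpuinfo | sort -u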

Those are my Ceph/backup nodes; they are more than sufficient for that.
And incidentally a test LXC container, for which they work fine as well.
 
root@bowie:~# pveceph createmgr
creating manager directory '/var/lib/ceph/mgr/ceph-bowie'
creating keys for 'mgr.bowie'
unable to open file '/var/lib/ceph/mgr/ceph-bowie/keyring.tmp.24284' - No such file or directory
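
The error suggests the manager directory was not actually created; a purely speculative check/workaround (the path is taken from the output above, and the ceph:ceph ownership is an assumption):

Code:
ls -ld /var/lib/ceph/mgr /var/lib/ceph/mgr/ceph-bowie   # does the directory exist?
mkdir -p /var/lib/ceph/mgr/ceph-bowie                   # if not, create it by hand
chown ceph:ceph /var/lib/ceph/mgr/ceph-bowie            # ownership is an assumption
pveceph createmgr                                       # and retry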
 
