New Ceph 12.1.1 packages for testing, GUI for creating bluestore OSDs

Code:
root@bowie:~# pveceph createmgr
creating manager directory '/var/lib/ceph/mgr/ceph-bowie'
creating keys for 'mgr.bowie'
unable to open file '/var/lib/ceph/mgr/ceph-bowie/keyring.tmp.24284' - No such file or directory

That means that creating a temporary file in /var/lib/ceph/mgr/ceph-bowie failed. Did the directory creation actually succeed (i.e., does /var/lib/ceph/mgr/ceph-bowie exist)?
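A quick way to check, for example:
Code:
ls -ld /var/lib/ceph/mgr /var/lib/ceph/mgr/ceph-bowie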
 
Neither that nor /var/lib/ceph/mgr exists.
It seems that, for some magical reason, the ceph-mgr package doesn't work unless it's actually installed. :-]
The moral of the story, though, is that pveceph should verify that the directory exists (and thus that the package is installed) before acting.
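A pre-flight check along these lines would have caught it (a hypothetical sketch, not actual pveceph code):
Code:
# bail out early if the ceph-mgr package (and hence its state directory) is missing
dpkg -s ceph-mgr >/dev/null 2>&1 || { echo "ceph-mgr is not installed" >&2; exit 1; }
[ -d /var/lib/ceph/mgr ] || { echo "/var/lib/ceph/mgr is missing" >&2; exit 1; }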
 

Could you file a bug for this? (Feel free to generalize it a bit, as ceph-mgr is not the only package hit by this.) Thanks!
 
Hi,

Seems that 12.1.4 is available and fixes some upgrade issues in 12.1.3... wait and see ;)

What will happen to those of us who have installed the Luminous RC on PVE 5 (once the stable version is ready)? Must we switch to the official repository, or keep yours?

Thanks again for your work.

Antoine
 

We will provide stable packages for Proxmox VE via download.proxmox.com.
 
Thanks.

I must admit that performance with Luminous and Bluestore is really amazing, even with only a few SAS drives...
 
I've been testing PVE 5.0 with ceph 12.1.x (luminous release candidate).

• cephfs on an EC pool seems OK; I want to do a bit more testing, but performance is pretty good

• rbd on an EC pool doesn't work yet from the GUI: the CLI requires a new option, --data-pool, which specifies the EC pool where the data is actually stored; a replicated pool provides the metadata, and that is the pool that would be listed in storage.cfg. So it looks like we need some reworking of the underlying rbd commands to be able to use the GUI; see the sketch below.
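For example, a rough sketch of what the CLI call looks like (pool and image names here are placeholders):
Code:
# metadata lives in the replicated pool; the data objects go to the EC pool
rbd create vm-100-disk-1 --size 32G --pool rbd_replicated --data-pool ecpool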

I'll test manually migrating a VM (creating the files on a different pool first).
 
Thanks,

BTW, what is the right method to uninstall Ceph on PVE 5? Either to reinstall it properly, or simply because we don't want to use it anymore.
(After changing the storage for each VM, of course)

Best regards,

Antoine
 
Remove the storage entry, then stop/remove/destroy all OSDs, managers, and monitors (in that order). Remove any leftover config files and directories in /etc/ceph/ and /var/lib/ceph/**/ (but don't remove the /etc/ceph and /var/lib/ceph/* directories themselves!). Remove the Ceph-related files in /etc/pve/priv/. If you want, you can then remove all the ceph-* packages except ceph-common, but that is not really needed (without a config and without any service instances, the packages don't do anything).

DISCLAIMER: all of the above is from memory and untested ;) Please try it out on a test system before attempting anything like this on a production system.
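For reference, those steps might translate into something like this (equally untested; the IDs are placeholders, and I'm assuming the pveceph destroy* subcommands match your version):
Code:
# stop and destroy all service instances first (repeat per OSD/mgr/mon ID)
pveceph destroyosd <osdid>
pveceph destroymgr <mgrid>
pveceph destroymon <monid>
# clear leftover state, but keep the top-level directories themselves
rm -rf /etc/ceph/*
rm -rf /var/lib/ceph/*/*
# remove the ceph-related keyrings/config from the cluster filesystem
rm /etc/pve/priv/ceph.*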
 
Hi there,

Sorry for the question, but:

Does the new Ceph have the same issue with slow backups to an NFS share?

Also, what date is planned for the release?

Thanks a lot

Alex
 
We have a production Ceph storage in use, and I want to set up another Ceph cluster for testing before upgrading.
We use an unmanaged 10G switch for Ceph.

Is it OK to use the same IP subnet for Ceph on production and test, or might that mess things up?
 
- pveceph install --version luminous creates a ceph.list with the wrong sources, so I had to do this to get the pvetest debs:
Code:
echo 'deb http://download.proxmox.com/debian/ceph-luminous stretch test' > /etc/apt/sources.list.d/ceph-test.list
then, after the install:
Code:
rm /etc/apt/sources.list.d/ceph.list
 
Is it correct that setting tunables to optimal doesn't work?

Code:
root@nod1:~/ceph# ceph osd crush tunables optimal
Error EINVAL: new crush map requires client version jewel but require_min_compat_client is firefly

It's a protection in case you still have older clients connected (old librbd for qemu, old kernel for lxc, maybe some stuck "rbd ..." command launched previously by Proxmox, ...).
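If no pre-jewel clients are connected anymore, something like this should work (a sketch; verify the output of the first command before raising the requirement):
Code:
# list the releases/features of all currently connected clients
ceph features
# once nothing older than jewel shows up, raise the floor and retry
ceph osd set-require-min-compat-client jewel
ceph osd crush tunables optimal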
 
Is there any plan to promote the 12.1.4 RC :rolleyes: to the regular updates instead of 12.1.2? :cool:

I want to mention that I really appreciate Proxmox, because of its staff and the community. :)

Thanks
 
Hi, the 12.1.4 RC is available in the pvetest repository...
 
