Migrating to BlueStore now? Or later?

Gerhard W. Recher

we will publish our 12.2.0 packages next week, ready for public testing.
Fine, Tom!

Would you recommend the steps in the Ceph documentation?

Also, a question aside from this: are my Mellanox ConnectX-3 Pro cards RDMA capable? That would speed up Ceph significantly, I guess...
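(For what it's worth, the ConnectX-3 Pro does support RDMA over Ethernet, i.e. RoCE. A quick way to check from the shell, assuming the ibverbs-utils package and the mlx4 kernel modules are installed:)

Code:
# list RDMA devices with their transport and port state
ibv_devinfo | grep -E 'hca_id|transport|state'
# ConnectX-3 cards are driven by the mlx4 modules
lsmod | grep mlx4

Note that Ceph's RDMA messenger was still experimental in Luminous, so any speedup would need careful testing.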
 
Hi Tom,

any exact release date for 12.2? This week is almost gone...

Regards,

Gerhard
 
our Ceph stable repo is available under:

Code:
deb http://download.proxmox.com/debian/ceph-luminous stretch main

our Ceph test repo is available under:

Code:
deb http://download.proxmox.com/debian/ceph-luminous stretch test

Currently, both contain the 12.2.0 packages.
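To enable one of them, add the line to an apt source file and update (a minimal sketch; the file name ceph.list is just a convention):

Code:
echo "deb http://download.proxmox.com/debian/ceph-luminous stretch main" \
    > /etc/apt/sources.list.d/ceph.list
apt update && apt full-upgrade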
 

Tom,

I just updated the cluster and am now performing an in-place upgrade to BlueStore.
Some strange results in the GUI: only after a shift-reload in Chrome do I get Ceph health results...
My procedure for each OSD will be:

Code:
#!/bin/bash
# Recreate one FileStore OSD as BlueStore, in place.
# Usage: ./migrate-osd.sh <osd-id>
ID=$1

# Take the OSD out and wait until the cluster has rebalanced.
echo "ceph osd out $ID"
ceph osd out $ID
sleep 10
while ! ceph health | grep -q HEALTH_OK ; do sleep 10 ; done

echo "systemctl stop ceph-osd@$ID.service"
systemctl stop ceph-osd@$ID.service
sleep 10

# Derive the whole-disk device from the mounted OSD partition:
# awk picks the device column, sed strips the partition suffix
# (handles both /dev/sdX1 and /dev/nvme0n1p1 style names).
DEVICE=$(mount | grep "/var/lib/ceph/osd/ceph-$ID " | awk '{print $1}' | sed -E 's/p?[0-9]+$//')

umount /var/lib/ceph/osd/ceph-$ID
echo "ceph-disk zap $DEVICE"
ceph-disk zap $DEVICE

# Destroy the OSD but keep its ID, so the new BlueStore OSD reuses it.
ceph osd destroy $ID --yes-i-really-mean-it
echo "ceph-disk prepare --bluestore $DEVICE --osd-id $ID"
ceph-disk prepare --bluestore $DEVICE --osd-id $ID

# wait some seconds for the metadata to become visible
sleep 10
ceph osd metadata $ID
ceph -s

echo "wait for cluster ok"
while ! ceph health | grep -q HEALTH_OK ; do echo -n "." ; sleep 10 ; done
ceph -s
echo "proceed with next"
 
Why don't you use the pveceph commands or the GUI? And upgrade to the latest packages on pvetest or pve-no-subscription; there are some fixes for the Ceph GUI included.
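For the command-line route, a rough pveceph equivalent of the script above might look like this (a sketch only; the device name is hypothetical, and the exact flags should be checked against man pveceph for the installed version):

Code:
ceph osd out 0                       # drain the OSD and wait for HEALTH_OK
pveceph destroyosd 0                 # remove the old FileStore OSD
pveceph createosd /dev/nvme0n1 -bluestore   # recreate it as BlueStore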
 
Tom,

the cluster is up to date with the latest fixes from today.
All 4 nodes rebooted (this fixes the refresh problem in the GUI, as stated earlier...).
I followed the Ceph instructions as stated in my initial post, and I asked whether I should do so, or which recommendations you have...

How do I accomplish an in-place migration to BlueStore with the GUI? I found no way...

Code:
pveversion --verbose
proxmox-ve: 5.0-21 (running kernel: 4.10.17-3-pve)
pve-manager: 5.0-31 (running version: 5.0-31/27769b1f)
pve-kernel-4.10.17-2-pve: 4.10.17-20
pve-kernel-4.10.15-1-pve: 4.10.15-15
pve-kernel-4.10.17-3-pve: 4.10.17-21
pve-kernel-4.10.11-1-pve: 4.10.11-9
pve-kernel-4.10.17-1-pve: 4.10.17-18
libpve-http-server-perl: 2.0-6
lvm2: 2.02.168-pve3
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-12
qemu-server: 5.0-15
pve-firmware: 2.0-2
libpve-common-perl: 5.0-16
libpve-guest-common-perl: 2.0-11
libpve-access-control: 5.0-6
libpve-storage-perl: 5.0-14
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.0-9
pve-qemu-kvm: 2.9.1-1
pve-container: 2.0-15
pve-firewall: 3.0-2
pve-ha-manager: 2.0-2
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.0.8-3
lxcfs: 2.0.7-pve4
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.6.5.11-pve17~bpo90
ceph: 12.2.0-pve1
 
Your signature is misleading... it still shows 5.0-30.

We will publish docs and a howto for this in the coming weeks. If you play with the command line, check our pveceph commands in detail:

> man pveceph
 

Tom,

I just forgot to update my signature...

Please pardon my dust :)
 
