OSDs are created as filestore, although filestore was not specified

jsterr

Renowned Member
Jul 24, 2020
I reinstalled one of the three Proxmox Ceph nodes with a new name and a new IP.
I removed all LVM data and wiped the filesystems of the old disks with:

Code:
# remove stale device-mapper mappings
dmsetup remove_all
# wipe all filesystem signatures from the disk
wipefs -af /dev/sda
# clear Ceph data and metadata from the device
ceph-volume lvm zap /dev/sda
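
A note on the zap step: ceph-volume lvm zap on its own leaves the LVM volume group and logical volumes in place; the --destroy flag removes them as well. A sketch of the more thorough variant, assuming the same device path (this is an option, not what I ran above):

Code:
# also removes the VG/LV structures a previous OSD left behind
ceph-volume lvm zap --destroy /dev/sda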

Now when I create OSDs via the GUI or CLI they always show up as filestore, and I don't get it; according to the documentation the default should be bluestore.

Code:
pveceph osd create /dev/sda -db_dev /dev/nvme0n1 -db_size 75
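
The GUI label aside, the backend can also be queried from Ceph itself; a minimal check, with <osd-id> standing in as a placeholder for the new OSD's id (not a value from this thread):

Code:
# prints the objectstore backend, e.g. "osd_objectstore": "bluestore"
ceph osd metadata <osd-id> | grep osd_objectstore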


Code:
root@pve-04:/# cat /etc/pve/ceph.conf
[global]
         auth_client_required = cephx
         auth_cluster_required = cephx
         auth_service_required = cephx
         cluster_network = 10.6.6.221/24
         fsid = aeb02f58-8b3d-4b5d-8cf2-a16af4959bbf
         mon_allow_pool_delete = true
         mon_host = 10.5.5.221 10.5.5.223 10.5.5.225
         osd_pool_default_min_size = 2
         osd_pool_default_size = 3
         public_network = 10.5.5.221/24

[client]
         keyring = /etc/pve/priv/$cluster.$name.keyring

[mds]
         keyring = /var/lib/ceph/mds/ceph-$id/keyring

[mds.pve-01]
         host = pve-01
         mds_standby_for_name = pve

[mds.pve-03]
         host = pve-03
         mds_standby_for_name = pve

[mon.pve-01]
         public_addr = 10.5.5.221

[mon.pve-03]
         public_addr = 10.5.5.223

[mon.pve-04]
         public_addr = 10.5.5.225

Code:
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable chooseleaf_stable 1
tunable straw_calc_version 1
tunable allowed_bucket_algs 54

# devices
device 0 osd.0 class hdd
device 1 osd.1 class hdd
device 2 osd.2 class hdd
device 3 osd.3 class hdd
device 4 osd.4 class hdd
device 5 osd.5 class hdd
device 11 osd.11 class hdd
device 13 osd.13 class hdd
device 14 osd.14 class hdd
device 15 osd.15 class hdd
device 16 osd.16 class hdd
device 17 osd.17 class hdd

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 zone
type 10 region
type 11 root

# buckets
host pve-01 {
    id -3        # do not change unnecessarily
    id -4 class hdd        # do not change unnecessarily
    # weight 5.898
    alg straw2
    hash 0    # rjenkins1
    item osd.0 weight 0.983
    item osd.1 weight 0.983
    item osd.2 weight 0.983
    item osd.3 weight 0.983
    item osd.4 weight 0.983
    item osd.5 weight 0.983
}
host pve-03 {
    id -7        # do not change unnecessarily
    id -8 class hdd        # do not change unnecessarily
    # weight 5.898
    alg straw2
    hash 0    # rjenkins1
    item osd.11 weight 0.983
    item osd.13 weight 0.983
    item osd.14 weight 0.983
    item osd.15 weight 0.983
    item osd.16 weight 0.983
    item osd.17 weight 0.983
}
host pve-04 {
    id -5        # do not change unnecessarily
    id -6 class hdd        # do not change unnecessarily
    # weight 0.000
    alg straw2
    hash 0    # rjenkins1
}
root default {
    id -1        # do not change unnecessarily
    id -2 class hdd        # do not change unnecessarily
    # weight 11.794
    alg straw2
    hash 0    # rjenkins1
    item pve-01 weight 5.897
    item pve-03 weight 5.897
    item pve-04 weight 0.000
}

# rules
rule replicated_rule {
    id 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}

# end crush map
 
Now when I create OSDs via the GUI or CLI they always show up as filestore, and I don't get it; according to the documentation the default should be bluestore.
How did you determine that they are filestore?

What is the output of pveversion -v?
 
How did you determine that they are filestore?

What is the output of pveversion -v?

The GUI says filestore; all other OSDs from the other nodes show as bluestore.

root@pve-04:~# pveversion -v
proxmox-ve: 6.2-2 (running kernel: 5.4.34-1-pve)
pve-manager: 6.2-15 (running version: 6.2-15/48bd51b6)
pve-kernel-5.4: 6.3-1
pve-kernel-helper: 6.3-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph: 14.2.11-pve1
ceph-fuse: 14.2.11-pve1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.2-4
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-10
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.1-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.3-10
pve-cluster: 6.2-1
pve-container: 3.2-4
pve-docs: 6.2-6
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-6
pve-xtermjs: 4.7.0-2
qemu-server: 6.2-20
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1
 
I'm running into the same issue with v8.1.3. If I create an OSD that uses a separate block.db, the UI shows the OSD as filestore, while the CLI ceph call shows it as bluestore.

Update: It looks like the display may default to filestore, as it eventually updated to bluestore.
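
For anyone comparing the two views, a sketch of a CLI check, run on the node that holds the OSD (standard ceph-volume, nothing specific to this setup):

Code:
# for a bluestore OSD with a separate DB device this prints
# both a [block] and a [db] section per OSD
ceph-volume lvm list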
 
