[SOLVED] With the latest PVE & Ceph 0.94.9, creating an OSD fails

lynn_yudi

Hi,

I'm hitting the same issue as in this thread, but couldn't fix it:
https://forum.proxmox.com/threads/proxmox-4-2-ceph-hammer-create-osd-failed.28047/

So I'm creating a new thread for it here.

Code:
# pveversion -v
proxmox-ve: 4.2-64 (running kernel: 4.4.16-1-pve)
pve-manager: 4.2-18 (running version: 4.2-18/158720b9)
pve-kernel-4.4.16-1-pve: 4.4.16-64
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-44
qemu-server: 4.0-86
pve-firmware: 1.1-9
libpve-common-perl: 4.0-72
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-57
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-qemu-kvm: 2.6.1-2
pve-container: 1.0-73
pve-firewall: 2.0-29
pve-ha-manager: 1.0-33
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.4-1
lxcfs: 2.0.3-pve1
cgmanager: 0.39-pve1
criu: 1.6.0-1
novnc-pve: 0.5-8
zfsutils: 0.6.5.7-pve10~bpo80
openvswitch-switch: 2.5.0-1
ceph: 0.94.9-1~bpo80+1

Code:
# ceph -v
ceph version 0.94.9 (fe6d859066244b97b24f09d46552afc2071e6f90)

Code:
[global]
         auth client required = cephx
         auth cluster required = cephx
         auth service required = cephx
         auth supported = cephx
         cluster network = 192.168.0.0/16
         filestore xattr use omap = true
         fsid = babd2e4d-a6b9-4c21-9b46-98bc87cbe28d
         keyring = /etc/pve/priv/$cluster.$name.keyring
         max open files = 131072
         mon clock drift allowed = 1
         mon clock drift warn backoff = 30
         mon osd down out interval = 600
         mon osd full ratio = .95
         mon osd nearfull ratio = .75
         mon osd report timeout = 300
         osd journal size = 20480
         osd pool default min size = 1
         osd pool default size = 2
         public network = 192.168.0.0/16

[osd]
         filestore max sync interval = 15
         filestore min sync interval = 10
         filestore queue committing max bytes = 10485760000
         filestore queue committing max ops = 5000
         filestore queue max bytes = 10485760
         filestore queue max ops = 25000
         journal max write bytes = 1073714824
         journal max write entries = 10000
         journal queue max bytes = 10485760000
         journal queue max ops = 50000
         keyring = /var/lib/ceph/osd/ceph-$id/keyring
         osd client message size cap = 2147483648
         osd deep scrub stride = 131072
         osd disk threads = 4
         osd map cache bl size = 128
         osd map cache size = 1024
         osd max backfills = 4
         osd max write size = 512
         osd mkfs options xfs = -f
         osd mkfs type = xfs
         osd mount options xfs = rw,noatime,inode64,logbsize=256k,allocsize=4M
         osd op threads = 8
         osd recovery max active = 10
         osd recovery op priority = 4
         rbd cache = true
         rbd cache max dirty = 134217728
         rbd cache max dirty age = 5
         rbd cache size = 268435456
         rbd cache writethrough until flush = false

[mon.2]
         host = test03
         mon addr = 192.168.7.5:6789

[mon.0]
         host = test01
         mon addr = 192.168.7.1:6789

[mon.1]
         host = test02
         mon addr = 192.168.7.3:6789

How can I fix this issue? :<
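For reference, the OSD is being created from the GUI ("create OSD on /dev/sdb (xfs)"); the rough CLI equivalent on PVE 4.x, plus the keyrings involved, would be something like the sketch below. The bootstrap-osd path is assumed from a standard install, and the admin keyring path follows from the $cluster.$name expansion in the config above.

Code:
# CLI equivalent of the GUI "create OSD" task (PVE 4.x):
pveceph createosd /dev/sdb

# cephx keyrings involved in OSD creation on this node:
ls -l /etc/pve/priv/ceph.client.admin.keyring      # $cluster.$name from [global]
ls -l /var/lib/ceph/bootstrap-osd/ceph.keyring     # used by ceph-disk to register the OSD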
 
What's the error message you get?
 
What's the error message you get?
No, I can't find any error message or log for this; the task output and dmesg look like this:

Code:
()
create OSD on /dev/sdb (xfs)
Creating new GPT entries.
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Creating new GPT entries.
The operation has completed successfully.
Setting name!
partNum is 1
REALLY setting name!
The operation has completed successfully.
Setting name!
partNum is 0
REALLY setting name!
The operation has completed successfully.
meta-data=/dev/sdb1              isize=2048   agcount=4, agsize=242843583 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=971374331, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=474303, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
The operation has completed successfully.
TASK OK

Code:
# dmesg
[16083.406397] XFS (sdb): Unmounting Filesystem
[16098.372592]  sdb:
[16098.394675]  sdb:
[16098.563106]  sdb:
[16099.570557]  sdb: sdb2
[16099.784371]  sdb: sdb2
[16099.940561]  sdb: sdb2
[16100.948229]  sdb: sdb1 sdb2
[16101.181911]  sdb: sdb1 sdb2
[16111.167183] XFS (sdb1): Mounting V4 Filesystem
[16112.780662] XFS (sdb1): Ending clean mount
[16112.804822] XFS (sdb1): Unmounting Filesystem
[16112.812107]  sdb: sdb1 sdb2
[16113.255426] XFS (sdb1): Mounting V4 Filesystem
[16113.267822] XFS (sdb1): Ending clean mount
[16113.411026] XFS (sdb1): Unmounting Filesystem
[16113.820586]  sdb: sdb1 sdb2
[16113.971017] XFS (sdb1): Mounting V4 Filesystem
[16113.975759] XFS (sdb1): Ending clean mount
[16114.127050] XFS (sdb1): Unmounting Filesystem
[16114.170797]  sdb: sdb1 sdb2
[16114.325369] XFS (sdb1): Mounting V4 Filesystem
[16114.329812] XFS (sdb1): Ending clean mount
[16114.487053] XFS (sdb1): Unmounting Filesystem
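Even though the task ends with TASK OK and dmesg shows clean XFS mounts, the OSD never actually comes up. A few ways to check whether it was registered in the cluster (the osd numbering below is only an example):

Code:
ceph osd tree           # the new osd.N should appear under this host
ceph-disk list          # /dev/sdb1 should show up as "ceph data" with its state
ls /var/lib/ceph/osd/   # a populated ceph-N directory should exist for the new OSD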
 
Thanks...
This problem has been solved.

The cause was simply that /var/lib/ceph/bootstrap-osd/ceph.keyring was an old keyring left over from another setup.
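In other words, the key in /var/lib/ceph/bootstrap-osd/ceph.keyring no longer matched the bootstrap-osd key the monitors know about, so the new OSD could not be registered. A rough sketch of how to compare and refresh it (standard paths assumed, verify on your own cluster) and then retry:

Code:
# key the monitors expect for the bootstrap-osd user
ceph auth get client.bootstrap-osd

# key currently on the node
cat /var/lib/ceph/bootstrap-osd/ceph.keyring

# if they differ, re-export the current key over the stale file and retry
ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring
pveceph createosd /dev/sdb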
 
Hello,

I have a question about your Ceph configuration: why did you choose osd_pool_default_size = 2? Thanks.
 
