Hi community!
I upgraded Ceph from 12 (Luminous) to 14 (Nautilus) following this document - https://pve.proxmox.com/wiki/Ceph_Luminous_to_Nautilus - but now I have a problem with the monitor on this node.
Is there any way to get past this error? Thanks!
Bash:
root@pve03:/etc/ceph# pveceph createmon
monitor 'pve03' already exists
Bash:
root@pve03:/etc/ceph# pveceph destroymon pve03
no such monitor id 'pve03'
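So createmon says the monitor already exists, but destroymon says there is no such monitor id. I could not figure out where pveceph still sees it; my guess is the leftover piece is the systemd unit and/or old files on disk rather than the monmap. These are the checks I was planning to run (just standard systemd/Ceph tooling, nothing pveceph-specific):

```shell
# is a stale systemd unit for this monitor still present/enabled?
systemctl status ceph-mon@pve03.service
systemctl is-enabled ceph-mon@pve03.service

# any leftover monitor data directory on disk?
ls -la /var/lib/ceph/mon/

# any leftover mon keyring under the cluster-wide config store?
ls -la /etc/pve/priv/
```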
Bash:
root@pve03:/etc/ceph# cat /etc/pve/ceph.conf
[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
cluster network = 192.168.0.0/24
fsid = e1ee6b28-xxxx-xxxx-xxxx-11d1f6efab9b
mon allow pool delete = true
osd journal size = 5120
osd pool default min size = 2
osd pool default size = 3
public network = 192.168.0.0/24
ms_bind_ipv4 = true
ms_bind_ipv6 = false
[client]
keyring = /etc/pve/priv/$cluster.$name.keyring
[mon.pve02]
host = pve02
mon addr = 192.168.0.57
Bash:
root@pve03:/etc/ceph# ll /var/lib/ceph/mon/
total 0
Bash:
root@pve03:/etc/ceph# ps aux | grep ceph
root 861026 0.0 0.0 17308 9120 ? Ss 19:03 0:00 /usr/bin/python2.7 /usr/bin/ceph-crash
ceph 863641 0.0 0.2 492588 169916 ? Ssl 19:08 0:04 /usr/bin/ceph-mgr -f --cluster ceph --id pve03 --setuser ceph --setgroup ceph
root 890587 0.0 0.0 6072 892 pts/0 S+ 20:43 0:00 grep ceph
Bash:
root@pve03:~# ceph mon dump
dumped monmap epoch 9
epoch 9
fsid e1ee6b28-xxxx-xxxx-xxxx-11d1f6efab9b
last_changed 2019-10-05 19:07:48.598830
created 2019-05-11 01:28:04.534419
min_mon_release 14 (nautilus)
0: [v2:192.168.0.57:3300/0,v1:192.168.0.57:6789/0] mon.pve02
Bash:
root@pve03:~# pveversion -v
proxmox-ve: 6.0-2 (running kernel: 5.0.21-2-pve)
pve-manager: 6.0-7 (running version: 6.0-7/28984024)
pve-kernel-5.0: 6.0-8
pve-kernel-helper: 6.0-8
pve-kernel-4.15: 5.4-9
pve-kernel-5.0.21-2-pve: 5.0.21-6
pve-kernel-4.15.18-21-pve: 4.15.18-48
pve-kernel-4.15.18-14-pve: 4.15.18-39
pve-kernel-4.15.18-12-pve: 4.15.18-36
ceph: 14.2.4-pve1
ceph-fuse: 14.2.4-pve1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.12-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-5
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-9
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-65
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-7
pve-cluster: 6.0-7
pve-container: 3.0-7
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-7
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-3
pve-qemu-kvm: 4.0.0-5
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-7
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve2
Code:
syslog:
Oct 05 19:57:47 pve03 systemd[1]: Started Ceph cluster monitor daemon.
Oct 05 19:57:47 pve03 ceph-mon[875279]: 2019-10-05 19:57:47.506 7ffb1227f440 -1 monitor data directory at '/var/lib/ceph/mon/ceph-pve03' does not exist: have you run 'mkfs'?
Oct 05 19:57:47 pve03 systemd[1]: ceph-mon@pve03.service: Main process exited, code=exited, status=1/FAILURE
Oct 05 19:57:47 pve03 systemd[1]: ceph-mon@pve03.service: Failed with result 'exit-code'.
Oct 05 19:57:57 pve03 systemd[1]: ceph-mon@pve03.service: Service RestartSec=10s expired, scheduling restart.
Oct 05 19:57:57 pve03 systemd[1]: ceph-mon@pve03.service: Scheduled restart job, restart counter is at 4.
Oct 05 19:57:57 pve03 systemd[1]: Stopped Ceph cluster monitor daemon.
Oct 05 19:57:57 pve03 systemd[1]: ceph-mon@pve03.service: Start request repeated too quickly.
Oct 05 19:57:57 pve03 systemd[1]: ceph-mon@pve03.service: Failed with result 'exit-code'.
Oct 05 19:57:57 pve03 systemd[1]: Failed to start Ceph cluster monitor daemon.
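From the syslog it looks like the data directory /var/lib/ceph/mon/ceph-pve03 was simply never created. Based on the upstream Ceph procedure for adding a monitor manually, I was thinking of trying something like this - a sketch, not tested; it assumes the cluster still has quorum through mon.pve02 and that the admin keyring on pve03 still works:

```shell
# fetch the mon keyring and current monmap from the running cluster
ceph auth get mon. -o /tmp/mon-keyring
ceph mon getmap -o /tmp/monmap

# initialize the monitor data directory for this node
ceph-mon --mkfs -i pve03 --monmap /tmp/monmap --keyring /tmp/mon-keyring
chown -R ceph:ceph /var/lib/ceph/mon/ceph-pve03

# clear the failed state and let systemd start the mon again
systemctl reset-failed ceph-mon@pve03.service
systemctl start ceph-mon@pve03.service
```

Would that be safe here, or does it fight with how pveceph manages the mons?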