proxmox-ve: 5.0-16 (running kernel: 4.10.17-1-pve)
pve-manager: 5.0-23 (running version: 5.0-23/af4267bf)
pve-kernel-4.4.35-1-pve: 4.4.35-77
pve-kernel-4.4.67-1-pve: 4.4.67-92
pve-kernel-4.4.59-1-pve: 4.4.59-87
pve-kernel-4.10.17-1-pve: 4.10.17-16
libpve-http-server-perl: 2.0-5
lvm2: 2.02.168-pve2
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-12
qemu-server: 5.0-14
pve-firmware: 2.0-2
libpve-common-perl: 5.0-16
libpve-guest-common-perl: 2.0-11
libpve-access-control: 5.0-5
libpve-storage-perl: 5.0-12
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.0-9
pve-qemu-kvm: 2.9.0-2
pve-container: 2.0-14
pve-firewall: 3.0-2
pve-ha-manager: 2.0-2
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.0.8-3
lxcfs: 2.0.7-pve2
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.6.5.9-pve16~bpo90
ceph: 12.1.1-1~bpo80+1
what does
Code:
ceph status
say?
ceph status
cluster:
id: e464de46-c48e-4df9-b7ce-6288d78dea5e
health: HEALTH_WARN
no active mgr
services:
mon: 1 daemons, quorum 0
mgr: no daemons active
osd: 8 osds: 8 up, 8 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 bytes
usage: 0 kB used, 0 kB / 0 kB avail
pgs:
it seems you did not follow https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0
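For anyone hitting the same HEALTH_WARN: Luminous requires a running ceph-mgr daemon, which a cluster upgraded from Jewel/Kraken will not have yet. A minimal sketch of that part of the fix on a PVE 5 node, mirroring what is done later in this thread (run it on the node that should host the manager):
Code:
pveceph createmgr
ceph -s    # "mgr: <nodename>(active)" should now appear under services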
/dev/sda :
/dev/sda1 ceph data, active, cluster ceph, osd.2, journal /dev/nvme0n1p6
/dev/sdb :
/dev/sdb1 ceph data, active, cluster ceph, osd.4, journal /dev/nvme0n1p8
what does ceph status now say?
and the following:
Code:
systemctl status ceph ceph-osd
ls /var/lib/ceph/osd/
Package versions
proxmox-ve: 4.4-92 (running kernel: 4.4.67-1-pve)
pve-manager: 4.4-15 (running version: 4.4-15/7599e35a)
pve-kernel-4.4.44-1-pve: 4.4.44-84
pve-kernel-4.4.67-1-pve: 4.4.67-92
pve-kernel-4.4.19-1-pve: 4.4.19-66
lvm2: 2.02.116-pve3
corosync-pve: 2.4.2-2~pve4+1
libqb0: 1.0.1-1
pve-cluster: 4.0-52
qemu-server: 4.0-110
pve-firmware: 1.1-11
libpve-common-perl: 4.0-95
libpve-access-control: 4.0-23
libpve-storage-perl: 4.0-76
pve-libspice-server1: 0.12.8-2
vncterm: 1.3-2
pve-docs: 4.4-4
pve-qemu-kvm: 2.7.1-4
pve-container: 1.0-101
pve-firewall: 2.0-33
pve-ha-manager: 1.0-41
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.7-4
lxcfs: 2.0.6-pve1
criu: 1.6.0-1
novnc-pve: 0.5-9
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.9-pve15~bpo80
ceph: 12.1.1-1~bpo80+1
at the moment the luminous packages from ceph.com are one version above ours (12.1.1 vs 12.1.0), but an update to our repository is coming soon.
Is it possible that the Howto and/or the upgrade path is wrong?
Having the exact same problem after upgrading to ceph 12.1.1. Any solution?
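One quick, generic way to see which repository the installed ceph package actually came from (a standard APT check, not something quoted from the posts below):
Code:
apt-cache policy ceph
Builds with a ~bpo80 / ~bpo90 suffix typically come from download.ceph.com, while -pveN builds come from the Proxmox Luminous repository.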
# pveversion -v
...
ceph: 12.1.2-1~bpo90+1
# cat /etc/apt/sources.list.d/ceph.list
deb http://download.proxmox.com/debian/ceph-luminous stretch main
# cat /etc/apt/sources.list.d/ceph.list
deb http://download.ceph.com/debian-luminous jessie main
# pveversion -v
....
ceph: 12.1.2-pve1
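If a node still points at the download.ceph.com jessie entry, a rough sketch of switching it to the Proxmox Luminous repository (the repository line is the one quoted above; upgrade one node at a time and check the result):
Code:
echo "deb http://download.proxmox.com/debian/ceph-luminous stretch main" > /etc/apt/sources.list.d/ceph.list
apt update
apt full-upgrade
pveversion -v | grep ceph    # should now report a -pve build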
# pveceph createmgr
creating manager directory '/var/lib/ceph/mgr/ceph-prox-nest3'
creating keys for 'mgr.prox-nest3'
setting owner for directory
enabling service 'ceph-mgr@prox-nest3.service'
Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@prox-nest3.service -> /lib/systemd/system/ceph-mgr@.service.
starting service 'ceph-mgr@prox-nest3.service'
# ceph -s
cluster:
id: 486f2cf4-ca81-46cd-8947-b7e0a6e8e47e
health: HEALTH_WARN
application not enabled on 1 pool(s)
services:
mon: 3 daemons, quorum 0,1,2
mgr: prox-nest3(active)
osd: 9 osds: 9 up, 9 in
data:
pools: 2 pools, 576 pgs
objects: 1596 objects, 6208 MB
usage: 18757 MB used, 881 GB / 899 GB avail
pgs: 576 active+clean
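The remaining warning ("application not enabled on 1 pool(s)") is a new Luminous health check; it can usually be cleared by tagging the pool with the application that uses it, e.g. for a pool holding Proxmox/RBD disk images (the pool name below is a placeholder):
Code:
ceph osd pool application enable <poolname> rbd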