Hi all,
After losing the drive on which the OS resides, how would one go about restoring the Ceph cluster to full health without zapping the OSDs and then adding them back in using:
Code:
pveceph createosd /dev/sdx
For simplicity, the journal lives on the OSD. Currently, I’m running the latest Proxmox version along with Hammer.
Code:
root@prox-ceph1:/etc/pve# pveversion -v
proxmox-ve: 4.4-86 (running kernel: 3.16.0-4-amd64)
pve-manager: 4.4-13 (running version: 4.4-13/7ea56165)
pve-kernel-4.4.49-1-pve: 4.4.49-86
lvm2: 2.02.116-pve3
corosync-pve: 2.4.2-2~pve4+1
libqb0: 1.0.1-1
pve-cluster: 4.0-49
qemu-server: 4.0-110
pve-firmware: 1.1-11
libpve-common-perl: 4.0-94
libpve-access-control: 4.0-23
libpve-storage-perl: 4.0-76
pve-libspice-server1: 0.12.8-2
vncterm: 1.3-2
pve-docs: 4.4-4
pve-qemu-kvm: 2.7.1-4
pve-container: 1.0-97
pve-firewall: 2.0-33
pve-ha-manager: 1.0-40
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.7-4
lxcfs: 2.0.6-pve1
criu: 1.6.0-1
novnc-pve: 0.5-9
smartmontools: 6.5+svn4324-1~pve80
ceph: 0.94.10-1~bpo80+1
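For reference, I assume the right way to confirm the cluster still knows about the old OSDs after the reinstall is something along these lines (standard Ceph tooling, nothing Proxmox-specific):
Code:
# overall cluster health
ceph -s
# which OSDs the cluster still expects, and which are down/out
ceph osd tree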
I started looking through the Ceph documentation on creating and adding OSDs manually, but I’m unable to mount the drive while following the docs: http://docs.ceph.com/docs/hammer/rados/operations/add-or-rm-osds/#adding-an-osd-manual
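In case it’s useful, this is roughly what I’ve been attempting per those docs; /dev/sdb1 and osd.0 below are just placeholders for my actual data partition and OSD id:
Code:
# see which partitions ceph-disk recognizes as OSD data/journal
ceph-disk list
# mount the existing data partition where the daemon expects it
# (assuming osd id 0)
mkdir -p /var/lib/ceph/osd/ceph-0
mount /dev/sdb1 /var/lib/ceph/osd/ceph-0
# then start the pre-existing OSD
service ceph start osd.0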
I apologize if I’ve overlooked this scenario; I’ve spent a good day searching the forums.
Thanks!