Apologies if this belongs in a different forum. I set up a cluster using Proxmox 5/stretch + Ceph 12 (Luminous) in the lab. Here are some observations that may be useful for UX purposes:
1. The default rbd pool has always been a needless nuisance, but it was at least easy to delete. With Luminous the default behavior is to deny pool deletion. That is generally the correct behavior, but it creates a UX problem: a default, useless pool is created at pveceph install and cannot be removed. The same restriction affects deletion of any pool, so either the global option for pool deletion (mon_allow_pool_delete) should be set, or the GUI should provide an alternative, one-time way to delete a pool. A possible CLI workaround is sketched after this list.
2. Creating OSDs is a very iffy proposition; the process completes well enough from either the GUI or the CLI, but the OSDs are not added to the CRUSH map. I got the OSDs from one node added ONCE; the other nodes did not work, and they are not even showing up as available hosts in the CRUSH map. I am following the normal process of ceph-disk zap followed by pveceph createosd. The OSD creation does appear in the task log and completes successfully, but the OSD does not mount and no OSD process is started. The commands I am running, plus the checks and manual CRUSH steps I would try next, are sketched after this list.
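For item 1, this is the workaround I would expect to apply (just a sketch assuming stock Luminous tooling; the pool name "rbd" is an assumption, adjust it to whatever the default pool is called on your install):

# allow pool deletion on the running monitors
ceph tell mon.* injectargs '--mon-allow-pool-delete=true'

# or persist it in /etc/pve/ceph.conf under [global]:
#   mon allow pool delete = true

# then drop the unused default pool
ceph osd pool delete rbd rbd --yes-i-really-really-mean-it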
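For item 2, this is roughly what I am running plus the CRUSH checks I would do next (a sketch; /dev/sdb, node2, osd.3 and the weight 1.0 are placeholders, not my actual values):

ceph-disk zap /dev/sdb          # wipe the disk
pveceph createosd /dev/sdb      # create the OSD (same step is available in the GUI)

ceph osd tree                   # check whether the host and OSD show up in the CRUSH map

# if the host bucket and OSD are missing, they can be added by hand:
ceph osd crush add-bucket node2 host
ceph osd crush move node2 root=default
ceph osd crush add osd.3 1.0 host=node2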
pveversion -v
proxmox-ve: 5.0-6 (running kernel: 4.10.8-1-pve)
pve-manager: 5.0-9 (running version: 5.0-9/c7bdd872)
pve-kernel-4.4.19-1-pve: 4.4.19-66
pve-kernel-4.2.6-1-pve: 4.2.6-36
pve-kernel-4.4.49-1-pve: 4.4.49-86
pve-kernel-4.4.35-1-pve: 4.4.35-77
pve-kernel-4.10.8-1-pve: 4.10.8-6
pve-kernel-4.2.2-1-pve: 4.2.2-16
pve-kernel-4.2.8-1-pve: 4.2.8-41
pve-kernel-4.4.16-1-pve: 4.4.16-64
libpve-http-server-perl: 2.0-2
lvm2: 2.02.168-pve2
corosync: 2.4.2-pve2
libqb0: 1.0.1-1
pve-cluster: 5.0-4
qemu-server: 5.0-4
pve-firmware: 2.0-2
libpve-common-perl: 5.0-8
libpve-guest-common-perl: 2.0-1
libpve-access-control: 5.0-3
libpve-storage-perl: 5.0-3
pve-libspice-server1: 0.12.8-3
vncterm: 1.4-1
pve-docs: 5.0-1
pve-qemu-kvm: 2.9.0-1
pve-container: 2.0-6
pve-firewall: 3.0-1
pve-ha-manager: 2.0-1
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.0.7-500
lxcfs: 2.0.6-pve500
criu: 2.11.1-1~bpo90
novnc-pve: 0.5-9
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.6.5.9-pve16~bpo90
ceph: 12.0.1-pve1
pveceph status attached.