GUI doesn't allow partitions as Ceph OSD journal devices

grin

The GUI doesn't allow partitions as Ceph OSD journal devices, and it should. We do it manually, and there seems to be no reason not to handle it.

(Sidenote: the "Disks" tab in the new GUI is empty; it should probably list physical disks along with, say, their SMART status.)
 
Are you using the very latest version?

Please check with:

> pveversion -v
 
Yes. Fresh install.
proxmox-ve: 4.3-66 (running kernel: 4.4.19-1-pve)
pve-manager: 4.3-1 (running version: 4.3-1/e7cdc165)
pve-kernel-4.4.19-1-pve: 4.4.19-66
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-46
qemu-server: 4.0-88
pve-firmware: 1.1-9
libpve-common-perl: 4.0-73
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-61
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-qemu-kvm: 2.6.1-6
pve-container: 1.0-75
pve-firewall: 2.0-29
pve-ha-manager: 1.0-35
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.4-1
lxcfs: 2.0.3-pve1
criu: 1.6.0-1
novnc-pve: 0.5-8
zfsutils: 0.6.5.7-pve10~bpo80
ceph: 0.94.9-1~bpo80+1
 
There were some bugs related to disk handling,
please update to the latest version :)
 
Indeed, it's been fixed.
Still, the original problem remains: Ceph journals cannot be partitions, only whole devices.

One possible way to use SSDs is to partition one among several OSDs. A shared journal SSD is, admittedly, a common failure point, but since a pool consists of dozens of OSDs across plenty of machines the risk is manageable: server-grade SSDs usually don't just drop dead without plenty of warning, and they cost a helluva lot of dinero :).
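For reference, the manual split described above can be sketched roughly like this with the hammer-era tooling. The device names (/dev/sdf as the journal SSD, /dev/sd[b-e] as data disks) and the 10 GB journal size are placeholder assumptions, not values from this thread; the script only echoes the commands so nothing is touched until you remove the `echo` prefixes on a node where you've verified the names:

```shell
#!/bin/sh
# Sketch: carve one journal partition per OSD out of a shared SSD,
# then hand each data disk + SSD partition pair to ceph-disk.
# The GUID is the standard Ceph journal partition type code.
SSD=/dev/sdf          # assumed journal SSD (placeholder)
JOURNAL_GB=10         # assumed journal partition size per OSD
JOURNAL_GUID=45b0969e-9b03-4f30-b4c6-b4b80ceff106
N=1
for DATA in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
    # create journal partition N on the shared SSD
    echo sgdisk --new=${N}:0:+${JOURNAL_GB}G \
         --typecode=${N}:${JOURNAL_GUID} "$SSD"
    # prepare the OSD with the data disk and that SSD partition as journal
    echo ceph-disk prepare "$DATA" "${SSD}${N}"
    N=$((N + 1))
done
```

After a run like this the journals live on partitions, which is exactly what the GUI currently refuses to set up.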