Hi, I had many problems installing a Ceph OSD on ZFS.
Here is my complete solution:
Step 1. (repeat on all machines)
Install Ceph - #pveceph install
Step 2. (run only on the main machine of the cluster)
Initialize Ceph - #pveceph init --network 10.0.0.0/24 -disable_cephx 1
10.0.0.0/24 - your local cluster network
-disable_cephx 1 - disables cephx authentication entirely, which avoids many permission problems (only do this on a trusted network)
Step 3. (repeat on all machines that use Ceph)
Create the Mon and Mgr:
#pveceph create mon
#pveceph create mgr
Step 4. (repeat on all machines providing OSDs)
Create a simple zvol on the default rpool for the Ceph OSD to use.
#zfs create -V 100G rpool/ceph-osd
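The zvol appears as a /dev/zdN block device; if you are not sure which zdN it got (it is not always zd0 when other zvols exist), the symlink under /dev/zvol points at it. A quick check, assuming the zvol name used above:

```shell
# Resolve the zvol symlink to the backing zdN device node
readlink -f /dev/zvol/rpool/ceph-osd
# Confirm it is a block device of the expected size
lsblk "$(readlink -f /dev/zvol/rpool/ceph-osd)"
```

Use whatever device this prints in the ceph-volume commands below instead of assuming /dev/zd0.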
Zap and create the OSD:
#ceph-volume lvm zap /dev/zd0
#ceph-volume raw prepare --data /dev/zd0 --bluestore --no-tmpfs --crush-device-class HDD
#ceph-volume raw activate --device /dev/zd0 --no-tmpfs --no-systemd
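The OSD id used in the systemctl commands below is assigned during the prepare step. If you missed it in the output, ceph-volume can show it again (a small sketch, assuming the device from above):

```shell
# List the raw OSD metadata (osd_id, osd_uuid) for the device
ceph-volume raw list /dev/zd0
```

The "osd_id" field in the JSON output is the N to use in ceph-osd@N.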
Start the OSD and enable it to start after reboot (replace 0 with your OSD id):
#systemctl start ceph-osd@0
#systemctl enable ceph-osd@0
Fix the disk permissions so the ceph user can open the zvol:
#echo 'KERNEL=="zd0", OWNER="ceph", GROUP="ceph"' >> /etc/udev/rules.d/99-perm.rules
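The udev rule only takes effect on the next device event (e.g. after a reboot). To apply it immediately, the standard udevadm commands can be used, a sketch assuming the zd0 device from above:

```shell
# Reload the rules and re-trigger the event for the zvol device
udevadm control --reload-rules
udevadm trigger --name-match=/dev/zd0
# Verify the ownership changed to ceph:ceph
ls -l /dev/zd0
```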