The reboot is like a hard reset of the server. There is nothing in the logs; wouldn't it be more sensible, and make debugging easier, if HA at least wrote a line to syslog (or somewhere) saying that it force-restarted the node?
We have 4 servers in the same cluster resetting together at random.
They are located in 2 data centers and in 4 different racks.
We find nothing in the logs (except a few entries on 1 or 2 servers reporting the loss of nodes in the cluster).
The servers seem to reset and then boot normally.
This happens 3...
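For anyone hitting the same symptom, a minimal check we use before anything else, assuming a persistent journal and the standard Proxmox HA/watchdog units (adjust the unit names if your setup differs):

journalctl -b -1 -u watchdog-mux -u pve-ha-lrm -u pve-ha-crm   # previous boot, watchdog and HA services
journalctl -b -1 -u corosync | grep -i -e quorum -e token      # look for membership loss just before the reset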
On Proxmox 4 it was easy to add an OSD on ZFS.
On Proxmox 5 we still have issues, apparently because pveceph does not pass the parameters correctly.
On your setup you will have just 1 disk for boot; I think you need at least RAID 1.
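If the system is already installed on a single disk, a minimal sketch of adding RAID 1 afterwards, assuming a ZFS root pool named rpool on /dev/sda and a spare /dev/sdb (both hypothetical, adjust to your layout):

zpool attach rpool /dev/sda /dev/sdb   # turns the single-disk vdev into a mirror
zpool status rpool                     # wait for the resilver to complete

Note that for a boot disk you still have to install the bootloader on the new disk yourself; zpool attach only mirrors the pool.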
Advantages of using ZFS with Ceph:
- When you need replication across 2 sites... (see the sketch after this list)
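As a rough illustration of that replication point, and only a sketch (the dataset rpool/data, the target pool tank, and the host backup-site are made-up examples): ZFS lets you ship snapshots to the second site independently of Ceph:

zfs snapshot rpool/data@repl1                                            # point-in-time snapshot
zfs send rpool/data@repl1 | ssh backup-site zfs recv -F tank/data-copy   # initial full copy
zfs send -i repl1 rpool/data@repl2 | ssh backup-site zfs recv tank/data-copy   # later runs send only the delta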
Not true; it depends a lot on whether you use BlueStore or FileStore:
ceph.com/community/new-luminous-bluestore/
lab.piszki.pl/installing-a-ceph-jewel-cluster-on-ubuntu-lts-16-using-zfs/
kernelpanik.net/running-ceph-on-zfs/
I tried to set up ZFS/Ceph on 4.4 and 5 but failed at OSD creation.
Generally Ceph hangs after ceph-disk when
trying something like this:
ceph-disk prepare --zap-disk --fs-type xfs --cluster ceph --cluster-uuid FSID /dev/zd0
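For context, /dev/zd0 is just the block device of a zvol; a minimal sketch of how one gets created for an OSD (pool name and size are only examples), plus a verbose run to see where it hangs:

zfs create -V 100G rpool/ceph-osd0    # appears as /dev/zd0 (stable path: /dev/zvol/rpool/ceph-osd0)
ceph-disk -v prepare --zap-disk --fs-type xfs --cluster ceph --cluster-uuid FSID /dev/zd0   # -v logs each step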
I found this thread because BlueStore in Luminous uses it as...