okay, so the minimum is 3 nodes, and at most only 1 node can be down,
because we use pool size 3/2.
if we have 4 nodes, can 2 nodes be down?
also, if 1 node goes down, all VMs that were running on that node will be transferred to another running node, and those VMs go down and boot back up...
how about if we don't set a ratio of 1? is the behavior the same? they will use the entire space in the cluster for 1 pool..
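For reference, size/min_size are per-pool settings; a minimal sketch (the pool name "vm-pool" is hypothetical, adjust to your cluster):

```shell
# Hypothetical pool name "vm-pool"; substitute your own pool.
ceph osd pool set vm-pool size 3      # keep 3 copies of every object
ceph osd pool set vm-pool min_size 2  # serve I/O while at least 2 copies are up
```

On the 4-node question: with size 3/2, some placement groups will have all 3 of their copies on a given set of 3 hosts, so losing 2 hosts can leave those PGs with only 1 surviving copy, below min_size, and I/O to them pauses until recovery. So 4 nodes does not safely tolerate 2 simultaneous failures with 3/2.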
Also, in CephFS there are 2 pools that serve 1 CephFS: cephfs_data and cephfs_metadata.
cephfs_data will contain the many files there, but cephfs_metadata only stores...
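You can see which pools back a filesystem with `ceph fs ls`; a sketch (the names shown follow the defaults, yours may differ):

```shell
ceph fs ls
# typical output (names depend on how the fs was created):
# name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
```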
OSD type only shows ssd, hdd and nvme.
How about if we want to combine HDD and SSD into 1 pool? Different OSD types, but created into 1 pool. How do we achieve that?
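One way (a sketch, not the only option): a replicated CRUSH rule created without a device-class filter selects OSDs of any class, so a pool using that rule mixes hdd and ssd. The names "mixed_rule" and "mixedpool" below are hypothetical:

```shell
# Rule with no device class: eligible OSDs include hdd, ssd and nvme alike.
ceph osd crush rule create-replicated mixed_rule default host
# Hypothetical pool that uses this rule:
ceph osd pool create mixedpool 64 64 replicated mixed_rule
```

Note that mixing classes in one pool means performance tends toward the slowest device; the more common design is the opposite, a per-class rule (e.g. `ceph osd crush rule create-replicated ssd_rule default host ssd`) so each pool stays on one class.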
do you have a calculator for size/min?
like 3/2 will give around 33% of the total size as usable space..
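There's no built-in calculator that I know of, but the math is simple: usable capacity of a replicated pool is roughly the raw capacity divided by size (min_size affects availability, not capacity). A quick sketch:

```python
def usable_capacity(raw_gib: float, size: int) -> float:
    """Approximate usable GiB for a replicated pool with `size` replicas."""
    return raw_gib / size

# Example: 74 GiB raw (as in the ceph df output below) with size 3/2:
print(usable_capacity(74, 3))  # about 24.7 GiB, i.e. roughly 33% of raw
```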
also, how about pg_autoscale mode? it seems the default is on.. so is using the on mode the best way?
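As far as I know, "on" is the default in recent releases and is usually the right choice. You can inspect and change it per pool (the pool name "mixedpool" is hypothetical):

```shell
ceph osd pool autoscale-status                     # shows current mode per pool
ceph osd pool set mixedpool pg_autoscale_mode on   # values: on | off | warn
```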
i am trying the latest Ceph version under Proxmox 7.1 in VirtualBox. we use 3 nodes.
root@ceph1:~# ceph df
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
hdd     44 GiB   41 GiB  3.0 GiB   3.0 GiB       6.75
ssd     30 GiB   28 GiB  1.8 GiB   1.8 GiB       6.07
TOTAL   74 GiB...
xxx@pc:~$ sudo mount -t ceph foo@8f09496e-1dbb-4f6e-87e9-f0d62eddedd1.cephfs3=/ /mnt/mycephfs/ -o secret=AQDAScxhTLIIBRAAkf3mzFh+MaIdXXyBV5fg6Q==
source mount path was not specified
failed to resolve source
xxx@pc:~$ sudo mount -t ceph...
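The `user@fsid.fsname=/` device string is the newer mount syntax and needs a recent kernel and mount.ceph helper; "failed to resolve source" usually means the helper on the client doesn't understand it. The older monitor-address syntax is a fallback. A sketch (the monitor IP, user name and key below are placeholders, substitute your own):

```shell
# Older kernel-driver syntax: give a monitor address explicitly.
sudo mount -t ceph 192.168.1.10:6789:/ /mnt/mycephfs \
    -o name=foo,secret=AQD...replace-with-your-key...==
# Better: keep the key off the command line with secretfile.
sudo mount -t ceph 192.168.1.10:6789:/ /mnt/mycephfs \
    -o name=foo,secretfile=/etc/ceph/foo.secret
```

If the cluster has more than one filesystem, you also need to name it, e.g. with the `mds_namespace=cephfs3` mount option.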
hello
we have successfully installed Ceph on 3 nodes..
anyway, we also created a CephFS. but how do we mount the CephFS from a client node? any example?
i have read this article as well: https://docs.ceph.com/en/latest/cephfs/mount-using-kernel-driver/
but it's still not clear where the secret key is taken from..
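The secret is the CephX key of the client user. On any node with admin access you can print it (the client name "foo" here is an assumption, use whichever user you created):

```shell
# Print the raw base64 key for a client user:
ceph auth get-key client.foo
# Or read it from the keyring file, if one was written out:
cat /etc/ceph/ceph.client.foo.keyring
```

That base64 string is what goes into `-o secret=...`, or into the file referenced by `-o secretfile=...`.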
any...