We are planning a new dedicated Ceph cluster + Proxmox cluster (and a Kubernetes cluster in VMs) for our clients, who want to switch from regular iSCSI + VMware.
For the compute nodes (Proxmox) we can reuse the current Supermicro SuperServers and Twin servers, there is no issue with...
I would like to mount an external CephFS where the file system name is not the default cephfs.
With this command, I can mount it manually without any issue:
mount -t ceph MDSIP:6789:/ /mnt/test/cephfs -o mds_namespace=proxmoxfs -o name=proxmoxfs,secret=thesecret
As you can see, the auth username is...
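For a persistent version of that mount, a minimal sketch is below. The monitor address, fs name, auth user, and secret-file path are placeholders taken from the post, not verified settings; note that recent kernels accept fs=<name> while older ones use mds_namespace=<name> as in the command above.

```shell
# Build the equivalent mount command from placeholder values.
MON="MDSIP:6789"
FSNAME="proxmoxfs"
CEPHUSER="proxmoxfs"
MNT="/mnt/test/cephfs"

# Keeping the key in a root-only secretfile avoids exposing it in `ps` output,
# unlike passing secret= directly on the command line.
CMD="mount -t ceph ${MON}:/ ${MNT} -o fs=${FSNAME},name=${CEPHUSER},secretfile=/etc/ceph/${CEPHUSER}.secret"
echo "$CMD"
```

The matching /etc/fstab line would then be: `MDSIP:6789:/ /mnt/test/cephfs ceph fs=proxmoxfs,name=proxmoxfs,secretfile=/etc/ceph/proxmoxfs.secret,_netdev 0 0`.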
I don't fully agree with that. Given what these parts cost (two head nodes, one JBOD, HBAs and SAS cables), a Ceph cluster built for the same money would be useless, or at least far less powerful.
For a small cluster where you have both Proxmox and VMware (e.g. two Proxmox nodes and one ESXi or...
I surfed the web and found this interesting project: https://github.com/ewwhite/zfs-ha/wiki
Basically this guy built a redundant ZFS-based NFS server (because he used VMware and prefers NFS over iSCSI) using two head node servers and one (or more) JBOD storage boxes attached via SAS.
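The core idea of that design is that the ZFS pool lives on the shared SAS JBOD, so when the active head dies, the standby head can import the pool and re-publish the NFS exports. A hand-rolled dry-run sketch of that failover step is below; the pool name is hypothetical, and the linked project does this properly with Pacemaker fencing rather than a shell script.

```shell
#!/bin/sh
# Sketch of two-head ZFS/NFS failover over a shared JBOD.
# "tank" is a placeholder pool name; DRY_RUN=1 only prints what would happen.
POOL="tank"
DRY_RUN=1

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# On the standby head, after the failed node has been fenced:
run zpool import -f "$POOL"   # force-import the pool from the shared JBOD
run exportfs -ra              # re-publish the NFS exports to the clients
```

Because only one head can have the pool imported at a time, fencing the failed node before the forced import is essential; importing the same pool on both heads corrupts it.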
On a CentOS 7 VM with a Ceph backend, I can run fstrim inside the VPS without issue:
~ # time fstrim -a -v
/data: 0 B (0 bytes) trimmed
/boot: 817.9 MiB (857587712 bytes) trimmed
/: 13 GiB (13946646528 bytes) trimmed
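For fstrim to actually return the trimmed space to the Ceph pool, discard must be enabled on the virtual disk. A sketch of the relevant Proxmox VM config, assuming VirtIO SCSI; the VM id "100" and storage name "ceph-pool" are placeholders:

```
# /etc/pve/qemu-server/100.conf (excerpt; "100" and "ceph-pool" are hypothetical)
scsihw: virtio-scsi-pci
scsi0: ceph-pool:vm-100-disk-0,discard=on
```

The guest also needs a disk driver that passes discard through; virtio-scsi does, while older virtio-blk setups may not.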
I tried to do this...
Not automatically; you need to migrate the VM yourself or via a script if you want online migration.
The HA stack monitors the VMs, and when one goes offline it starts it on another node. But this part doesn't know whether the VM went offline because the node itself is shutting down.
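The "migrate via script" approach mentioned above can be sketched as follows. The target node name, the VM id list, and the dry-run wrapper are assumptions for illustration; `qm migrate ... --online` is the standard Proxmox CLI call, but check the flags on your version.

```shell
#!/bin/sh
# Drain a Proxmox node by live-migrating its VMs before shutting it down,
# so the guests keep running instead of being restarted by HA after the fact.
TARGET="pve2"   # hypothetical destination node
DRY_RUN=1       # set to 0 to actually migrate

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# Placeholder VM ids; in real use, collect the running ids, e.g. from `qm list`.
for vmid in 100 101; do
    run qm migrate "$vmid" "$TARGET" --online
done
```

Once the loop finishes, the node can be shut down without interrupting the guests.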