Hi, I know this is not directly related to Proxmox but rather to Ceph, but I'll give it a try.
In my home lab I have one Proxmox node (7.latest) and have configured Proxmox Ceph (Pacific) on it with 2 OSDs.
I have two more non-Proxmox hosts on which I'd like to install Ceph (cephadm?) and then add them to the Proxmox Ceph cluster.
I'm testing this on VMs for now to play with:
Installing cephadm on the Proxmox host broke Ceph there quite thoroughly. (I saw a post here about symlinks, which I had overwritten.)
Trying to manually add a node using the ceph.conf isn't really working either. I seem to be missing something very important. That's why I thought cephadm would be the easy route.
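For what it's worth, the manual approach usually boils down to copying the cluster config and a keyring to the new node before creating OSDs there. A rough sketch, assuming a Debian-based node and a hypothetical Proxmox host named pve1 (note that on Proxmox, /etc/ceph/ceph.conf is a symlink to /etc/pve/ceph.conf, which is likely the symlink cephadm clobbered):

```shell
# On the new (non-Proxmox) node. Hostnames, IPs, and /dev/sdb are
# placeholders -- adjust for your lab.

# Install plain Ceph packages matching the cluster release (Pacific)
apt install -y ceph

# Copy the cluster config and the admin keyring from the Proxmox node
scp root@pve1:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
scp root@pve1:/etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring

# Check that this node can now reach the cluster
ceph -s

# Turn a spare disk into an OSD (this wipes /dev/sdb!)
ceph-volume lvm create --data /dev/sdb
```

This is only a sketch of the generic non-cephadm workflow, not something Proxmox documents for mixed clusters; in particular, the Proxmox GUI won't know about OSDs created this way.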
Has anyone done this before and has some tips on where to start? Are there any Proxmox scripts I could copy over to manage non-Proxmox Ceph nodes?
Or should I uninstall Proxmox Ceph and only run cephadm on all hosts?
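In case I go the all-cephadm route, my understanding of the steps is roughly the following (lab only -- this destroys the existing pool; IPs and hostnames are placeholders):

```shell
# On the Proxmox node: remove the pveceph-managed cluster first
pveceph purge

# Bootstrap a fresh cephadm cluster (needs podman or docker installed)
cephadm bootstrap --mon-ip 192.168.1.10

# Distribute the cluster's SSH key, then add the other hosts
ssh-copy-id -f -i /etc/ceph/ceph.pub root@host2
ceph orch host add host2 192.168.1.11

# Let the orchestrator consume all spare disks as OSDs
ceph orch apply osd --all-available-devices
```

If that's the sane way to do it, I'd just give up on having Proxmox manage Ceph and consume the pool as external RBD/CephFS storage instead.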
https://forum.proxmox.com/threads/r...ust go to a nodes,ceph packages for each node.
If anyone wonders about the use case besides "because I can (or not)":
1) One host running Proxmox, with 2x 1 TB HDDs spare to run Ceph on, for a future shared pool between hypervisors (CephFS).
2) Another host running Debian Bullseye with OpenMediaVault and opennebula-node, with 2x 1 TB HDDs spare to run Ceph on.
3) Another host running XCP-ng (CentOS), with another 2x 1 TB HDDs spare for Ceph.
In my mind, maybe even along with a container on my NAS contributing a few TB as vdisks, I'd end up with a mix of an additional backup plus a shared pool, rather than each hypervisor running isolated by itself.
Again, this is purely a lab environment and not meant for any sort of production.
Any ideas appreciated.
Thanks, Andreas