Hi,
I'm playing with a small home lab. I would like to use Ceph to ensure that I have replicated VMs for redundancy.
I have two servers, each with one HDD.
The first server, an HP MicroServer with 16 GB RAM, has been running Proxmox for years, hand-cranked from Debian, with LVM on a 500 GB SATA HDD.
The second server is a Lenovo Xeon workstation with 24 GB RAM, a clean install of Proxmox 7.2 with ZFS on a 1 TB SATA HDD.
Both are connected to a Gigabit managed switch.
I assumed that I could just resize the LVM, drop in a ZFS partition and get sharing (which I tried), but after some reading I found that only ZFS over iSCSI and Ceph support the replication I want.
So on the HP MicroServer I destroyed the ZFS partition and started setting up Ceph OSD storage.
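Roughly, the sequence I've been following on the MicroServer is something like the below (command names partly from memory, the subnet is just my LAN as an example, and /dev/sda4 is a placeholder for the partition I freed up):

```bash
# Install the Ceph packages on the node (Proxmox's wrapper around apt)
pveceph install

# Initialise the Ceph config; the 1 Gb LAN doubles as the Ceph network here
pveceph init --network 192.168.1.0/24

# Create a monitor on this node
pveceph mon create

# Create an OSD. pveceph normally wants a whole unused disk, so for a
# partition I believe you have to drop down to ceph-volume instead:
ceph-volume lvm create --data /dev/sda4
```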
At this point I'm feeling out of my depth. I need to resize the ZFS on the Lenovo to add Ceph storage, but I am fed up with going in circles. Should I drop this as an endeavour? I've learnt a bit about ZFS, but I have no desire to run it over iSCSI, I don't have a SAN, and Ceph seems more suited to converged CPU/storage virtualisation anyway.
Has anyone done this, please?
My idea is that on each server I have a 200 GB Proxmox OS partition and a 300 GB Ceph data store for VMs. I know the setup is far from ideal: one low-power CPU, one high, no dedicated cluster network. But each server has a reasonable amount of RAM, and it is neither production nor a high workload.
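Concretely, I was imagining the pool would end up something like this (pool name and PG count are just my guesses, not anything official):

```bash
# On one node: create a replicated pool for VM disks.
# 64 PGs is a guess for ~300 GB per node; size 2 means one copy per server,
# and min_size 1 keeps the pool writable if one server is down.
ceph osd pool create vm-pool 64
ceph osd pool set vm-pool size 2
ceph osd pool set vm-pool min_size 1
```

Then I'd add the pool as RBD storage for VM images in the Proxmox GUI (or /etc/pve/storage.cfg) on both nodes.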
Any input would be appreciated.
Cheers