Ceph performance with Proxmox on external drives! (EDIT 3 nodes)

Dirky_uk

New Member
Mar 16, 2024
Don't laugh....
I have a Mac mini from 2012, core i7, 16gb, nothing too fancy. I use the Mac (MacOS) with an attached Drobo 5D and Thunderbolt 1. I run Plex on the Mac and some of the arrr services. Only maybe 2 streams max watching Plex at a time.

My plan is to move this to a three-node Proxmox cluster. I'm thinking of using the mini PCs from Minisforum, 12th gen i7 maybe, 32 GB RAM.

I'm wondering how some USB 3.1 drives, like 16 TB externals, will perform with Ceph on this small cluster?
I'll be running a few other VMs, but mostly for testing things, Docker etc.

I realise it's a bit of a broad question. I can't easily upgrade the rest of the house LAN to 2.5GbE, but I guess I could get a small 2.5G switch to put the 2 Proxmox boxes on; however, I assume the USB 3.1...

Thanks for any pointers!
 
I wanted to experiment with Ceph before moving to a bunch of new hardware later, and also to let me plan what I wanted/needed for the new hardware. I have set up 2 separate clusters using Proxmox and Ceph, and while it is not perfect or even close to what you would do in a "real production" environment, it does work, and Ceph has handled things brilliantly and so far (knock on wood) has kept my data safe.

The first cluster is a group of 7 nodes where, unfortunately due to hardware limitations (I cannot put the RAID card into IT/HBA mode), I had to use external drives for my Ceph OSDs. Each of the 7 nodes has 2 SSDs attached via a USB to SATA adapter, and each node has two 1 GbE NICs for use by Ceph and Proxmox (though I have created a number of VLANs to separate things out). On this cluster I get about the speed of a single HDD when writing, though I have not noticed any difference between that and when I used to store VM disks on a single NAS over NFS.
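As a rough illustration of what I mean by separating things out, the Ceph side of the split ends up looking something like this in /etc/pve/ceph.conf. The subnets here are placeholders, not my real ones; one VLAN carries the public/client traffic and another carries the OSD replication traffic:

Code:
[global]
    # placeholder subnets, substitute your own VLAN subnets
    public_network  = 10.10.10.0/24   # client / Proxmox-facing traffic
    cluster_network = 10.10.20.0/24   # OSD replication traffic

On 1 GbE it honestly doesn't buy much performance, but it at least keeps the replication chatter off the same segment as everything else.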

The second cluster is for my Docker volumes and is a 3-node cluster, each node with 4 HDDs inside. This time I was able to put most of the OSD drives inside the systems, and I use 2 USB to SATA adapters for the OS installation. They also have two 1 GbE NICs on each node. This cluster is a little bit more finicky than the other, with 2 OSDs on 2 of the nodes being on USB adapters, as those systems would not boot from SSDs connected through the USB adapter but would boot from a USB flash drive.
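For the internal drives, getting them in as OSDs is just the usual Proxmox commands, something along these lines (the device name is only an example, not my actual layout):

Code:
# create an OSD on an internal disk (example device name)
pveceph osd create /dev/sdb
# confirm all OSDs came up across the nodes
ceph osd tree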

All in all I am happy with my move so far, and I am hoping to move to a better hardware setup in the future and remove some of the jankiness of this setup. I will probably replace the nodes in my main cluster first, with custom-built 4U systems or possibly Dell R730 units, before moving on to the other smaller cluster. My end goal is to move to a 5-node main cluster for running VMs, which I call my compute cluster. Then I'll move to a 7-node storage cluster and retire the 3-node Ceph cluster as well as my other 3 NAS boxes that are running as individual storage locations.
 
Thanks for the information. It's appreciated.
 
Not laughing, as my homelab currently consists of 5 Mac Minis circa 2012 in a Proxmox cluster with Ceph, and it has been known to be degraded down to 2 nodes over the last few weeks. Unlike you I'm not running Plex or using it as a media server yet; for me it's just a home lab for experimenting. I haven't actually tried any performance testing, as I'm aware that a Ceph cluster with 3 USB3-connected OSDs on aged hardware wouldn't be great.
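Mostly I just keep an eye on it with the usual commands while it's in that state, nothing fancy:

Code:
ceph -s          # overall health, degraded/undersized PGs, recovery progress
ceph osd tree    # which OSDs and hosts are up or down
pvecm status     # Proxmox cluster membership and quorum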

Recently one of the original 3 nodes I had running failed, and my Ceph cluster dropped down to just 2 nodes with a single OSD each. As it's a home lab and is only used for testing and learning, I left it far too long before getting a chance to play with it and add in the additional nodes I hadn't yet got to. Ceph has been a champ and ran through 3 weeks in a degraded state with me thinking it would die at any moment and I would lose the data, but it kept going until I had the time to repair it. Now that I have it working, and am about to add another couple of OSDs to it, I'll consolidate it all into the one rack, as it's currently spread across the house in a mess of networking and storage. Once it's consolidated on the same switch I might give it a bit of a performance test and let you know what I find, though I'm not expecting much from it. ;)
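When I do, it'll probably just be a quick rados bench against a throwaway pool, something like this (pool name and PG count are just examples):

Code:
ceph osd pool create bench 32
rados bench 60 write -p bench --no-cleanup   # 60 second write test, keep the objects
rados bench 60 seq -p bench                  # sequential read test over the objects left behind
rados -p bench cleanup
ceph osd pool delete bench bench --yes-i-really-really-mean-it   # needs mon_allow_pool_delete=true

That only shows what Ceph itself can manage; real VM disk performance over the 1 GbE links will be lower again.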