Ceph performance with Proxmox on external drives! (EDIT 3 nodes)

Dirky_uk

New Member
Mar 16, 2024
Don't laugh....
I have a Mac mini from 2012, Core i7, 16GB, nothing too fancy. I use the Mac (macOS) with a Drobo 5D attached over Thunderbolt 1. I run Plex on the Mac and some of the *arr services. Only maybe 2 streams max watching Plex at a time.

My plan is to move this to a three node Proxmox cluster. I'm thinking of using the mini PCs from Minisforum, 12th gen i7 maybe, 32GB RAM.

I'm wondering how some USB 3.1 drives, like 16TB externals, will perform with Ceph on this small cluster?
I'll be running a few other VMs, but mostly for testing things, Docker etc.

I realise it's a bit of a broad question. I can't easily upgrade the rest of the house LAN to 2.5GbE, but I guess I could get a small 2.5GbE switch to put the 2 Proxmox boxes on; however, I assume the USB 3.1...
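
Before putting Ceph on them I'll probably at least baseline a raw USB drive with fio; something along these lines (the mount point and file name are just placeholders for wherever the drive is mounted):

    # sequential 4M writes against a file on the USB drive, roughly the kind of streaming write Ceph will do
    fio --name=usb-seq-write --filename=/mnt/usbtest/fio.tmp --size=4G \
        --rw=write --bs=4M --direct=1 --ioengine=libaio --iodepth=16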

Thanks for any pointers!
 
I wanted to experiment with Ceph before moving to a bunch of new hardware later, and also to let me plan what I wanted/needed in new hardware. I have set up 2 separate clusters using Proxmox and Ceph, and while it is not perfect or even close to what you would do in a "real production" environment, it does work: Ceph has handled things brilliantly and so far (knock on wood) has been keeping my data safe.

The first cluster is a group of 7 nodes where, unfortunately, due to hardware limitations (I cannot put the RAID card into IT/HBA mode), I had to use external drives for my Ceph OSDs. Each of the 7 nodes has 2 SSDs attached via a USB to SATA adapter, and each node has two 1 GbE NICs for use by Ceph and Proxmox (though I have created a number of VLANs to separate things out). On this cluster I get about the speed of a single HDD when writing to the cluster, though I have not noticed any difference between that and when I used to store VM disks on a single NAS over NFS.
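
If you want to put a number on that for your own cluster, a quick way is a throw-away pool and rados bench from any node. A minimal sketch (the pool name "bench" is just an example, and deleting pools requires mon_allow_pool_delete to be enabled):

    ceph osd pool create bench 32                    # small throw-away pool
    rados bench -p bench 30 write --no-cleanup       # 30 second sequential write test
    rados bench -p bench 30 seq                      # sequential read of the objects just written
    rados -p bench cleanup                           # remove the benchmark objects
    ceph osd pool delete bench bench --yes-i-really-really-mean-it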

The second cluster is for my Docker volumes and is a 3 node cluster, each node with 4 HDDs inside. This time I was able to put most of the OSD drives into the system, and I use 2 USB to SATA adapters for the OS installation. They also have two 1 GbE NICs on each node. This cluster is a little bit more finicky than the other, with 2 OSDs on 2 of the nodes being on USB adapters, as those systems would not boot from the SSDs connected through the USB adapter but would boot from a USB flash drive.
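
For reference, creating an OSD on a USB-attached disk from the node's shell looks roughly like this (the device path is just an example; check which disk it is with lsblk first, and the disk has to be empty):

    lsblk -o NAME,SIZE,TRAN,MODEL     # the TRAN column shows "usb" for USB-attached disks
    pveceph osd create /dev/sdX       # replace /dev/sdX with the correct device
    ceph osd tree                     # check that the new OSD appears under the right host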

All in all I am happy with my move so far, and I hope to move to a better hardware setup in the future and remove some of the jankiness of this setup. I will probably replace the nodes in my main cluster first with custom-built 4U systems or possibly Dell R730 units before moving on to the other, smaller cluster. My end goal is a 5 node main cluster for running VMs, which I call my compute cluster, then a 7 node storage cluster, retiring the 3 node Ceph cluster as well as my other 3 NAS boxes that are running as individual storage locations.
 
Thanks for the information. It's appreciated.
 
