Proxmox Ceph Performance Setup

g7cloud

Member
Jul 28, 2022
I recently encountered a storage failure and am now looking to create a highly available storage solution for my VMs. However, I have a few questions and would love it if someone could guide me on this.

I'm looking at:
  • Using 3 x Proxmox servers dedicated to storage only, and multiple node servers running VMs from this storage.
  • Using high-capacity spinning disks, i.e. 4 x 18TB in each server
  • 256GB or 512GB of RAM in each server
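For context on sizing: with Ceph's default 3-way replication, raw capacity divides by three, and you'd normally keep some headroom below the nearfull ratio. A rough sketch (the 85% headroom figure is an assumption):

```python
# Rough usable-capacity estimate for the proposed cluster.
nodes = 3
osds_per_node = 4
osd_tb = 18
replicas = 3              # Ceph's default replicated pool size
nearfull_headroom = 0.85  # assumed safety margin below the nearfull ratio

raw_tb = nodes * osds_per_node * osd_tb
usable_tb = raw_tb / replicas * nearfull_headroom
print(f"raw: {raw_tb} TB, usable: ~{usable_tb:.0f} TB")
```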
Questions
  • Will Ceph be able to cache frequently accessed data in RAM to provide almost SSD-like performance? Hence I'm opting for high RAM in all 3 servers.
  • Can I select which server is the primary storage server? I.e. can I have all SSDs in my primary storage server for performance and spinning disks in the other servers? In the event the primary fails, everything would just run a bit slower until I can get the SSD storage server back online.
  • I've heard Ceph supports tiered caching, but I've also heard this has been deprecated?
  • Any other suggestions I haven't thought about would be appreciated.
 
Using 3 X Proxmox servers dedicated to storage only and multiple node servers running VMs from this storage.
While that works, if you have more nodes available, you'd be better served with 4-5 OSD nodes.

Use high capacity spinning disks i.e. 4 X 18TB each, in each server
Spinning disks = poor performance. You'd be better off with a MUCH higher count of smaller disks, preferably SSDs.
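To put rough numbers on the spindle-count point (the ~150 random IOPS per 7.2K RPM HDD figure is an assumed ballpark, not a measurement):

```python
# Back-of-envelope: same raw capacity per node, different spindle counts.
per_hdd_iops = 150  # assumed random IOPS for a 7.2K RPM HDD

few_big = 4 * per_hdd_iops      # 4 x 18TB drives per node
many_small = 12 * per_hdd_iops  # 12 x 6TB drives per node, same 72TB raw
print(few_big, many_small)
```

Same capacity, three times the random IOPS, and a smaller blast radius when a single disk fails.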

What you didn't mention:

Multiple high-bandwidth, low-latency network interfaces. The faster the better, but ultimately you want to match that performance to how much I/O you plan to push through your storage subsystem, and how fast you want to be able to recover from a fault.
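As a sketch of the recovery side of that sizing, here's the best-case time to re-replicate one failed 18TB OSD at different line rates, assuming the network is the bottleneck (with spinning disks, the disks usually are):

```python
osd_tb = 18  # capacity of the failed OSD

# TB -> terabits -> seconds at the given line rate in Gb/s
for gbit in (10, 25, 40):
    seconds = osd_tb * 8 * 1000 / gbit
    print(f"{gbit} Gb/s: ~{seconds / 3600:.1f} h")
```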
 
Networking-wise, I use Juniper switches, where I'd run the Proxmox storage servers on a dual QSFP+ 40Gb/s network, each server connected to two switches for network redundancy, and the nodes likely using 10Gb/s connections. Similar to my current setup on TrueNAS and VMware ESXi.
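For the dual-switch redundancy, a sketch of what the storage network could look like in /etc/network/interfaces on a Proxmox node (interface names and addresses are placeholders; active-backup bonding is used here because it needs no MLAG support across the two switches):

```
# Assumed NIC names and subnet -- adjust to your hardware.
auto bond0
iface bond0 inet static
    address 10.10.10.11/24
    bond-slaves enp65s0f0 enp65s0f1
    bond-mode active-backup
    bond-miimon 100
```

With MLAG-capable switches you could use bond-mode 802.3ad instead for active-active bandwidth.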

Spinning disks = poor performance, agreed. However, all nodes are running web applications, so I'd hope that with very high RAM the majority of the frequently accessed data would be cached, making the performance difference mostly unnoticeable.
 
Will Ceph be able to cache frequently accessed data into RAM to provide an almost SSD like performance?
No. If you want to make use of your RAM in that capacity, make sure your applications cache at their level, not at the disk subsystem.
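As a minimal sketch of what caching at the application level can look like (the render function and its cost are hypothetical):

```python
from functools import lru_cache

@lru_cache(maxsize=4096)
def render_page(slug: str) -> str:
    # Hypothetical expensive operation that would otherwise hit the
    # Ceph-backed storage on every request; repeat calls for the same
    # slug are answered straight from the application's RAM.
    return f"<html>page for {slug}</html>"
```

Web stacks usually get this via memcached/Redis or the framework's own page cache rather than an in-process decorator, but the principle is the same.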

Can I select which server is the primary storage server?
No. Ceph is distributed, there's no "primary" storage server.

I heard Ceph supports tiered caching but also heard this has been deprecated?
Correct. Cache tiering is deprecated; it wasn't useful in most cases.

Any other suggestions I haven't thought about would be appreciated.
I guess my first reply was for this question :)
I'd hope with very high RAM majority of the frequently accessed data would all be cached making performance somewhat unnoticeable.
If this is the case, just put them behind Cloudflare (or another CDN of your choice) and the problem is solved.
 
