Ceph and ZFS, with mixed disks and 1Gb interconnects

gowger

Member
Jan 30, 2019
Hi all,

Been playing with Proxmox installs via PXE booting, and setting up new hardware: a Dell C6100, with each node having a single SSD plus 4 SAS spinning-rust drives.

It's a budget setup with standard dual-port 1Gb Ethernet cards. I'm curious as to the best setup here, given that I can't hope for much performance from Ceph with that network, and it is advised to use raw disks for Ceph.

Should I install to ZFS pools on the spinning rust and later configure CEPH to use the SSDs? Ideally I'd like to have two ceph pools, fast and slow. But maybe the network bottleneck will mean that it will never be better than slow anyway.
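
For reference, as far as I understand, separating a fast and a slow pool would be done with CRUSH device-class rules (Luminous and later), roughly like this - rule/pool names and PG counts below are just placeholders:

Code:
# one replicated rule per device class (ssd / hdd)
ceph osd crush rule create-replicated fast-rule default host ssd
ceph osd crush rule create-replicated slow-rule default host hdd
# one pool pinned to each rule
ceph osd pool create fast 64 64 replicated fast-rule
ceph osd pool create slow 64 64 replicated slow-rule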

Or would it be more sensible to use the SSDs for the Proxmox install and have Ceph only provide a glacial backup pool?

Maybe there are some choices I have not considered, such as partitioning the SSDs and splitting them between ZFS and Ceph, since it probably takes quite a lot to choke up the SSD bandwidth. What kind of problems would this cause?

The future load will be Kubernetes clusters running web applications with clustered databases, message queues, etc. Priority is reliability and redundancy.

Appreciate any advice here
 
The future load will be Kubernetes clusters running web applications with clustered databases, message queues, etc. Priority is reliability and redundancy.

You mean running on your PVE as your IaaS? That should work.

Appreciate any advice here

Why is ZFS involved in a cluster? It does not mix well in such a setup with shared storage.

For best performance (in this test setup), I'd go with a small partition on the SSD for PVE, the rest for the Ceph OSDs, and all disks to Ceph. I'd use LACP network bonding if your switch supports it. If you go into production, you should have at least a spare SSD and a good PVE disk backup, but I strongly recommend having two SSDs for this setup.
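
A LACP bond in /etc/network/interfaces looks roughly like this - interface names and addresses are only examples, and the switch ports need a matching 802.3ad/LACP group:

Code:
iface eno1 inet manual
iface eno2 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0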
 
Thank you for the advice. Yes, PVE as IaaS is the structure I'm going for, for VM-level isolation of clusters and high availability of Kubernetes nodes.

I was seeing ZFS as just a reliable base layer for local storage on the hardware nodes, one that would not be part of the distributed storage. I was trying to evaluate which way round makes the most sense: using the spinners for distributed storage and the SSDs for local, or vice versa, or whether it's possible to have both.

So, like you say, it seems ZFS doesn't have a role in this scenario, especially considering the new BlueStore back end.
 
