Moving from RAID to Ceph in 3 node cluster - will this work?

Dolomike

We'd like to get away from hardware RAID and move to distributed storage. I've been reading a number of posts about 3-node clusters with Ceph and how they're not ideal, but I see that many people have run them without issues. We'll add a 4th node sometime in the future, but I'd like to make sure we're approaching this the right way.

We're not looking for a fully high-availability, fault-tolerant system for now, but it would be nice to be able to distribute our services and take a server down for physical maintenance, upgrades, or failures without some of our services becoming unavailable during that time, and without having to migrate everything each time. I'm aware of the risk when only two servers are available in the pool; that will be mitigated when the 4th node is added, and we have full backup systems in place should a disaster occur. The idea of being able to build out our systems as we grow, and how easily Ceph can help us do that, is very appealing.

Currently each server has 20 cores, 48GB+ of RAM, enterprise SSDs for CTs/VMs and OSDs, spinners for bulk storage, 1x 10GbE port, and 2 or 4 1GbE ports (we'll likely upgrade all servers to 4 1GbE ports). The 1GbE ports are bonded and have VLAN-aware bridges on top. The Proxmox hosts are on a management VLAN and serve a number of Linux containers and a couple of Windows VMs on two other VLANs. Likely 2-3 OSDs per server to start.
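For reference, a bonded 1GbE pair with a VLAN-aware bridge on top typically looks something like the following in /etc/network/interfaces. The interface names, bond mode (802.3ad assumes LACP on the switch), VLAN ID, and addresses are all placeholders; adjust them to your hardware:

    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad            # LACP; requires switch support
        bond-xmit-hash-policy layer2+3

    auto vmbr0
    iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

    # management IP on VLAN 10 (example values)
    auto vmbr0.10
    iface vmbr0.10 inet static
        address 192.168.10.11/24
        gateway 192.168.10.1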


The image below shows the proposed network setup. There are also a few local workstations in the mix.

[image: proposed network setup diagram]

What's the ideal network setup for this? I'm a bit confused about how to create the Ceph networks and which networks to connect them to, especially with all the VLANs we have set up. Should the Ceph public and private networks both be on the 10GbE network? Should I create a separate Corosync VLAN as well?

Thanks for your help!
 
What's the ideal network setup for this? I'm a bit confused about how to create the Ceph networks and which networks to connect them to, especially with all the VLANs we have set up. Should the Ceph public and private networks both be on the 10GbE network? Should I create a separate Corosync VLAN as well?
Creating the Ceph networks is simple: first define the interfaces with IP addresses in the GUI, then specify those addresses in the web GUI when configuring Ceph. AFAIU you have just one 10GbE interface per node (though in the diagram it looks like a meshed network; in that case some adaptations in the network configuration file would be necessary).
Rather than bonding the interfaces and creating VLANs, physically separate the Corosync cluster network from the management network.
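Once those interfaces have addresses, the Ceph networks can be given at init time; a minimal sketch, assuming made-up subnets 10.10.10.0/24 (public) and 10.10.20.0/24 (cluster):

    pveceph init --network 10.10.10.0/24 --cluster-network 10.10.20.0/24

    # which ends up in /etc/pve/ceph.conf roughly as:
    [global]
        public_network = 10.10.10.0/24
        cluster_network = 10.10.20.0/24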
 
It's considered best practice to separate Corosync, Ceph public, and Ceph private traffic onto separate networking infrastructure.

I didn't set up my 5-node cluster this way, but run the primary 10GbE NIC in a fault-tolerant active-backup configuration: a second NIC sits on standby and steps in if the primary fails. It's working fine.
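An active-backup bond like that would look something like this in /etc/network/interfaces (interface names are placeholders):

    auto bond0
    iface bond0 inet manual
        bond-slaves enp1s0 enp2s0
        bond-miimon 100
        bond-mode active-backup
        bond-primary enp1s0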

I use the following optimizations to max out disk and network IOPS, learned through trial and error (a command sketch follows the list):

Set write cache enable (WCE) to 1 on SAS drives (sdparm -s WCE=1 -S /dev/sd[x])
Set VM cache to none
Set VM to use the VirtIO SCSI single controller and enable the IO thread and discard options
Set VM CPU type to 'host'
Set VM CPU NUMA if server has 2 or more physical CPU sockets
Set VM VirtIO Multiqueue to number of cores/vCPUs
Install the qemu-guest-agent software inside the VM
Set Linux VMs IO scheduler to none/noop
Set RBD pool to use the 'krbd' option if using Ceph
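As a rough illustration of how several of these map to commands (the VM ID 100, volume name, storage ID "ceph-pool", and queue count are placeholder assumptions; verify each option against your own setup first):

    # SCSI controller, IO thread + discard on the disk, CPU type, NUMA
    qm set 100 --scsihw virtio-scsi-single
    qm set 100 --scsi0 ceph-pool:vm-100-disk-0,iothread=1,discard=on
    qm set 100 --cpu host
    qm set 100 --numa 1

    # multiqueue roughly equal to the vCPU count; enable the guest agent
    qm set 100 --net0 virtio,bridge=vmbr0,queues=8
    qm set 100 --agent enabled=1

    # enable krbd on the RBD storage
    pvesm set ceph-pool --krbd 1

    # inside a Linux guest: switch the IO scheduler to none
    echo none > /sys/block/sda/queue/scheduler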
 
