Ceph GUI improvements on Roadmap - what are these?

victorhooi

Apr 3, 2018
Hi,

I noticed that the Proxmox Roadmap mentions "Ceph GUI Improvements".

This was actually there before Proxmox 5.2 came out - but it seems to have been moved to Proxmox 5.3 now.

Just curious - what exactly are these Ceph GUI improvements?

I'm looking at setting up a new Ceph cluster for a 3-node Proxmox HA setup in the near term, and honestly I'm super confused =(. It would be great if the new GUI stuff made things easier (e.g. the new cluster management GUI in 5.2 is great).

Also - I read that Ceph Mimic won't be coming to Proxmox until late 2019 - bummer =(.

Thanks,
Victor
 
What do you see as super confusing?
 
The networking is where I'm getting tripped up.

I have 3 servers each with Proxmox 5.2 installed. This is already in a Proxmox cluster.

They each have a Crucial MX500 SSD for the OS install - and then an Optane 900p SSD for the Ceph storage.

Each server has an Intel X520-DA2 network card installed. I have two DAC cables going from each server to a 10GbE switch.

The idea was to use one DAC cable for the Ceph traffic, and the other DAC cable for everything else (e.g. Proxmox management network.)

However, I'm not sure how to set this up. I've been reading through a few articles and I'm still not sure of the best way, or how to start.

https://pve.proxmox.com/wiki/Ceph_Server
https://pve.proxmox.com/wiki/Manage_Ceph_Services_on_Proxmox_VE_Nodes
https://www.proxmox.com/en/training/video-tutorials/item/install-ceph-server-on-proxmox-ve

The actual mainboard itself also has 1GbE NICs - I suppose I could use them as well?
 
The actual mainboard itself also has 1GbE NICs - I suppose I could use them as well?
To start from the bottom: I strongly suggest that you use the 1GbE NICs for your corosync traffic, otherwise other traffic on the interface will interfere with corosync and, in the worst case, reset your cluster nodes. Preferably also add a second corosync ring for redundancy.
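
A second ring on corosync 2.x (as shipped with Proxmox VE 5.x) is declared with two interface blocks in the totem section. A minimal sketch, assuming the two 1GbE networks sit on 192.168.10.0/24 and 192.168.20.0/24 (the subnets and cluster name are illustrative, not from this thread):

```
# /etc/corosync/corosync.conf (excerpt) - subnets are assumptions, adjust to yours
totem {
  version: 2
  cluster_name: pve-cluster
  rrp_mode: passive             # redundant ring protocol: fail over to ring1 if ring0 dies
  interface {
    ringnumber: 0
    bindnetaddr: 192.168.10.0   # first 1GbE network (assumed)
  }
  interface {
    ringnumber: 1
    bindnetaddr: 192.168.20.0   # second 1GbE network (assumed)
  }
}
```

Each node entry in the nodelist also needs a matching ring1_addr, and you have to bump config_version and restart corosync on all nodes after editing.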

The idea was to use one DAC cable for the Ceph traffic, and the other DAC cable for everything else (e.g. Proxmox management network.)
The separation of storage traffic is a must, as Ceph will max out the 10GbE link (especially during recovery) if your combined disk throughput is greater than 10GbE.
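
In Ceph terms this split is the public vs. cluster network. A sketch of the relevant ceph.conf lines, with example subnets (adjust to your own addressing):

```
# /etc/pve/ceph.conf (excerpt) - subnets are examples
[global]
    # client <-> MON/OSD traffic
    public network  = 10.10.10.0/24
    # OSD <-> OSD replication and recovery traffic (the dedicated 10GbE DAC link)
    cluster network = 10.10.20.0/24
```

On Proxmox the public network is normally set when you run 'pveceph init --network <subnet>' on the first node.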

But where do you see the confusion in the GUI? The network section in the GUI configures the '/etc/network/interfaces' file on the system. You can see this when you review the pending configuration change.
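
As a rough illustration, the resulting file for the two X520 ports could look like this (interface names and addresses are assumptions; your NICs will likely be named differently):

```
# /etc/network/interfaces (excerpt) - names and addresses are examples only
auto ens1f0
iface ens1f0 inet manual
# first 10GbE port: carries management and VM traffic via the bridge

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.11
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports ens1f0
    bridge_stp off
    bridge_fd 0

auto ens1f1
iface ens1f1 inet static
    address 10.10.20.11
    netmask 255.255.255.0
# second 10GbE port: dedicated to the Ceph network
```

On PVE 5.x, changes saved through the GUI are written as pending and applied on the next reboot (or by bringing the interface up manually).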

Try our documentation, it includes the setup of Ceph on a hyper-converged cluster. https://pve.proxmox.com/pve-docs/
 
