How to switch Ceph to Proxmox VE Ceph?

H.c.K

Member
Oct 16, 2019
Hi,
We run a Ceph cluster in our company. The colleague who looked after the system has left. We have 6 node servers and a working frontend in front of the cluster.

Below is information about one of the nodes. How can I migrate this system to Proxmox VE? Can you point me in the right direction?
I am very satisfied with Proxmox products and use them successfully in my personal projects, so I would like to move this system to Proxmox VE.

[root@host ~]# cat /etc/redhat-release
CentOS Linux release 7.7.1908 (Core)
[root@host ~]# /bin/ceph --version
ceph version 13.2.7 (71bd687b6e8b9424dd5e5974ed542595d8977416) mimic (stable)
[root@host ~]# ceph tell osd.* version
osd.0: { "version": "ceph version 13.2.7 ( ) mimic (stable)"}
osd.1: { "version": "ceph version 13.2.5 ( ) mimic (stable)"}
osd.2: { "version": "ceph version 13.2.5 ( ) mimic (stable)"}
osd.3: { "version": "ceph version 13.2.7 ( ) mimic (stable)"}
osd.4: { "version": "ceph version 13.2.5 ( ) mimic (stable)"}
osd.5: { "version": "ceph version 13.2.5 ( ) mimic (stable)"}
osd.6: { "version": "ceph version 13.2.7 ( ) mimic (stable)"}
osd.7: { "version": "ceph version 13.2.7 ( ) mimic (stable)"}
osd.8: { "version": "ceph version 13.2.7 ( ) mimic (stable)"}
osd.9: { "version": "ceph version 13.2.7 ( ) mimic (stable)"}
osd.10: { "version": "ceph version 13.2.7 ( ) mimic (stable)"}
osd.11: { "version": "ceph version 13.2.7 ( ) mimic (stable)"}
osd.12: { "version": "ceph version 13.2.6 ( ) mimic (stable)"}
osd.13: { "version": "ceph version 13.2.6 ( ) mimic (stable)"}
osd.14: { "version": "ceph version 13.2.6 ( ) mimic (stable)"}
osd.16: { "version": "ceph version 13.2.6 ( ) mimic (stable)"}
osd.17: { "version": "ceph version 13.2.6 ( ) mimic (stable)"}
osd.18: { "version": "ceph version 13.2.6 ( ) mimic (stable)"}
osd.19: { "version": "ceph version 13.2.6 ( ) mimic (stable)"}
osd.20: { "version": "ceph version 13.2.6 ( ) mimic (stable)"}
osd.21: { "version": "ceph version 13.2.6 ( ) mimic (stable)"}
osd.22: { "version": "ceph version 13.2.6 ( ) mimic (stable)"}
osd.23: { "version": "ceph version 13.2.6 ( ) mimic (stable)"}
osd.24: { "version": "ceph version 13.2.6 ( ) mimic (stable)"}
osd.25: { "version": "ceph version 13.2.6 ( ) mimic (stable)"}
osd.26: { "version": "ceph version 13.2.6 ( ) mimic (stable)"}
osd.27: { "version": "ceph version 13.2.6 ( ) mimic (stable)"}
osd.28: { "version": "ceph version 13.2.6 ( ) mimic (stable)"}
osd.29: { "version": "ceph version 13.2.6 ( ) mimic (stable)"}
osd.30: { "version": "ceph version 13.2.6 ( ) mimic (stable)"}
osd.31: { "version": "ceph version 13.2.6 ( ) mimic (stable)"}
osd.32: { "version": "ceph version 13.2.6 ( ) mimic (stable)"}
osd.33: { "version": "ceph version 13.2.6 ( ) mimic (stable)"}
osd.34: { "version": "ceph version 13.2.6 ( ) mimic (stable)"}
osd.35: { "version": "ceph version 13.2.6 ( ) mimic (stable)"}
osd.36: { "version": "ceph version 13.2.6 ( ) mimic (stable)"}
osd.37: { "version": "ceph version 13.2.6 ( ) mimic (stable)"}
osd.38: { "version": "ceph version 13.2.6 ( ) mimic (stable)"}
osd.39: { "version": "ceph version 13.2.6 ( ) mimic (stable)"}
osd.40: { "version": "ceph version 13.2.6 ( ) mimic (stable)"}
osd.41: { "version": "ceph version 13.2.6 ( ) mimic (stable)"}
osd.42: { "version": "ceph version 13.2.6 ( ) mimic (stable)"}
osd.43: { "version": "ceph version 13.2.6 ( ) mimic (stable)"}
osd.44: { "version": "ceph version 13.2.8 ( ) mimic (stable)"}
osd.45: { "version": "ceph version 13.2.8 ( ) mimic (stable)"}
osd.46: { "version": "ceph version 13.2.8 ( ) mimic (stable)"}
osd.47: { "version": "ceph version 13.2.8 ( ) mimic (stable)"}
osd.48: { "version": "ceph version 13.2.8 ( ) mimic (stable)"}
osd.49: { "version": "ceph version 13.2.8 ( ) mimic (stable)"}
osd.50: { "version": "ceph version 13.2.8 ( ) mimic (stable)"}
osd.51: { "version": "ceph version 13.2.8 ( ) mimic (stable)"}
osd.52: { "version": "ceph version 13.2.8 ( ) mimic (stable)"}
osd.53: { "version": "ceph version 13.2.8 ( ) mimic (stable)"}
 

Alwin

Proxmox Retired Staff
Retired Staff
Aug 1, 2017
Below is information about one of the nodes. How can I migrate this system to Proxmox VE? Can you point me in the right direction?
First off, you will need to upgrade, since Mimic is EoL. Proxmox VE supports Ceph Nautilus as its default, and probably Ceph Octopus soon as well.
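Since the OSDs above report a mix of 13.2.5 through 13.2.8, it helps to tally how many daemons run each version before planning the upgrade. A small sketch over captured `ceph tell osd.'*' version` output (on a live cluster you would pipe the command output in directly):

```shell
# Count OSD daemons per reported Ceph version.
# "sample" stands in for live `ceph tell osd.'*' version` output.
sample='osd.0: { "version": "ceph version 13.2.7 ( ) mimic (stable)"}
osd.1: { "version": "ceph version 13.2.5 ( ) mimic (stable)"}
osd.2: { "version": "ceph version 13.2.5 ( ) mimic (stable)"}
osd.3: { "version": "ceph version 13.2.7 ( ) mimic (stable)"}'

versions=$(printf '%s\n' "$sample" |
    grep -o 'ceph version [0-9.]*' |   # keep only "ceph version X.Y.Z"
    sort | uniq -c)                    # count each distinct version
printf '%s\n' "$versions"
```

Once every daemon reports the same version, the Mimic-to-Nautilus upgrade can proceed in one clean step per daemon type.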

Then you will need to reinstall each host (one by one) with Proxmox VE and Ceph. Once you have a second node, join it into the Proxmox VE cluster, and move the Ceph services (OSD, MON, MGR) of each node to Proxmox VE. This way your cluster can grow into a Proxmox VE-only Ceph cluster.
 

H.c.K

Member
Oct 16, 2019
First off, you will need to upgrade, since Mimic is EoL. Proxmox VE supports Ceph Nautilus as its default, and probably Ceph Octopus soon as well.

Then you will need to reinstall each host (one by one) with Proxmox VE and Ceph. Once you have a second node, join it into the Proxmox VE cluster, and move the Ceph services (OSD, MON, MGR) of each node to Proxmox VE. This way your cluster can grow into a Proxmox VE-only Ceph cluster.
Dear @Alwin;
As far as I understand, the process will be something like the one below. If I am wrong, can you correct me? If I have understood correctly, I will have questions during the migration process.


Existing Ceph
Current Ceph structure:

oldhost51: Ceph 13.2.7, CentOS 7.7.1908, 5 OSDs (osd.0, osd.2-5)
oldhost52: Ceph 13.2.7, CentOS 7.7.1908, 6 OSDs (osd.21-26)
oldhost53: Ceph 13.2.8, CentOS 7.7.1908, 9 OSDs (osd.44-49, osd.51-53)
oldhost56: Ceph 13.2.7, CentOS 7.7.1908, 6 OSDs (osd.6-11)
oldhost74: Ceph 13.2.7, RHEL 7.0 (Maipo), 8 OSDs (osd.12-14, osd.16-20)
oldhost81: Ceph 13.2.7, CentOS 7.7.1908, 17 OSDs (osd.27-43)

Step 1:
  1. Buy 2 new nodes for Proxmox VE + Ceph.
  2. Install Proxmox VE on the 2 new nodes (newpve1 and newpve2).
  3. Set up the Proxmox VE cluster and Ceph structure.

Step 2:
  1. Move the data from oldhost51 and oldhost52 to newpve1 and newpve2.
  2. Reinstall oldhost51 and oldhost52 with Proxmox VE (newpve3 and newpve4).
  3. Add them to the Proxmox VE Ceph cluster (newpve1, newpve2, newpve3, newpve4).
  4. Move the data from oldhost53 and oldhost56 to newpve3 and newpve4.
  5. Reinstall oldhost53 and oldhost56 (newpve5 and newpve6).
  6. Add them to the Proxmox VE Ceph cluster (newpve1, newpve2, newpve3, newpve4, newpve5, newpve6).
  7. Move the data from oldhost74 and oldhost81 to newpve5 and newpve6.
  8. Reinstall oldhost74 and oldhost81 (newpve7 and newpve8).
  9. Add them to the Proxmox VE Ceph cluster (newpve1, newpve2, newpve3, newpve4, newpve5, newpve6, newpve7, newpve8).
 

Alwin

Proxmox Retired Staff
Retired Staff
Aug 1, 2017
Existing Ceph
Current Ceph structure:
Try to even out the OSD count on the hosts. It will benefit data placement and recovery/rebalance.
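To see the current imbalance at a glance, you can count OSDs per host from `ceph osd tree`. A sketch over a captured, abbreviated snippet (the token scan copes with column positions shifting between Ceph releases):

```shell
# Count OSDs per host bucket in `ceph osd tree` output.
# "sample" is a captured, abbreviated snippet; on a live cluster
# pipe `ceph osd tree` into the awk program instead.
sample='-2 7.27738 host oldhost51
 0 hdd 3.63869 osd.0 up 1.00000 1.00000
 2 hdd 3.63869 osd.2 up 1.00000 1.00000
-3 10.91607 host oldhost81
27 hdd 3.63869 osd.27 up 1.00000 1.00000
28 hdd 3.63869 osd.28 up 1.00000 1.00000
29 hdd 3.63869 osd.29 up 1.00000 1.00000'

counts=$(printf '%s\n' "$sample" | awk '
    {
        # remember the most recent "host <name>" bucket line
        for (i = 1; i < NF; i++)
            if ($i == "host") host = $(i + 1)
    }
    /osd\./ { count[host]++ }    # OSD leaf row under the current host
    END { for (h in count) print h, count[h] }')
printf '%s\n' "$counts"
```

From the table above, the spread is roughly 5 to 17 OSDs per host; getting those counts closer together before the migration will make recovery during the node-by-node reinstall much more predictable.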

  1. Buy 2 new nodes for Proxmox VE + Ceph.
Not needed if you can remove an existing node from the cluster.

Set up the Proxmox VE cluster and Ceph structure.
By creating a cluster, I meant corosync [0] first, as the ceph.conf will be shared amongst the nodes. You can start with the corosync [0] cluster once you have installed the first node. Then, after running pveceph init [1], you copy the ceph.conf from the existing cluster to /etc/pve/.
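A minimal command outline of those steps, assuming a freshly installed first node and that oldhost51 holds a usable admin keyring (hostnames and the cluster name are examples, and this is a sketch, not a tested procedure):

```shell
# On the first freshly installed Proxmox VE node (example names):
pvecm create newcluster        # create the corosync cluster

pveceph init                   # set up the PVE Ceph config layer

# Bring over the existing cluster's config and admin keyring so this
# node can reach the old MONs (paths per standard Ceph/PVE layout):
scp root@oldhost51:/etc/ceph/ceph.conf /etc/pve/ceph.conf
scp root@oldhost51:/etc/ceph/ceph.client.admin.keyring \
    /etc/pve/priv/ceph.client.admin.keyring

ceph -s                        # verify the node sees the existing cluster
```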

Once the Proxmox VE node can connect to the existing cluster, you can add the Ceph services [1] to that node.

After the first node is done, you reinstall the other nodes one at a time.
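Between reinstalls, it is worth gating on cluster health so only one node's worth of OSDs is ever degraded at a time. A rough sketch of that cycle (not a tested procedure):

```shell
# Before taking a node down for reinstall, stop automatic rebalancing:
ceph osd set noout

# ... reinstall the node with Proxmox VE, re-add MON/MGR/OSD services ...

# Wait until recovery has finished before touching the next node:
until ceph health | grep -q HEALTH_OK; do
    sleep 60    # recovery of a full node's OSDs can take a while
done
ceph osd unset noout
```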

[0] https://pve.proxmox.com/pve-docs/chapter-pvecm.html
[1] https://pve.proxmox.com/pve-docs/chapter-pveceph.html
 

H.c.K

Member
Oct 16, 2019
Try to even out the OSD count on the hosts. It will benefit data placement and recovery/rebalance.


Not needed if you can remove an existing node from the cluster.


By creating a cluster, I meant corosync [0] first, as the ceph.conf will be shared amongst the nodes. You can start with the corosync [0] cluster once you have installed the first node. Then, after running pveceph init [1], you copy the ceph.conf from the existing cluster to /etc/pve/.

Once the Proxmox VE node can connect to the existing cluster, you can add the Ceph services [1] to that node.


After the first node is done, you reinstall the other nodes one at a time.

[0] https://pve.proxmox.com/pve-docs/chapter-pvecm.html
[1] https://pve.proxmox.com/pve-docs/chapter-pveceph.html
Hi @Alwin ,
I am using a Proxmox Mail Gateway cluster, so I can also set up the cluster structure you mentioned. It would be much easier for me if I were installing a system from scratch.

What really worries me is moving the virtual servers in the currently running Ceph system to Proxmox VE. This is really hard work. It does not need to be done right now, but I will need to start soon. I will also need to learn the new system. There are network settings on the switch for the existing Ceph servers. If we get a paid subscription from you, can you help us with this process? Can we make a migration plan? There are currently 195 active virtual servers.
 

Alwin

Proxmox Retired Staff
Retired Staff
Aug 1, 2017
I am using a Proxmox Mail Gateway cluster, so I can also set up the cluster structure you mentioned. It would be much easier for me if I were installing a system from scratch.
What does the Proxmox Mail Gateway have to do with Proxmox VE? And it is just a recommendation, not a must.

What really worries me is moving the virtual servers in the currently running Ceph system to Proxmox VE. This is really hard work. It does not need to be done right now, but I will need to start soon. I will also need to learn the new system. There are network settings on the switch for the existing Ceph servers. If we get a paid subscription from you, can you help us with this process? Can we make a migration plan? There are currently 195 active virtual servers.
We can help with concrete questions about your planned migration or the configuration of Proxmox VE + Ceph. But we do not cover network or system planning/design.

To learn Ceph, it is best to start with Ceph's architecture guide [0] and our documentation on Ceph [1].

To migrate the VMs to Proxmox VE, see our documentation [2] and our wiki [3] as well.

[0] https://docs.ceph.com/en/octopus/architecture/
[1] https://pve.proxmox.com/pve-docs/chapter-pveceph.html
[2] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_importing_virtual_machines_and_disk_images
[3] https://pve.proxmox.com/wiki/Migration_of_servers_to_Proxmox_VE
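As a rough per-VM flow for the import described in [2] (VMID 120, the image path, and the storage name ceph-pool below are placeholder examples, not your actual names):

```shell
# Create an empty target VM, then import the old disk image into it:
qm create 120 --name migrated-vm --memory 4096 --net0 virtio,bridge=vmbr0
qm importdisk 120 /mnt/export/vm-120.qcow2 ceph-pool

# The imported disk shows up as "unused"; attach it and make it bootable:
qm set 120 --scsi0 ceph-pool:vm-120-disk-0
qm set 120 --boot c --bootdisk scsi0
```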
 
