Local storage to Ceph migration

jsterr

Renowned Member
Jul 24, 2020
We have a single Proxmox node with local storage. We want to buy 2 more servers and set up Ceph.
What are the correct steps to do this?

1. Build the corosync cluster network?
2. Install Ceph on node 2 and node 3?
3. Migrate from node-1 (local storage) to node-2 or node-3 (does this work, since the Ceph cluster only has 2 nodes at that point)?
4. Shred the disks, install Ceph and create OSDs on node-1?

Does this work? Or is there something I am missing? Is it doable without downtime?
Thanks.
 
You need at least 3 Ceph nodes.
 
You need at least 3 Ceph nodes.
Yes, we currently have one and will buy two more, so that's three in total. My question is: how can I get the VMs from a single node with local storage onto a Ceph cluster that only has 2 nodes and gets its third node after the VMs are migrated?
 
1 -> Build the corosync cluster network
2 -> Install Ceph on all nodes!
3 -> Create MONs (I would recommend them on all 3 nodes)
4 -> Create OSDs on node 2 + 3 (you get degraded PGs as long as node 1 has no OSDs, but that's OK for the migration time)
5 -> Create a pool for the images
6 -> Migrate the VM storage live into Ceph (there is a "Move Disk" button)
7 -> Shred the disks and build OSDs on node 1

-> before you try it:
-> do not use spinning disks for OSDs! They are much too slow for VM images in terms of latency
-> use server-grade SSDs or NVMe drives for OSDs
-> use at least a separate back-end network for Ceph with at least 10 Gbit/s!
-> a separate front-end network (separate from VM traffic) would be ideal -> at least 10 Gbit/s!

If you are careful it should be possible without downtime, but think carefully about the capabilities of node 1! You will not be happy with a 1 Gbit/s network and you will not be happy with spinning disks! (Rough CLI commands for the steps above are sketched below.)
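For reference, the numbered steps map roughly to the following commands on a current Proxmox VE; the cluster name, IP address, device path, pool name and VM/disk IDs are placeholders, not values from this thread, and the same actions are available in the GUI:

# Step 1, on the existing node 1: create the cluster
pvecm create pve-cluster
# Step 1, on node 2 and node 3: join the cluster via node 1's IP
pvecm add 192.168.1.11

# Step 2, on every node: install the Ceph packages
pveceph install
# Once, on one node: write the initial Ceph config, pointing at the Ceph back-end network
pveceph init --network 10.10.10.0/24

# Step 3, on every node: create a monitor
pveceph mon create

# Step 4, on node 2 and node 3: create one OSD per data disk
pveceph osd create /dev/sdb

# Step 5, on one node: create the RBD pool and register it as a Proxmox storage
pveceph pool create vm-pool --add_storages

# Step 6, per VM disk: live-move the disk onto the Ceph storage ("Move Disk" in the GUI)
qm move_disk 100 scsi0 vm-pool --delete 1

# Step 7, on node 1 after the migration: wipe the old disks, then create OSDs there as well
pveceph osd create /dev/sdb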
 
Some additions to @Klaus Steinberger's walk-through.
3 -> create Mon's (I would recommend them on all 3 nodes)
To emphasize, you must have 3x MONs.

4 -> Create OSD's on node 2 + 3 (you get degraded PG's as long as node 1 has no OSD's, but thats ok for migration time)
Better to create the pool with 2/2 (size/min_size): you get no degraded PGs during the migration (as long as they are not stuck at creation), and write I/O will stop if fewer than 2 replicas are available, keeping your data safe. Afterwards the replica size can be adjusted to 3/2, which allows I/O to continue even with one missing copy.
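A minimal sketch of that, assuming the pool is called vm-pool (placeholder name):

# create the pool with 2 replicas and min_size 2 for the migration phase
pveceph pool create vm-pool --size 2 --min_size 2 --add_storages
# once node 1 has OSDs, raise the replica count to 3 (min_size stays at 2)
ceph osd pool set vm-pool size 3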

-> use at least a separate Back End network for CEPH with at least 10 GBit/s !
-> separate Frontend Network (separate form VM traffic) will be ideal -> at least 10 GBit/s !
Separate the Ceph traffic from any other traffic, especially from Corosync's traffic. Whether you further split Ceph's public and cluster networks is up to your liking/needs.
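If you do want to split them, one way (assuming 10.10.10.0/24 for Ceph's public network and 10.10.11.0/24 for its cluster/replication network, both placeholder subnets) is to pass both networks when initializing Ceph:

# public (client/MON) traffic and OSD replication traffic on separate subnets
pveceph init --network 10.10.10.0/24 --cluster-network 10.10.11.0/24

This ends up as the public_network and cluster_network settings in the [global] section of /etc/pve/ceph.conf; Corosync should get its own dedicated link either way.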
 
