Migration path from ZFS via iSCSI to Ceph?

udotirol

When it comes to storage, we have been using ZFS over iSCSI in our clusters for years.

Now, for a couple of new projects, we require S3-compatible storage, and I am unsure about the best way to handle this situation. I am tempted to use MinIO, but I've read mixed reviews about it, and Ceph seems to be very well integrated into Proxmox VE.

What do you think is the best way to migrate our infrastructure from ZFS over iSCSI to Ceph? Is that even possible within the same cluster, or would it be better to create new clusters and somehow import the existing (many) VMs into them? If I can, I would like to avoid having both ZFS via iSCSI and Ceph available, both for hardware cost and for maintenance effort.

Thanks in advance
 
There are a few important things missing from your description:
- Is this a business application or a home/lab?
- What is the usable capacity required now and what do you expect growth to be?
- What are the performance expectations from the storage?

Beyond that, while you may be well aware of the Ceph architecture, it may be beneficial to establish a baseline. Ceph is mostly used as hyper-converged storage, allowing you to utilize the same hardware for both compute and storage. While this approach works for many people, it's not for everyone.

As you mentioned, there is more than one solution: converge Ceph onto the existing compute nodes, build a completely new PVE/Ceph cluster, or build a standalone Ceph cluster.

You are currently using, presumably, external storage via the ZFS/iSCSI plugin. You mentioned that the setup has worked for you for a long time, which suggests the hardware may not be the most recent. For a converged approach on existing hardware:
- Do you have enough servers in the cluster to create a supported Ceph configuration (3+ nodes)?
- Can the servers accommodate modern storage? NVMe drives are highly recommended; for a business environment it does not make sense to build with anything else.
- Do you have sufficient network bandwidth for Ceph replication? A dedicated network is highly recommended. Can you add additional cards?
- Considering Ceph replication, can you fit enough capacity into your servers to accommodate your needs? (See the rough sketch after this list.)
- Are the CPU and RAM in the existing servers sufficient to satisfy both your compute requirements and the added storage processing load if you go converged?
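
To make the capacity question concrete, here is a back-of-the-envelope sketch. The numbers are illustrative only, assuming the default size=3 replication and the common guidance of keeping a cluster below roughly 80% raw utilization so it can rebalance after a failure:

```python
# Rough usable-capacity estimate for a replicated Ceph pool.
# Assumptions (illustrative, not from this thread): size=3
# replication and staying below ~80% raw utilization so the
# cluster can re-replicate after losing a node.

def ceph_usable_tb(nodes: int, raw_tb_per_node: float,
                   replicas: int = 3, fill_ratio: float = 0.8) -> float:
    """Raw capacity divided by the replica count, derated by fill_ratio."""
    return nodes * raw_tb_per_node / replicas * fill_ratio

# Example: 3 nodes, each with 4 x 7.68 TB NVMe
print(f"{ceph_usable_tb(3, 4 * 7.68):.1f} TB usable")  # ~24.6 TB
```

In other words, with 3-way replication you should plan on roughly a quarter of the raw capacity being usable in practice.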

The next approach is to build a brand-new converged PVE infrastructure in parallel to the existing one. There are many threads in the forum about hardware recommendations, as well as a Proxmox-produced paper that can be found on the Proxmox website. When you are done, backup/restore would be the easiest migration path for you as it stands today. There are variations to this approach, e.g. expanding the existing PVE cluster and using the PVE "move storage" functionality to migrate.

Only you can decide what the right investment is. If yours is a commercial application, feel free to reach out to us for an alternative storage solution.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Now, for a couple of new projects, we require S3-compatible storage, and I am unsure about the best way to handle this situation. I am tempted to use MinIO, but I've read mixed reviews about it, and Ceph seems to be very well integrated into Proxmox VE.
The Ceph implementation in Proxmox lacks any tooling for object store deployment or management, so you'd be better off deploying the RADOS Gateway yourself, e.g. with virtual machines acting as the RGW heads.
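
Once an RGW endpoint is up, any standard S3 client can talk to it. Here is a minimal sketch with boto3; the endpoint URL and the keys (normally created with `radosgw-admin user create`) are placeholders for whatever your deployment provides, and 7480 is just the default RGW port:

```python
import boto3

# Endpoint and credentials are placeholders; substitute the address of
# your RGW head and the keys created with `radosgw-admin user create`.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.internal:7480",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello ceph")
print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())
```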
What do you think is the best way to migrate our infrastructure from ZFS over iSCSI to Ceph?
@bbgeek17 already asked the most relevant questions, but in general it's pretty simple:
1. Stand up the new Ceph cluster.
2. Attach the new RBD pool to your datacenter storage.
3. Migrate the virtual disks from the old store to the new one; see https://pve.proxmox.com/wiki/Storage_Migration (a scripted variant is sketched below).
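
If you'd rather script step 3 than click through the GUI's "Move disk" action, here is a rough sketch using the proxmoxer Python client against the move_disk API call. The host, node, VMID, disk, and storage names are placeholders for your environment:

```python
from proxmoxer import ProxmoxAPI

# Connection details are placeholders; adjust for your cluster.
prox = ProxmoxAPI("pve1.example.internal", user="root@pam",
                  password="secret", verify_ssl=False)

# Move one virtual disk from the old ZFS/iSCSI storage to the new
# RBD pool; delete=1 removes the source image after a successful copy.
task = prox.nodes("pve1").qemu(100).move_disk.post(
    disk="scsi0",
    storage="ceph-rbd",
    delete=1,
)
print("started task:", task)
```

Loop that over your VMIDs and disks and the whole fleet can be drained off the old storage with no downtime.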

If I can, I would like to avoid having both ZFS via iSCSI and Ceph available, both for hardware cost and for maintenance effort.
Once you have migrated all relevant data off the old SAN, nothing requires you to keep it :)
 
