Which Shared Storage for 2 node cluster

Dec 19, 2020
We have a 2 node cluster running for 3 years now, currently with local storage only (SSD + HDD, ZFS RAID1 + RAID10).
This is a very stable production environment! There have been no issues at all... we started with version 6, and all updates through version 8 went smoothly.
No crashes or any problems at all.
We currently use ZFS replication for the most important VMs over a separate 10 GBit network.
But now I would like to add shared storage, and I really can't decide which way to go…
  1. Buy a 3rd node and use Ceph?
    We don't need more resources in CPU, RAM or disk space. Should we buy a third node only for the sake of Ceph?
    How does migration from ZFS to Ceph work? Should I still set up the disks as a mirror, or do I only need single disks for Ceph?
    What about boot volumes? Currently running on ZFS SSDs in RAID1.

  2. Use Linstor/DRBD?
    This sounds like the best solution to me, but support from Linstor is too expensive for us.
    And I also don't find many users here for community support…
    Does Proxmox really not support this setup?
  3. Buy a separate Storage System?
    Which technology to use? NFS or iSCSI? Synology, or TrueNAS on Supermicro hardware?
    But this will be a single point of failure if not mirrored… and a mirrored setup will also be expensive…

Which way would you go? And why?
 
You should procure a 3rd node and deploy the built-in Ceph. That would be the simplest approach, and it would also bring your cluster up to a properly supported/recommended number of PVE members.
We don't need more resources in CPU, RAM or disk space. Should we buy a third node only for the sake of Ceph?
Nobody can answer that, because you did not provide any information about your CPU, RAM or details about disk space.
You will likely need additional disks in all nodes, potentially additional network connectivity, etc. If you are at the limit or oversubscribed on CPU and RAM, then you will need more. Ceph here is a hyper-converged solution that runs on the same hardware as your virtualization; it will compete for resources with your VMs.
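If you do go this route, the actual bootstrap is short. Here is a minimal sketch of the usual pveceph steps, assuming a dedicated storage network on 10.10.10.0/24 and an empty disk /dev/sdb per node (the subnet and device names are placeholders, not from this thread):

  # On each node: install the Ceph packages
  pveceph install

  # Once, on the first node: write the cluster-wide ceph.conf,
  # binding Ceph to the dedicated storage network
  pveceph init --network 10.10.10.0/24

  # On each node: create a monitor (3 nodes -> 3 monitors)
  pveceph mon create

  # On each node: hand Ceph a whole empty disk per OSD - no RAID underneath
  pveceph osd create /dev/sdb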
How does migration from ZFS to Ceph work?
You would use the PVE "Move disk" option to move each virtual disk to the new storage object. Until the migration is done, you will have both pools of storage active simultaneously.
Mirroring terminology is not applicable to Ceph: redundancy comes from replicating data across OSDs and nodes, so you give Ceph individual raw disks rather than mirrored pairs.
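For the record, the same move can also be scripted from the CLI. A rough sketch, assuming VM 100 with disk scsi0, container 200, and a target storage named ceph-pool (the IDs and storage name are made up for illustration):

  # Move a VM disk to the new storage; --delete removes the old copy on success
  qm move_disk 100 scsi0 ceph-pool --delete 1

  # Containers use pct instead of qm
  pct move_volume 200 rootfs ceph-pool --delete 1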
What about boot volumes? Currently running on ZFS SSDs in RAID1.
This will not change.

Prior to deploying Ceph you should go through the many guides (both printed and video) to understand this technology. It's radically different from the local storage you are used to.
Buy a separate Storage System?
Which technology to use? NFS or iSCSI? Synology, or TrueNAS on Supermicro hardware?
But this will be a single point of failure if not mirrored… and a mirrored setup will also be expensive…
This is an option that allows you to leave everything you have now in place and not change existing nodes.
The answer to NFS vs iSCSI is "it depends". It depends on many factors, including your familiarity with each technology, your understanding of the limitations of each, the appropriate resources, etc.
If you need HA on the storage/NAS side, then you should buy a dual-controller storage system.
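Whichever box you pick, attaching it to PVE is a single command, since storage definitions are cluster-wide. A sketch assuming an NFS export at 192.168.1.50:/mnt/tank/pve (the address, path and storage ID are placeholders):

  # Register the NFS export as shared storage for VM disks and containers
  pvesm add nfs truenas-nfs --server 192.168.1.50 --export /mnt/tank/pve --content images,rootdir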

You can also build everything "for free": after all, the Synologys and TrueNASes of the world simply aggregate open-source technologies into a nice, manageable package for the masses.
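As an illustration of how thin that packaging really is: a plain Linux box with ZFS can export a dataset over NFS in two commands. A sketch assuming a pool named tank and the PVE nodes on 10.0.0.0/24 (pool name and subnet are assumed; nfs-kernel-server must be installed):

  # Create a dataset for PVE and export it read-write to the cluster subnet
  zfs create tank/pve
  zfs set sharenfs="rw=@10.0.0.0/24,no_root_squash" tank/pve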



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Thank you for your detailed answer.
Both nodes currently have 128 GB RAM, one SSD RAID (1-2 TB), one HDD RAID (2-4 TB) and a single CPU (an Intel Xeon 4110 and a 4215R, respectively).
I have 4 LXC containers and 7 VMs running.

OK, I will keep the RAID1 SSDs as boot devices, and I will add more disks for Ceph.
 
So many questions about using Ceph, even though I am reading so many docs.

How should I distribute the disks?
I thought about having 2 HDDs and only 1 SSD in each node. Will this be too little?

Is one separate 10 GBit network enough for Ceph, or do I really need a second network? I read about public and private Ceph networks...
Of course I have an extra 1 GBit network for normal LAN traffic...
 
