Storage Heavy-Duty Setup

ProxmoxFan2020

New Member
Jul 25, 2020
Hi! Proxmox is actually a great product.

I'm setting up a new 2-node cluster environment.
It's for a small company. They will run 5 VMs on each node.

[Node1 & 2]
- Ryzen 3500 or 3600
- 32 GB RAM
- 256 GB NVMe for the Win10 template (& Proxmox system)
- 256 GB NVMe for 5× linked-clone VMs
- 512 GB SATA SSD cache for the HDD below
- 2 TB HDD for file server & backup storage

[Nic]
- 1× on-board NIC
- 4× expansion NICs (PCIe)
- 2 ports bonded for LAN/WAN, 3 ports for Ceph Corosync

[Raspberry Pi 4 mini NAS]
- 8 GB microSD for avoiding Ceph split brain.

Questions:
1. Each VM needs high-performance storage (like 1500 Mb/s). Will this setup work fine?
2. Can 5-port bonding substitute for an expensive 10 GbE expansion? (To the degree of 3 Gbps, I mean)
3. I heard a 2-node cluster needs a 3rd place to store the corosync DB to avoid split brain. Is this mandatory?

Thanks!
 
3. I heard a 2-node cluster needs a 3rd place to store the corosync DB to avoid split brain. Is this mandatory?

Yes

2. Can 5-port bonding substitute for an expensive 10 GbE expansion? (To the degree of 3 Gbps, I mean)

That depends on the switch infrastructure, but a direct cable connection (twinax) between two used 10 GbE cards costs less than 100 euros, sometimes even less than 50.
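
For a direct node-to-node link, a minimal /etc/network/interfaces stanza could look like this (interface name and addresses are placeholders, not from the original post):

# node1: point-to-point 10 GbE link, no switch needed
auto enp3s0
iface enp3s0 inet static
    address 10.10.10.1/30
    mtu 9000    # jumbo frames help storage traffic; set the same on both ends

# node2 gets the same stanza with address 10.10.10.2/30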

1. Each VM needs high-performance storage (like 1500 Mb/s). Will this setup work fine?

That is a high-throughput storage requirement, and I do not know whether you meant Mb or MB. Either way, with Ceph the throughput depends on the network.
 
Questions:
1. Each VM needs high-performance storage (like 1500 Mb/s). Will this setup work fine?
If it is Mb/s, that is ~187 MB/s, which is low performance and achievable even with SAS.
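
If you want to check what the storage actually delivers, a quick fio run on the target datastore would show whether it sustains ~187 MB/s sequentially (the path and sizes below are only examples):

# sequential 1M write test against the default local storage path
fio --name=seqwrite --directory=/var/lib/vz --rw=write \
    --bs=1M --size=4G --direct=1 --numjobs=1 --group_reporting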

2. Can 5-port bonding substitute for an expensive 10 GbE expansion? (To the degree of 3 Gbps, I mean)

If you do a LAG with 5 ports you will get 5 Gbps of aggregate bandwidth, but how do you plan to connect them: directly, or via a switch?
If you are in fact looking for 1500 MB/s and not 1500 Mb/s, then the required throughput cannot be met with this architecture.
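
As a rough sketch, an LACP bond on a PVE node would go into /etc/network/interfaces like this (interface names and addresses are placeholders, and the switch must support 802.3ad). Keep in mind that a single TCP stream still tops out at one member link, i.e. 1 Gbps:

auto bond0
iface bond0 inet manual
    bond-slaves enp1s0 enp2s0 enp3s0
    bond-mode 802.3ad              # LACP, needs switch support
    bond-miimon 100
    bond-xmit-hash-policy layer3+4 # spread flows across member links

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.11/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0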


3. I heard a 2-node cluster needs a 3rd place to store the corosync DB to avoid split brain. Is this mandatory?
You can run a 2-node cluster in Proxmox and set the two_node: 1 flag to keep quorum when one node fails, but it is not advisable.
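
For reference, that flag goes into the quorum section of /etc/pve/corosync.conf; a minimal sketch:

quorum {
  provider: corosync_votequorum
  two_node: 1
  # two_node implies wait_for_all: after a cold start, both nodes
  # must be seen once before the cluster becomes quorate
}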

Ceph cannot run with fewer than 3 nodes, and even with 3 nodes and the number of possible OSDs mentioned, the performance and availability requirements will not be met.
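
For context, a replicated Ceph pool defaults to size 3 (one copy per host) with min_size 2, which is why 3 OSD hosts are the floor; the pool name and PG count below are just examples:

ceph osd pool create vm-pool 128      # 128 placement groups, example value
ceph osd pool set vm-pool size 3      # 3 replicas, one per host
ceph osd pool set vm-pool min_size 2  # I/O continues with 2 of 3 replicas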
 
- 2 ports bonded for LAN/WAN, 3 ports for Ceph Corosync
I think you mixed up Ceph and Corosync a bit here. Corosync is the mechanism the Proxmox VE cluster uses to agree on changes throughout the cluster. Ceph is a clustered file/block-level storage.

3. I heard a 2-node cluster needs a 3rd place to store the corosync DB to avoid split brain. Is this mandatory?
A third vote is needed. Before installing full corosync on an RPi, which is not officially supported by the tooling, have a look at the QDevice. This allows you to add another vote to the cluster: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_corosync_external_vote_support
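
Roughly, the QDevice setup looks like this (the Pi's address is a placeholder):

# on the Raspberry Pi (external vote holder)
apt install corosync-qnetd

# on each Proxmox VE node
apt install corosync-qdevice

# on one PVE node, pointing at the Pi
pvecm qdevice setup 192.168.1.50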
 
