2 Node with MSA

zav0k

New Member
May 29, 2024
Hello everyone, I have a client with the following setup:

2 nodes, each:
  • HPE ProLiant DL380 Gen11, Intel Xeon-Gold 6426Y 16-core (2.50GHz, 37.5MB)
  • 128GB RAM (4 x 32GB PC5-4800B RDIMM)
  • 8 x Hot Plug 2.5in Small Form Factor x1 Tri-Mode Basic Carrier, MR408i-o controller
  • HPE 240GB SATA 6G Read Intensive SFF (2.5in) Basic Carrier Multi Vendor SSD
  • HPE Ethernet 10Gb 2-port SFP+ BCM57412 OCP3 Adapter
  • HPE E208e-p SR Gen10 12Gb 2-port External SAS Controller

1 HPE MSA 2062 12Gb SAS SFF storage array:
  • HPE MSA 11.5TB SAS 12G Read Intensive SFF (2.5in) M2 3-year Warranty 6-pack SSD Bundle
  • HPE 2.0m External Mini SAS High Density (HD) to Mini SAS Cable

There are three virtual machines:
  • Windows Server 2025 AD + File Server
  • Windows Server 2025 SQL + management application
  • Windows Server 2025 Remote Desktop
These are VMs without any particular or special configurations.

Goal: migrate everything to Proxmox while using the SAN as shared storage.

I’d like to know whether anyone on the forum has experience with a similar configuration, and what the potential critical issues might be with this type of setup without using Ceph.

I also looked on the official site for a wiki article covering this scenario, and I found this:
https://pve.proxmox.com/wiki/Two-Node_High_Availability_Cluster

Is there anything more up-to-date?

Thanks in advance to anyone who replies.
 
Hi @zav0k ,
The configuration you describe is pretty common, and it works well when configured properly. There are many resources online that can help with your configuration, but nothing replaces hands-on work and a deep understanding of each step.
We wrote an article that may be helpful for the high-level concepts. It is geared toward iSCSI; however, the shared-storage concepts are the same for SAS.
https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/
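At a high level, the per-node work on the MSA boils down to multipath plus LVM. A rough sketch, purely for orientation (the multipath alias msa-lun0, the VG name vg_msa, and the storage ID msa-lvm are placeholders I made up, not names from the MSA or the PVE docs):

  # Identify the MSA volume as seen through multipath on each node
  multipath -ll

  # Create the PV/VG once, from a single node only
  pvcreate /dev/mapper/msa-lun0
  vgcreate vg_msa /dev/mapper/msa-lun0

  # Register the VG as shared LVM storage for the whole cluster
  pvesm add lvm msa-lvm --vgname vg_msa --shared 1 --content images,rootdir

Keep in mind that thick LVM over a SAN means no thin provisioning and limited snapshot support; the article above goes into the trade-offs.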

I would not use that particular wiki article for your situation. The links below are more appropriate:

https://pve.proxmox.com/wiki/High_Availability
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_pvecm
https://pve.proxmox.com/wiki/Cluster_Manager (see the note that for smaller 2-node clusters, the QDevice can be used to provide a 3rd vote)
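
To make the 2-node plus QDevice point concrete, the setup looks roughly like this (the qnetd host at 192.0.2.10 is a placeholder; it can be any small Linux machine or VM outside the cluster):

  # On the external quorum host (not a cluster node)
  apt install corosync-qnetd

  # On both PVE nodes
  apt install corosync-qdevice

  # From one PVE node, register the external vote
  pvecm qdevice setup 192.0.2.10

  # Confirm the cluster now expects 3 votes
  pvecm status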

Cheers


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Great guide so far.

One last question:
let’s say I purchase a Proxmox VE Standard Subscription (1 CPU/year) and I have a client with this type of scenario.
Suppose I need to replace a node, perform a major version upgrade, or, more generally, deal with an unexpected event where something goes wrong.

Since this is a non-standard scenario (because, as I understand it, Proxmox tends to favor hyperconverged setups with 3 nodes), can Proxmox support still assist me in these cases?
 
This is no different than Microsoft with an MSA: they will provide standard support, to the best of their abilities, for their product and its functionality. If the issue points to storage, they will refer you to your storage vendor.
A SAN is a supported configuration with PVE.
Cheers


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
The best option is to use Ceph in a RAID-5-like layout with a parity node.
When asynchronous storage is acceptable, use local ZFS with replication. GFS2 would not be supported.
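
Just to illustrate what "local ZFS with replication" means in PVE terms (VM ID 100 and target node pve2 are placeholders), the built-in pvesr jobs look roughly like this; note that replication is asynchronous, so a failover can lose changes made since the last run:

  # Both nodes need a local ZFS storage with the same storage ID
  # Replicate VM 100 to the other node every 15 minutes
  pvesr create-local-job 100-0 pve2 --schedule "*/15"

  # Check replication state
  pvesr status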
 
The best option is to use Ceph in a RAID-5-like layout with a parity node.
When asynchronous storage is acceptable, use local ZFS with replication. GFS2 would not be supported.
The OP's environment is a two-host cluster with an MSA SAN. Neither Ceph nor ZFS would be a technological fit here...


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 