Dell MD1420 SAS HBA expansion - multipath

noname

Hello all, does anyone know how to properly configure a Dell MD1420 SAS enclosure?
For example, I have 3 dual-port SAS disks, but in the Proxmox Disks tab I see not 3 but 6 disks.
Is it possible to see only 3?
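From what I have read, the dual-port disks show up once per path until dm-multipath merges them. A minimal sketch of what I think is needed (assuming the stock Debian multipath-tools package and default settings):

    apt install multipath-tools
    # with find_multipaths enabled, multipathd coalesces the two paths of each
    # dual-port disk into a single /dev/mapper/mpathX device
    printf 'defaults {\n    find_multipaths yes\n}\n' > /etc/multipath.conf
    systemctl enable --now multipathd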
 
Hello @bbgeek17 and thanks for the reply,

I have set up a test lab and I am very close. The question is:
server1, hba1, with disks 1,2,3,4,5,6,7,8 - connected to the MD1420 SAS
server2, hba2, with disks 1,2,3,4,5,6,7,8 - connected to the MD1420 SAS

Both servers see all 8 disks in the MD1420.

Can I use the disks by their unique IDs to create Ceph storage (OSDs)?
For example, 4 disks with unique IDs per server.

Of course, when I am ready, the final setup will be 3 nodes with 24 SAS SSDs in total (8 disks per node), shared via a 100Gb private network in a full-mesh topology.
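Something like this is what I have in mind on each node (just a sketch; the wwn below is a made-up example):

    # list the stable, unique per-disk names
    ls -l /dev/disk/by-id/ | grep wwn
    # create an OSD on one specific disk of this node (hypothetical id)
    pveceph osd create /dev/disk/by-id/wwn-0x5000c500a1b2c3d4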
 
I have set up a test lab and I am very close. The question is:
server1, hba1, with disks 1,2,3,4,5,6,7,8 - connected to the MD1420 SAS
server2, hba2, with disks 1,2,3,4,5,6,7,8 - connected to the MD1420 SAS
Does this mean that you are using the MD1420 as a dumb disk enclosure, just presenting physical disks to hosts without any RAID protection?

Both servers see all 8 disks in the MD1420.
So no redundant pathing?

Can I use the disks by their unique IDs to create Ceph storage (OSDs)?
For example, 4 disks with unique IDs per server.
I am not a Ceph expert, but in theory it should be possible. Of course, there is nothing but good documentation that prevents someone from using a disk that is already in use on another node, causing data corruption.
Additionally, the goal with Ceph is redundancy and the ability to handle node failure. With two servers you will be in an unsupported Ceph setup. And, of course, an MD failure will make the entire data set inaccessible. So I am not sure what the advantage of Ceph would be in this case. Keep in mind that you will be cross-replicating the data on the same array.
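If you do try it, at the very least verify on every node that a disk is not already claimed before creating an OSD on it. A quick sanity check could be something like:

    # show each disk's unique WWN and whether it already carries a filesystem or LVM signature
    lsblk -o NAME,WWN,SIZE,FSTYPE,MOUNTPOINT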


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
You are as fast as Flash Gordon, thanks by the way. ;-)

It is a demo right now. If it works the way I want, of course I will add a second MD1420 for dual path; also, the MD1420 has dual controllers and is built for these cases, and of course there will be a minimum of 3 nodes, and up.

As I said above, in the Proxmox Disks tab I see all the disks, but it is a different thing to add OSDs with unique IDs per server, so that every Proxmox node talks only to its specific disk IDs.
For example:
24 disks in total
8 disks per node, 3 nodes in total
shared via a 100Gb private network in a full-mesh topology
and a 40Gb host network
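On the Ceph side, the split I have in mind would look something like this in ceph.conf (the subnets are placeholders):

    [global]
        # 40Gb host/client network (assumed subnet)
        public_network = 192.168.40.0/24
        # 100Gb private full-mesh for OSD replication (assumed subnet)
        cluster_network = 192.168.100.0/24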
 
of course I will add a second MD1420 for dual path;
Dual path is per-disk. In a standard deployment you have a host with two SAS/FC/network interfaces that can access the JBOD/SAN. If a path fails, multipath takes care of moving traffic over to the other path. A second MD1420 would be a completely separate JBOD, afaik.
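If you cable one host to both EMMs of the same MD1420, you can confirm that each disk really has two paths (assuming multipathd is running):

    # each dual-ported disk should appear as one mpath device with two active paths
    multipath -ll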

Your device only has two connections and supports only a single host:
Maximum number of servers 1
https://www.dell.com/support/manual...uid=guid-9368273d-2334-4ca0-8210-db5baab21bb3

Really sounds like you are trying to fit a square peg into a round hole here.

Good luck
https://www.experts-exchange.com/qu...torage-DELL-MD1420-with-3-ESXI-6-7-Hosts.html


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 