Fujitsu RX300 S7 with FC storage Eternus DX90 S2

krupecek · New Member · Nov 19, 2022
Hello,
I'm a beginner with Proxmox, so maybe I'm doing something wrong or misunderstanding something.
I have two Fujitsu RX300 S7 nodes, each connected to an Eternus DX90 S2 storage array over 2x Fibre Channel via Emulex LightPulse HBAs.
I want to build an HA cluster. I have Proxmox installed on another machine, a Dell R620, and this Dell will act as the master for the two Fujitsu nodes.
In Proxmox on the two Fujitsu nodes I can see /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde (one device per Fibre Channel path to the nodes), but I can't add LVM in the Proxmox web UI.
On the Eternus DX90 S2 I have created 2x LUNs, and I don't know which Host Response setting to use (see screenshot).
 

Attachments

  • host_response.PNG (34.9 KB)
You need to create a physical volume on the disks first and then create the volume group on top, as described here. However, I'd skip the partition creation; it can lead to read/write amplification if the block sizes do not match. So in your case:

Code:
pvcreate /dev/mapper/mpatha
pvcreate /dev/mapper/mpathb

then create the volume group as described in the linked article.
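To sketch the remaining steps (the VG name "san_vg" and the storage ID "fc-lvm" are example names, not from this thread):

Code:
# Create a shared volume group on top of the multipath devices
vgcreate san_vg /dev/mapper/mpatha /dev/mapper/mpathb

# Register it as shared LVM storage in PVE (run on one node;
# "fc-lvm" is an arbitrary example storage ID)
pvesm add lvm fc-lvm --vgname san_vg --shared 1

After that, the storage should appear on all cluster nodes and you can place VM disks on it.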
 
Hello all.

I'm currently evaluating various hypervisors as an alternative to vmware. At the moment, the SAN/FC connection to Proxmox is stopping me from putting PM VE on the favorites list. I would have no problem familiarizing myself with configuring multipath on Debian. However, it should be ensured that these adjustments are retained and functional after an upgrade. Is there any official documentation from Proxmox on this topic? Or is an integration into Proxmox planned? Thanks!

PS: We run Fujitsu Eternus DX and Nimble storage here, all FC-attached.
 
Is there any official documentation from Proxmox on this topic?
Any documentation from RHEL, Debian, or the storage vendors is fine. Multipath is not that hard, and there is no choice with respect to the storage technology: an FC-based SAN in a supported PVE cluster has to be thick LVM.
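As a starting point, a minimal /etc/multipath.conf on a Debian/PVE host might look like the sketch below. The WWID shown is a placeholder, not a real value; you can read the actual WWID of a LUN with /lib/udev/scsi_id -g -u -d /dev/sdb.

Code:
defaults {
    user_friendly_names yes
    find_multipaths yes
}

blacklist {
    # Blacklist everything, then whitelist the SAN LUNs by WWID
    wwid .*
}

blacklist_exceptions {
    # Placeholder WWID -- replace with the real WWID of each LUN
    wwid "36000000000000000000000000000000"
}

After editing, restart multipathd and check that both paths per LUN show up with multipath -ll. This file lives in /etc, so it survives PVE upgrades like any other Debian configuration.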


Or is an integration into Proxmox planned?
No. Proxmox has stressed multiple times that FC-based SAN is a dying technology in the realm of hypervisors, and there are not many posts about it on the forum, so maybe it's just very straightforward, or no one uses it.


PS: We run Fujitsu Eternus DX and Nimble storage here, all FC-attached.
We've been running PVE successfully for years, first on a DX100 and afterwards on a DX200. We've been running FC-based multipathing on Linux on various distributions for almost two decades, and it has not changed much.
 
Thanks LnxBil! I will give it a try in a test environment. Thick LVM is another topic to investigate.
 
Thick LVM is another topic to investigate
It's a volume manager that thickly allocates space at volume creation and does not maintain the metadata that could be used for snapshots or clones. It's a way to allow multiple hosts write access to the same disk without stepping on each other. It's not the best way, but it works for your purpose.
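For illustration, this is what thick allocation looks like on a shared VG (the VG and LV names are example values; Proxmox normally runs this for you when you create a VM disk on the LVM storage):

Code:
# Thick allocation: the full 32G is reserved on the VG immediately
lvcreate -L 32G -n vm-100-disk-0 san_vg

# Verify: the new LV shows up with its full size allocated
lvs san_vg

A thin pool would allocate space lazily and support snapshots, but thin LVM is not safe for concurrent multi-host access, which is why shared FC SAN setups stick to plain thick LVM.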

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
