Fiber SAN direct to Proxmox

KungFuLemonade

Jun 17, 2021
Hi guys

Total new guy here trying my best, but... well, it's not good enough :) I have an MSA 2040 connected by fiber to my HP DL380 Gen9, with Proxmox installed on the tin.

In Proxmox I can see this:

[Screenshot: disk pic.PNG]

I can see all 4 fiber connections, and I have installed multipath.

But I can't add it as an LVM disk, as it's saying there are no unused disks!

Any help would be awesome, cheers!
 
That's probably because all your MSA 2040 disks display "Usage ext4" instead of "Usage No" in the screenshot.
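
To double-check that from the shell, something like the following should list any filesystem signatures on the device (a sketch, assuming the multipath device ends up named /dev/mapper/mpath0):
Code:
lsblk -f /dev/mapper/mpath0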

Could you please also post the following?
Code:
cat /etc/pve/storage.cfg
 
Thanks, I tried to map it as a directory, but I'm guessing that I should be mounting it as LVM?

Here's the output.

Code:
dir: local
        path /var/lib/vz
        content iso,backup,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

lvm: VMStorage
        vgname VMStorage
        content rootdir,images
        nodes PVE01
        shared 0

dir: san1
        path /media/san1/


This is the fdisk output:
Code:
root@PVE01:~# fdisk -l /dev/mapper/mpath0
Disk /dev/mapper/mpath0: 112.8 TiB, 123999996608512 bytes, 242187493376 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 1048576 bytes
 
Yes, I would recommend exporting one big LUN and then setting up LVM on top of that LUN (in the PVE GUI).

If you add the iSCSI storage using the GUI, you have to pay attention to the "Use LUNs directly" checkbox.
In case you haven't seen them yet: there are helpful wiki pages about iSCSI in general and about multipath.
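
If you prefer the CLI, a rough sketch of the same thing, assuming multipath already presents the LUN as /dev/mapper/mpath0 (the storage ID "san-lvm" and VG name "san-vg" are made-up examples):
Code:
# Check that all 4 FC paths are grouped under one multipath device
multipath -ll

# Put LVM on the whole multipath device
pvcreate /dev/mapper/mpath0
vgcreate san-vg /dev/mapper/mpath0

# Register the volume group as LVM storage in Proxmox
pvesm add lvm san-lvm --vgname san-vg --content rootdir,images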
 
Thanks for the reply. OK, so I'm looking at that, but really struggling with this: where would I find the IQN of the SAN? Or is this the 4 FC ports?
 
With 4 FC connections you can't use iSCSI; the LUN appears like a local disk. So should multipath be used to change it into a single disk?
And does every node use the same config?
 
Yes, multipath is highly recommended here, I would say required. The problem for the original poster was that "somehow" an ext4 filesystem was created on the MSA LUN. That makes the disk "used".
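
On the IQN question above: there is no IQN here, that is an iSCSI concept. FC ports are identified by WWPNs instead; as a sketch, the host-side port names can usually be read from sysfs:
Code:
cat /sys/class/fc_host/host*/port_name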

The original poster needs to understand what data is on that LUN by mounting /dev/mapper/mpath0 to a temporary location and checking the contents. If the filesystem is empty or not needed, then the filesystem/partition needs to be removed so that LVM can utilize the disk.
There are many cheat-sheets available online on how to prep the disk with LVM:
https://techknowledgeworld.wordpress.com/2015/02/21/logical-volume-manager-lvm-cheat-sheet/
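
A minimal sketch of that inspection step, assuming the ext4 filesystem from the screenshot sits directly on /dev/mapper/mpath0:
Code:
# Mount read-only and look at what is on the LUN
mkdir -p /mnt/inspect
mount -o ro /dev/mapper/mpath0 /mnt/inspect
ls -la /mnt/inspect
umount /mnt/inspect

# Only if the data is disposable: wipe the filesystem signature
# so the disk shows up as unused again (irreversible!)
wipefs -a /dev/mapper/mpath0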

In theory the Proxmox GUI should allow provisioning the mpath0 device as well.

As for multiple nodes - one has to understand the implications of using LVM on a _shared_ disk:
https://pve.proxmox.com/wiki/Storage

Specifically:
- Thin LVM is not supported on shared storage
- Thick/Regular is the only LVM variant supported for shared storage
- No snapshot support with Thick LVM

If all nodes in the cluster are configured with identical paths and have the same multipath configuration, they should all see the resulting disk. Just make sure you understand the above.
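
For illustration, a shared LVM entry in /etc/pve/storage.cfg would look something like this (VG name taken from the storage.cfg output above; "shared 1" is the key difference):
Code:
lvm: VMStorage
        vgname VMStorage
        content rootdir,images
        shared 1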


 

About

The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.
We think our community is one of the best thanks to people like you!

Get your subscription!

The Proxmox team works very hard to make sure you are running the best software and getting stable updates and security enhancements, as well as quick enterprise support. Tens of thousands of happy customers have a Proxmox subscription. Get yours easily in our online shop.

Buy now!