Which partition for SAN with SAS disks

Adamced

New Member
Oct 14, 2024
Good morning everyone.
I have a problem: I installed Proxmox VE 8.2.7 on 3 HP D560 nodes and created a cluster without issues.
As dedicated disks for the VMs I have an HP P2000 G3 SAN with a SAS connection. On this SAN I created two RAID 6 volumes, one of 13 TB and the second of 8 TB. The Proxmox nodes see them as SAS disks (I don't know why they show up twice). So, my question: can I use ZFS on these two disks to create two storages for the VMs and share them among the nodes?
 

Attachments

  • san disk sas.JPG
  • pve.JPG
  • sas.JPG
Hi @Adamced , welcome to the forum.

You've attached shared SAS storage to multiple hosts.
Your storage is connected to each host via multiple paths for redundancy.
You are seeing the disks "double" because each path provides access to the same LUN.
Your next step is to install and configure "multipath" package on each of your hosts https://pve.proxmox.com/wiki/ISCSI_Multipath (ignore the iSCSI part. Improvements are coming to the Wiki).
Once you properly set up multipath, you need to choose a file system or volume manager that is suitable for shared storage.
ZFS is NOT appropriate in this case: it is NOT a shared file system and it is not cluster-aware.
Your choice is either thick LVM https://pve.proxmox.com/wiki/Storage:_LVM or a cluster-aware filesystem.
Proxmox does not come with a built-in cluster-aware filesystem; you would need to research, install, configure and support one on your own.
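For reference, a minimal /etc/multipath.conf along the lines of that wiki page could look roughly like this. This is only a sketch: the WWID below is a placeholder for the actual WWID of each shared LUN, and your array may want different defaults.

defaults {
        user_friendly_names   yes
        polling_interval      2
        path_grouping_policy  multibus
        failback              immediate
        no_path_retry         queue
}

blacklist {
        wwid .*
}

blacklist_exceptions {
        # placeholder - replace with the real WWID of each shared LUN
        wwid "3600c0ff000000000000000000000000"
}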

There are many guides, articles and forum posts that revolve around such a configuration. You should be able to find something that will get you started.

Good luck
https://pve.proxmox.com/wiki/Storage


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Thanks for the answer; following your advice I was able to solve it with multipath and the SAS LUNs in LVM.
Do you have any advice on how to do the same thing with LVM-thin, since that allows clones and snapshots?
I tried to follow this LVM thick link https://pve.proxmox.com/wiki/Storage but when I restart a PVE node the disk no longer works.
Thanks
 
LVM-thin on shared storage is not supported and probably never will be, due to limitations of LVM-thin. I'm not a developer though, so maybe one of the Proxmox staff will correct me. I would be happy to be wrong on this ;)
Maybe one of the suggested alternatives at https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Alternatives_to_Snapshots is a suitable option for your use case.
I myself would probably use PBS to achieve a snapshot/clone-like function.
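For example, a manual backup to a PBS datastore and a restore to a new VM ID already gets you something close to a clone. A rough sketch, where "pbs01" is an assumed storage name and 100 an example VM ID:

vzdump 100 --storage pbs01 --mode snapshot

The resulting backup can then be restored to a new VM ID from the GUI, or such backups can be scheduled under Datacenter -> Backup.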
HTH and best regards, Johannes.
 
Ok, thanks, I'll follow your advice.
After all, I've tried all the ways of using LVM-thin that I know of.

Thanks so much :)
 
Good morning,

Thanks to this post and your answers I solved my problem with the three cluster nodes and my SAN HP P2000 G3.
Now I would like to ask you another question.
As I said, I have 3 cluster nodes on a network with VLAN 100 and the addresses 10.100.1.1, .2 and .3.
I would like to dedicate this network to the server GUI and configure 6 new NICs for a 10.90.1.1, .2 and .3 network on VLAN 90. On this network I would like to move the cluster communication with the related Corosync traffic; is that possible?
 
On this network I would like to move the cluster communication with the related Corosync traffic; is that possible?
Hi @Adamced, yes, it's possible.
Changing networks is somewhat involved. I'd recommend reading up on the forum about the pitfalls, making a good backup, or even building a virtual cluster and ironing out the procedure there first.

good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Thanks to this post and your answers I solved my problem with the three cluster nodes and my SAN HP P2000 G3.
What was the solution you reached, and how does it work (2 differently sized volumes attached to 3 nodes)?
Btw, you are not forced to set up a cluster filesystem to use an MSA, since that storage simply presents its disks to the attached hosts as if they were installed locally, and MSA volumes can be presented to all attached hosts, to just one host per volume, or anything in between.
 
Hi waltar,
I solved it this way: I have 3 nodes in a cluster and, yes, it's true, I'm sharing my two LUNs created on the SAN, one of 8 TB and one of 13 TB, with all of them.
I installed multipath on all hosts following this guide
https://pve.proxmox.com/wiki/ISCSI_Multipath (ignore the iSCSI part. Improvements are coming to the Wiki).
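(For anyone following along, the multipath part of that guide boils down to roughly these steps; /dev/sdc is just an example path device:)

apt install multipath-tools
/lib/udev/scsi_id -g -u -d /dev/sdc    # print the WWID of one path to a LUN
multipath -a <WWID>                    # add that WWID to /etc/multipath/wwids
service multipath-tools restart
multipath -ll                          # each LUN should now appear once, with all its paths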
I then created the LVM volumes this way:

# Clear any old metadata from the start of the LUNs (destructive - only on empty LUNs)
dd if=/dev/zero of=/dev/sdc bs=512 count=1 conv=notrunc
dd if=/dev/zero of=/dev/sdd bs=512 count=1 conv=notrunc
dd if=/dev/zero of=/dev/mapper/mpathb bs=512 count=1 conv=notrunc
dd if=/dev/zero of=/dev/mapper/mpathc bs=512 count=1 conv=notrunc

# Partition each multipath device
fdisk /dev/mapper/mpathb
fdisk /dev/mapper/mpathc

# keystrokes inside fdisk, for each device:
g    # create a new GPT disk label
n    # new partition, accept the defaults (partition number, first and last sector)
t    # change the partition type to Linux LVM
     # (in GPT mode pick the LVM type from fdisk's list; 8e is only the MBR code)
w    # write the table and exit

# Create the partition device-mapper mappings (mpathb-part1, mpathc-part1)
kpartx -a /dev/mapper/mpathb
kpartx -a /dev/mapper/mpathc

# LVM physical volumes on the new partitions
pvcreate /dev/mapper/mpathb-part1
pvcreate /dev/mapper/mpathc-part1

pvscan

# One volume group spanning both LUNs
vgcreate LVM-STORAGE /dev/mapper/mpathb-part1 /dev/mapper/mpathc-part1

I edited /etc/lvm/lvm.conf and added:

# PVE (match in the following order)
# - accept /dev/mapper devices (the multipath devices live here)
# - accept the /dev/sda device (the disk Proxmox itself is installed on, EXT4)
# - reject all other devices
filter = [ "a|/dev/mapper/|", "a|/dev/sda.*|", "r|.*|" ]

I then added LVM-STORAGE to the datacenter as a shared LVM storage.
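(That corresponds roughly to an entry like this in /etc/pve/storage.cfg, assuming it was added as an LVM storage with the same name as the volume group:)

lvm: LVM-STORAGE
        vgname LVM-STORAGE
        content images,rootdir
        shared 1

(or from the CLI: pvesm add lvm LVM-STORAGE --vgname LVM-STORAGE --content images,rootdir --shared 1)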

On the 3 nodes I have an extra disk configured as backup storage to simulate snapshots, since thick LVM does not offer that function.
However, I can migrate VMs from one host to another and clone them.
 
Hi
I also solved the problem of reassigning Corosync to the new IP addresses without losing data.

I did it this way:
I first created the respective VLAN on the two switches, assigning the necessary ports to it.
On the three nodes I created a bond per node and assigned the dedicated cluster network address, 10.90.133.91/24 on node 1 and so on for the other two.
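(A rough sketch of what such a per-node bond plus VLAN could look like in /etc/network/interfaces; the NIC names eno3/eno4, the bond name and the bond mode are assumptions, not taken from this thread:)

auto bond1
iface bond1 inet manual
        bond-slaves eno3 eno4
        bond-miimon 100
        bond-mode active-backup

auto bond1.90
iface bond1.90 inet static
        address 10.90.133.91/24
        # 10.90.133.92/24 and .93/24 on the other two nodes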
From node 1 I modified the file /etc/pve/corosync.conf, removing the IPs I had on the previous NICs and assigning the new ones.
After saving, I restarted the nodes one at a time, waiting for the node in question to come back up (checking with ping).
I opened a new browser window and logged in to node 1, then restarted node two and subsequently node three; now everything works.
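(For reference, the relevant part of /etc/pve/corosync.conf after such a change looks roughly like this; the node names are examples, the addresses for nodes 2 and 3 are assumed to follow node 1, and config_version in the totem section must be incremented before saving:)

nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.90.133.91
  }
  node {
    name: pve2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.90.133.92
  }
  node {
    name: pve3
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.90.133.93
  }
}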
 

Attachments

  • Cluster.JPG
  • HA.JPG
Hello everyone,
I have another question, if you can help me.
Could you give me a guide to enable SPICE on my VMs?
I would like to allow users who work from a terminal to use the audio and, if present, the onboard webcam.
Thanks
 
Hi everyone, I installed Proxmox Backup Server.
Is there a good guide that shows the best way to set up backups and retention?
Thanks
 
