FC Multipath config not working ( at first VM creation, multipath breaks )

brunolab
New Member · Mar 27, 2024
Hi people!

As is a common thing right now, given the Broadcom decisions, my company is looking to migrate to a more viable solution in economic terms.

I've been trying to set up an HP Fibre Channel storage array with multipath, and I'm really struggling here... (We use this type of storage in over 30 sites.)

There must be something that I'm doing wrong...

These are the steps I've been following:

1. On the block device, I create a partition (end result: /dev/sdb1).
2. In the multipath config (below), I set the WWIDs in /etc/multipath/wwids and in /etc/multipath.conf.
3. pvcreate /dev/mapper/fc-01 (on the alias set previously in multipath.conf).
4. vgcreate hp_storage_1 /dev/mapper/fc-01
5. In the Proxmox GUI at the Datacenter level --> Storage --> Add --> LVM --> select the VG hp_storage_1; then I do the same for the second disk.
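The CLI steps above can be sketched like this. It's a dry-run: the `run` helper only prints each command instead of executing it, since these are destructive on real LUNs. The partitioning tool (sgdisk) and the WWID are assumptions based on my setup:

```shell
#!/bin/sh
# Dry-run sketch of the setup steps. run() only prints the commands;
# replace its body with "$@" to actually execute them on a real host.
run() { printf '+ %s\n' "$*"; }

run sgdisk -n 1:0:0 /dev/sdb                         # step 1: partition -> /dev/sdb1
run multipath -a 3600c0ff0001b9aaf7f0db36301000000   # step 2: register the WWID
run pvcreate /dev/mapper/fc-01                       # step 3: PV on the multipath alias
run vgcreate hp_storage_1 /dev/mapper/fc-01          # step 4: VG for the Proxmox LVM storage
# step 5 is done in the GUI: Datacenter -> Storage -> Add -> LVM -> hp_storage_1
```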

In the middle of this, I've been rebooting at every step, because I see under lsblk -f that whenever I change something with multipath, it doesn't show there until I reboot. So, in case I'm missing something here, I prefer to reboot the entire server to test this and get it working.

At first sight it all seems fine, UNTIL I install a VM and reboot the server.

When I reboot the server, the multipath config breaks completely.

I get something like this:

Code:
root@proxmox1:~# lsblk -f
NAME                  FSTYPE       FSVER    LABEL UUID                                   FSAVAIL FSUSE% MOUNTPOINTS
sda
├─sda1
├─sda2                vfat         FAT32          2CAB-AA06
└─sda3                LVM2_member  LVM2 001       7qGCjd-2c3R-4JEq-wL93-FHBA-EA0u-SN6Zv1
  ├─pve-swap          swap         1              1c27ba65-e9e7-43df-8dd9-d964fe2b2808                  [SWAP]
  ├─pve-root          ext4         1.0            9c329945-3746-4f74-b5e6-fdb5776b5ea8       61G    16% /
  ├─pve-data_tmeta
  │ └─pve-data
  └─pve-data_tdata
    └─pve-data
sdb
└─sdb1                LVM2_member  LVM2 001       n2wk3L-Pa0Q-u3Wa-kyAD-HZ2e-0oO4-DFiCPv
  └─hp_storage_1-vm--100--disk--0
sdc                   mpath_member
└─fc-02
  └─fc-02-part1       LVM2_member  LVM2 001       r5ddfB-JTxh-agFT-OHgg-8nHn-1KaY-4CN3O5
sdd
└─sdd1                LVM2_member  LVM2 001       n2wk3L-Pa0Q-u3Wa-kyAD-HZ2e-0oO4-DFiCPv
sde                   mpath_member
└─fc-02
  └─fc-02-part1       LVM2_member  LVM2 001       r5ddfB-JTxh-agFT-OHgg-8nHn-1KaY-4CN3O5


----------
And before doing this, here is what it looks like:

Code:
root@proxmox2:~# lsblk -f
NAME               FSTYPE       FSVER    LABEL UUID                                   FSAVAIL FSUSE% MOUNTPOINTS
sda
├─sda1
├─sda2             vfat         FAT32          3B58-6510
└─sda3             LVM2_member  LVM2 001       12BVdU-mYVw-TND2-h6qd-lYHp-A5QW-k8mMCm
  ├─pve-swap       swap         1              6c058b5e-af3d-4dbe-91ae-ac7f6bb3fcea                  [SWAP]
  ├─pve-root       ext4         1.0            bdf4468e-5252-4727-a40b-1f303fb28ccb     70.5G     4% /
  ├─pve-data_tmeta
  │ └─pve-data
  └─pve-data_tdata
    └─pve-data
sdb                mpath_member
└─fc-01
  └─fc-01-part1    LVM2_member  LVM2 001       n2wk3L-Pa0Q-u3Wa-kyAD-HZ2e-0oO4-DFiCPv
sdc                mpath_member
└─fc-02
  └─fc-02-part1    LVM2_member  LVM2 001       r5ddfB-JTxh-agFT-OHgg-8nHn-1KaY-4CN3O5
sdd                mpath_member
└─fc-01
  └─fc-01-part1    LVM2_member  LVM2 001       n2wk3L-Pa0Q-u3Wa-kyAD-HZ2e-0oO4-DFiCPv
sde                mpath_member
└─fc-02
  └─fc-02-part1    LVM2_member  LVM2 001       r5ddfB-JTxh-agFT-OHgg-8nHn-1KaY-4CN3O5

-----------------------

This is my multipath.conf:

Code:
defaults {
    find_multipaths         yes
    polling_interval        2
    path_selector           "round-robin 0"
    path_grouping_policy    multibus
    uid_attribute           ID_SERIAL
    rr_min_io               100
    failback                immediate
    no_path_retry           queue
    user_friendly_names     yes
}

blacklist {
    wwid .*
    devnode "^sda$"
}

blacklist_exceptions {
    wwid "3600c0ff0001b9aaf7f0db36301000000"
    wwid "3600c0ff0001bc46b4907b36301000000"
}

multipaths {
  multipath {
    wwid "3600c0ff0001b9aaf7f0db36301000000"
    alias "fc-01"
  }
  multipath {
    wwid "3600c0ff0001bc46b4907b36301000000"
    alias "fc-02"
  }
}
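For what it's worth, changes to multipath.conf can usually be applied without a full server reboot (which would also spare the reboot-at-every-step workaround). A dry-run sketch, where the `run` helper only prints the commands instead of executing them:

```shell
#!/bin/sh
# Dry-run sketch: run() only prints each command; swap its body for
# "$@" to execute them for real after editing /etc/multipath.conf.
run() { printf '+ %s\n' "$*"; }

run multipathd show config   # verify the parsed config, aliases included
run multipath -r             # reload/rebuild the multipath maps
run multipath -ll            # confirm both paths show up per LUN
```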

This is my wwids file:

Code:
# Multipath wwids, Version : 1.0
# NOTE: This file is automatically maintained by multipath and multipathd.
# You should not need to edit this file in normal circumstances.
#
# Valid WWIDs:
/3600c0ff0001b9aaf7f0db36301000000/
/3600c0ff0001bc46b4907b36301000000/

I don't really know what to do.

I've tried a thousand things and I can't get it to work.

If anyone can shed some light on this, I'll be very grateful.

Cheers!
 
Please use CODE-tags for text, especially configuration files.

With the block disk, i create a partition ( end result: /dev/sdb1 )
Don't use partitions with LUNs. It's unnecessary and you will probably end up with block access misalignment.

This is my wwids file
You don't need the wwids file if you have already configured your multipath devices in /etc/multipath.conf.

Please post the output (in code tags) of multipath -ll
 
Please use CODE-tags for text, especially configuration files.
Oh, I'm sorry mate, I'm new to the forum; I've edited the post to make it more readable using code tags. Thanks for the advice!

Don't use partitions with LUNs. It's unnecessary and you will probably end up with block access misalignment.
I've tried to generate a physical volume directly from the block device, but I get all sorts of errors, like: disk is partitioned, or disk is part of a multipath device... I wonder if I have to completely uninstall multipath in order to first create the physical volume and then the volume group...
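Going by the advice above, uninstalling multipath shouldn't be needed; the PV would go directly on the mapped device once the old partition signature is gone. A dry-run sketch assuming the fc-01 alias (the `run` helper only prints; wipefs is destructive, so double-check the device before running any of this for real):

```shell
#!/bin/sh
# Dry-run sketch: drop the partition table and create the PV on the
# whole multipath device. run() only prints each command.
run() { printf '+ %s\n' "$*"; }

run wipefs -a /dev/mapper/fc-01             # destroys the partition table!
run pvcreate /dev/mapper/fc-01              # PV on the whole mapped device
run vgcreate hp_storage_1 /dev/mapper/fc-01 # recreate the VG on top
```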

You don't need the wwids file if you have already configured your multipath devices in /etc/multipath.conf.
I did that because I read in the Proxmox iSCSI config docs that it has to be done by executing multipath -a <disk_id>
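For reference, that docs step looks roughly like this (dry-run sketch; the `run` helper only prints, and the WWID is the one from my multipath.conf):

```shell
#!/bin/sh
# Dry-run sketch of registering a WWID as per the docs. run() only
# prints each command instead of executing it.
run() { printf '+ %s\n' "$*"; }

run multipath -a 3600c0ff0001b9aaf7f0db36301000000   # appends the WWID to /etc/multipath/wwids
run systemctl restart multipathd.service             # pick up the change
```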

Please post the output (in code tags) of multipath -ll

Sure!

Pmox-Node-1 (already broken):

Code:
root@proxmox1:~# multipath -ll                                                                                                                                                                                                           
fc-02 (3600c0ff0001bc46b4907b36301000000) dm-6 HP,P2000G3 FC/iSCSI                             
size=1.1T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw                 
|-+- policy='round-robin 0' prio=50 status=active                                                   
| `- 3:0:0:2 sdc 8:32 active ready running                              
`-+- policy='round-robin 0' prio=10 status=enabled                 
  `- 4:0:0:2 sde 8:64 active ready running

Pmox-Node-2:

Code:
root@proxmox2:~# multipath -ll              
fc-01 (3600c0ff0001b9aaf7f0db36301000000) dm-5 HP,P2000G3 FC/iSCSI                   
size=1.1T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw       
|-+- policy='round-robin 0' prio=50 status=active             
| `- 4:0:0:1 sdd 8:48 active ready running                       
`-+- policy='round-robin 0' prio=10 status=enabled              
  `- 3:0:0:1 sdb 8:16 active ready running                    
fc-02 (3600c0ff0001bc46b4907b36301000000) dm-6 HP,P2000G3 FC/iSCSI          
size=1.1T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw            
|-+- policy='round-robin 0' prio=50 status=active                 
| `- 3:0:0:2 sdc 8:32 active ready running                        
`-+- policy='round-robin 0' prio=10 status=enabled                        
  `- 4:0:0:2 sde 8:64 active ready running

Thanks in advance!
 
