[Improvement] Shared multipath configuration

dominique.fournier

Active Member
Jul 25, 2012
Hi
We use an iSCSI Compellent SAN with multipath. It works very well, but creating a new VM is a manual, step-by-step process. One of the steps is modifying multipath.conf to add an alias with the real name of the VM for its WWID; the file is shared by all the Proxmox nodes.
I have two questions:
- Will PVE support multipath natively (create the alias in multipath, reload the service, inform all the nodes)?
- If not, is it possible to share the multipath.conf file in /etc/pve and have the daemon reloaded automatically each time the file is written? (Roughly what I have in mind is sketched below.)
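A minimal sketch of the idea, completely untested; the paths, the script name and the checksum cache location are just examples, and I am assuming multipathd is happy reading its config through a symlink:

Code:
# on every node: point the local config at the cluster filesystem (pmxcfs syncs /etc/pve)
ln -sf /etc/pve/multipath.conf /etc/multipath.conf

# /usr/local/bin/multipath-conf-watch.sh, run from cron every minute:
# reload multipathd only when the shared file has changed
NEW=$(md5sum /etc/pve/multipath.conf | cut -d' ' -f1)
OLD=$(cat /var/run/multipath.conf.md5 2>/dev/null)
if [ "$NEW" != "$OLD" ]; then
    echo "$NEW" > /var/run/multipath.conf.md5
    systemctl reload multipathd
fi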
Thanks !
 
So you create a new LUN for each VM and distribute it?

If so, that is one way to go; another would be to just use LVM on top of the multipath devices, which is the common way to do it (rough sketch below).
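A rough sketch of what that could look like, assuming a single big LUN aliased e.g. "pve-cml" in multipath.conf (the alias, VG and storage names are just examples):

Code:
# one big LUN, mapped once via multipath, then turned into a shared volume group
pvcreate /dev/mapper/pve-cml
vgcreate vg_cml /dev/mapper/pve-cml

# register it once as shared LVM storage; PVE then creates one LV per VM disk by itself
pvesm add lvm cml-lvm --vgname vg_cml --shared 1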
 
Each VM has its own LUN, which shows up four times because there are four paths between the SAN and the Proxmox nodes, and there is LVM on top of each multipath device to support backups. Each time I add a new VM, I need to modify multipath.conf manually and push it to all the cluster servers (the loop I run is shown below the output).
Do you have a better solution?
Code:
cache-cml (36000d310012b2e00000000000000001b) dm-29 COMPELNT,Compellent Vol
size=37G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  |- 10:0:0:4  sdbo 68:32   active ready running
  |- 7:0:0:4   sdbb 67:80   active ready running
  |- 8:0:0:4   sdbi 67:192  active ready running
  `- 9:0:0:4   sdbl 67:240  active ready running

Configured from the multipath.conf file:
Code:
multipath {
        wwid    36000d310012b2e00000000000000001b
        alias   cache-cml
}

The LVM layout is:
Code:
# pvs|grep cache
  /dev/mapper/cache-cml          cache-vg          lvm2 a--    <37.00g   <5.00g
# vgs|grep cache
  cache-vg            1   1   0 wz--n-   <37.00g   <5.00g
# lvs|grep cache
  vm-119-disk-1 cache-vg          -wi-a-----   32.00g
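For reference, the distribution step I currently run by hand looks roughly like this (the node names are just examples):

Code:
# copy the updated config to the other nodes and reload multipath everywhere
for node in pve2 pve3 pve4; do
    scp /etc/multipath.conf root@$node:/etc/multipath.conf
    ssh root@$node 'systemctl reload multipathd && multipath -r'
done
systemctl reload multipathd && multipath -r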
Thx
 
I ran into this 'manual' multipath.conf editing issue too. I'm using the https://github.com/TheGrandWazoo/freenas-proxmox plugin.

Every time I create a machine it creates a new iSCSI extent on my TrueNAS side, and I had to add it to multipath.conf manually. Screw that, I thought.

However, I stumbled across a setting (no clue where I found it, I had many tabs open) that makes multipath find the paths on its own. I'm using this in a home-lab situation and have no idea how it would behave in production.

All I have in my /etc/multipath.conf is:


Code:
defaults {
    user_friendly_names yes
    find_multipaths yes
    path_grouping_policy multibus
    path_selector "round-robin 0"
    prio const
    failback immediate
    no_path_retry 10
    checker_timeout 10
    path_checker readsector0
    polling_interval 10
    rr_min_io 500
    rr_min_io_rq 10
}

The magic setting was "find_multipaths yes": now every time I create a new VM, multipath finds the paths and automatically adds them to /etc/multipath/wwids and /etc/multipath/bindings (quick check below).
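A quick way to verify that a new volume was picked up, just the standard multipath commands plus a look at the two files mentioned above:

Code:
# rescan and show the current multipath topology
multipath -r
multipath -ll

# the auto-registered WWIDs and the friendly-name bindings end up in these files
cat /etc/multipath/wwids
cat /etc/multipath/bindings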

EDIT: Added the rr_min_io and rr_min_io_rq settings. Note this is for my setup with a 10 Gb backbone to the storage. If you want more info on these settings, check https://access.redhat.com/documenta...nux/7/html/dm_multipath/config_file_multipath (I'm testing different combinations). The defaults are rr_min_io 1000 and rr_min_io_rq 1.

EDIT2: I changed rr_min_io and rr_min_io_rq back to the defaults. Not sure if it was related, but a few VMs lost their boot drives afterwards and I had to restore them from backups. Let's hope it was not the introduction of multipath.
 
