FC Storage challenges

Here is my config from my DataCore storage test environment:
Code:
defaults {
    polling_interval 60
}

blacklist_exceptions {
    device {
        vendor  "DataCore"
        product "Virtual Disk"
    }
}


device {
    vendor "DataCore"
    product "Virtual Disk"
    failback 10
    path_checker tur
    prio alua
    no_path_retry fail
    # dev_loss_tmo infinity
    dev_loss_tmo 60
    fast_io_fail_tmo 5
    rr_min_io_rq 100
    # rr_min_io 100
    path_grouping_policy group_by_prio
    # path_grouping_policy failover
    # user_friendly_names yes
}

Optionally, you can add the paths at any time with multipath -a /dev/sdx.
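
When a new LUN is presented, the flow on the host looks roughly like this (sdx is just a placeholder for whatever device name the kernel assigns to the new path):

Code:
# rescan the FC HBAs so the kernel sees the new LUN (rescan-scsi-bus.sh is in the sg3-utils package)
rescan-scsi-bus.sh

# add the WWID of the new device to /etc/multipath/wwids
multipath -a /dev/sdx

# let multipathd re-read the config and create the new map
multipathd reconfigure

# verify the new mpath device and its paths
multipath -ll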
 
If I am using a wildcard config, will I be able to add new LUNs without editing the multipath file?
Does anyone have an example file to show me / educate me?
I always name my LUNs with an alias, but it also works without one; you can simply use the mpath device that is created without an entry. This also depends on your other settings, e.g. whether you blacklist everything that is not explicitly named, which I never do. I blacklist the controllers, so new LUNs are added automatically.
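
A rough sketch of that kind of setup (the local controller vendor/product strings and the WWID below are only examples, adjust them to your hardware):

Code:
blacklist {
    # blacklist the local disks/controller so only SAN LUNs get multipath maps
    device {
        vendor  "ATA"
        product ".*"
    }
}

blacklist_exceptions {
    device {
        vendor  "DataCore"
        product "Virtual Disk"
    }
}

multipaths {
    # optional: give a LUN a friendly alias; LUNs without an entry still get
    # an mpath device automatically, just named after their WWID
    multipath {
        wwid  "36000000000000000000000000000aaaa"   # placeholder, use the real WWID from multipath -ll
        alias my-lun-01
    }
}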

This is a long way to go coming from vCenter / ESXi, but it's free :D
For me, VMware is the long way around, with its 99% Linux copy/clone plus extra stuff on top.
 
I'm having similar struggles, also with a 3PAR using FC. I set up some volumes and all was OK, but then I increased the size of one on the 3PAR from 1.5 TB to 2 TB and I cannot get it to update on PVE. I have rescanned and restarted the multipath tools...

The last one in the list here should be 2 TB now...

What am I missing? Coming from VMware, this is a bit of a shock. I've played around with xcp-ng as well and it was able to do all this without any issue, through the GUI; in fact, as soon as you increase the volume on the 3PAR, xcp-ng sees the change and expands the storage. If only Veeam added xcp-ng support :confused:

root@pve-b2-010:~# multipath -ll
xxx-PVE-LVM1 (360002ac0000000000000c4900001b798) dm-28 3PARdata,VV
size=1.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
|- 3:0:0:0 sdc 8:32 active ready running
`- 3:0:1:0 sdf 8:80 active ready running
xxx-PVE-LVM2 (360002ac0000000000000c4aa0001b798) dm-34 3PARdata,VV
size=2.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
|- 3:0:0:2 sdj 8:144 active ready running
`- 3:0:1:2 sdi 8:128 active ready running
xxx-PVE-LVM1 (360002ac0000000000000c4940001b798) dm-29 3PARdata,VV
size=1.5T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
|- 3:0:0:1 sdd 8:48 active ready running
`- 3:0:1:1 sdg 8:96 active ready running
root@pve-b2-010:~#



Any help would be appreciated.. thanks
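
For what it's worth, a resize done on the array usually only shows up once each SCSI path of that LUN has been rescanned; roughly like this, using the path devices and map name of the 1.5 TB LUN from the output above (adjust them to whichever LUN you resized):

Code:
# rescan every path device that belongs to the resized LUN
echo 1 > /sys/block/sdd/device/rescan
echo 1 > /sys/block/sdg/device/rescan

# tell multipathd to grow the map to the new size
multipathd resize map xxx-PVE-LVM1

# if the LUN is an LVM PV, grow the PV too
pvresize /dev/mapper/xxx-PVE-LVM1

# verify
multipath -ll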
 
