Multipath and raw devices

pascal-shom
Sep 9, 2020
Hi,
I'm a newbie with Proxmox... so:

Currently I use the Xen hypervisor with the xl toolstack on Debian Buster. I have 300 VMs on 10 hypervisors.

For each VM I have one LUN as the system disk on a DataCore SANsymphony array, accessed through iSCSI with multipathing:

tcp: [1] 192.168.158.200:3260,1 iqn.2000-08.com.datacore:ssy2-fei4 (non-flash)
tcp: [2] 192.168.158.100:3260,1 iqn.2000-08.com.datacore:ssy1-fei4 (non-flash)
tcp: [3] 192.168.155.100:3260,1 iqn.2000-08.com.datacore:ssy1-fei1 (non-flash)
tcp: [4] 192.168.155.200:3260,1 iqn.2000-08.com.datacore:ssy2-fei1 (non-flash)
tcp: [5] 192.168.157.100:3260,1 iqn.2000-08.com.datacore:ssy1-fei3 (non-flash)
tcp: [6] 192.168.156.100:3260,1 iqn.2000-08.com.datacore:ssy1-fei2 (non-flash)
tcp: [7] 192.168.156.200:3260,1 iqn.2000-08.com.datacore:ssy2-fei2 (non-flash)
tcp: [8] 192.168.157.200:3260,1 iqn.2000-08.com.datacore:ssy2-fei3 (non-flash)
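(For reference, sessions like these are typically established with iscsiadm; a sketch using the portal and target of the first session above:)

iscsiadm -m discovery -t sendtargets -p 192.168.158.200:3260
iscsiadm -m node -T iqn.2000-08.com.datacore:ssy2-fei4 -p 192.168.158.200:3260 --login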

Example:
sl-pascal-int-root (360030d9058cf6f0512d8fdcb49eba550) dm-2 DataCore,sl-pascal-IR
size=16G features='0' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 7:0:0:227 sdl 8:176 active ready running
| `- 4:0:0:227 sdae 65:224 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 3:0:0:227 sdf 8:80 active ready running
`- 10:0:0:227 sdaj 66:48 active ready running
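(The friendly name presumably comes from an alias entry in /etc/multipath.conf, roughly like this, with the WWID taken from the output above:)

multipaths {
    multipath {
        wwid  360030d9058cf6f0512d8fdcb49eba550
        alias sl-pascal-int-root
    }
}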

lsblk /dev/mapper/sl-pascal-int-root
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sl-pascal-int-root 253:2 0 16G 0 mpath
|-sl-pascal-int-root-part1 253:4 0 128M 0 part
|-sl-pascal-int-root-part2 253:5 0 1K 0 part
|-sl-pascal-int-root-part5 253:6 0 6G 0 part
|-sl-pascal-int-root-part6 253:7 0 2G 0 part
|-sl-pascal-int-root-part7 253:8 0 2G 0 part
|-sl-pascal-int-root-part8 253:9 0 1G 0 part
`-sl-pascal-int-root-part9 253:10 0 4.9G 0 part

So, each LUN is a raw device.

I installed PVE 6.2-11, but how do I tell Proxmox, when creating a VM, to use /dev/mapper/sl-pascal-int-root as a hard disk?

Thanks
Pascal
 
This would be a LUN pass-through configuration from my perspective, which is a manual configuration following these steps:
https://pve.proxmox.com/wiki/Physical_disk_to_kvm
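The core of it is a single qm command; a minimal sketch, assuming an existing VM with the hypothetical ID 100 and a free scsi1 slot:

qm set 100 -scsi1 /dev/mapper/sl-pascal-int-root

Pointing at a /dev/disk/by-id/ path instead of the mapper name is also common, since it stays stable across reboots.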

Not sure if I would want to go down that route for a large set of machines...

You will have a ton of LUN IDs to manage, and this also results in a lot of paths on the DataCore end, which in my experience can lead to some side effects.

What is the reason behind this approach (on a broader scale)?
 
Thanks for the rapid response!

The first reason was performance; we have good experience with this method.
Snapshots and growing filesystems are managed by DataCore, and our backup tool uses /dev/mapper/xxx directly in read-only mode.

For migrating to Proxmox: if I can reuse my LUNs in the first step, that's much more convenient, because we also have BIG filesystems (64 TB) to attach to our VMs.

So I'm trying https://pve.proxmox.com/wiki/Physical_disk_to_kvm

Thanks again
 
Re,
Using the LUN /dev/mapper/sl-pascal-int-root works!
But when I want to migrate: can't migrate local disk '/dev/mapper/sl-pascal-int-root': local file/device
It's not a local device but an iSCSI device!?
 
Scaling via LUNs is favourable for the DataCore architecture. You for instance get a separate caching process per LUN, which speeds things up quite well. Many LUNs can have their downsides though (make sure you have enough memory in the DataCore nodes, otherwise you will end up in a place where you don't want to be...).
Big filesystems on one LUN are great for management, but will lack the scaling of multiple LUNs which I just mentioned.

Putting all this OT aside: it seems the /dev/mapper reference is not recognised as a shared device / does not exist on the other side. Have you checked it is available on the other host(s)?
Maybe this is not a viable option either. In that case someone from the Proxmox team should jump in to clarify whether I have sent you down the right track. :rolleyes:
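(A quick sanity check, with hypothetical host names pve1..pve3:)

for h in pve1 pve2 pve3; do ssh "$h" multipath -ll sl-pascal-int-root; done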
 
My DataCore servers have sufficient memory.
Yes, all hosts can use the /dev/mapper devices; they are all connected to DataCore, so I consider all the devices shared.
This method works fine with 10 Xen/xl hypervisors on Debian Buster (offline/online migration).
Thx
 
I don't see this flying out of the box.

It would be great to have a storage plugin for that, and it was suggested on the developer mailing list a while ago, but no patches were available, so it was not and will not be investigated further.
 
Argh...!
One iSCSI connection is supported, but two through multipath are not! In an HA architecture that's damaging.

The problem is that you need a connection per VM in your setup. If you used one connection per storage pool with LVM on top, you would have no problem with multipath iSCSI; it would work just as well as its superior counterpart, FC.
 
I'm missing something....

If I dedicate a big iSCSI LUN as a storage pool with LVM on top,

e.g. /dev/mapper/test-data:
test-data (360030d909d02320758dad395ca947f29) dm-0 DataCore,test-D
size=4.0T features='0' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 8:0:0:188 sdk 8:160 active ready running
| `- 5:0:0:188 sdac 65:192 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 3:0:0:188 sde 8:64 active ready running
`- 12:0:0:188 sdaj 66:48 active ready running

Each hypervisor can access this big LUN through /dev/mapper/test-data.

You mean I can create a VG with this PV, and declare it as an LVM storage in Datacenter?
 
You mean I can create a VG with this PV, and declare it as an LVM storage in Datacenter?

Yes. That is the default way to do this. Unfortunately, you will not have snapshots. If your backend supports thin provisioning and the iSCSI backend exposes it, you will also get potential thin provisioning on the LVM side (not LVM-thin, but LVM-thick with TRIM support).
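A minimal sketch of those steps, assuming the hypothetical VG name vg-datacore and storage ID datacore-lvm (pvcreate/vgcreate run once, on one node only):

pvcreate /dev/mapper/test-data
vgcreate vg-datacore /dev/mapper/test-data
pvesm add lvm datacore-lvm --vgname vg-datacore --shared 1 --content images

The --shared 1 flag tells PVE the volume group is visible on all nodes, which is what makes migration work without copying the disk.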
 
