How to create a LUN-backed VM via public REST API?

dxun · New Member · Sep 18, 2022
I've been trying to use one of the Proxmox Terraform providers to provision a VM backed by a direct iSCSI LUN, but was not successful. The provider is currently unable to do this, so I started digging into the REST API to see how it might be done.

Perhaps I am not looking in the right places, but I can't find a straightforward way of doing this. What the UI does is basically swap out the Disk Size combo for a Disk Image selector when an iSCSI storage is selected (see screenshots).


I've summarised my investigation here - curious if there is a way to do this via the REST API?

EDIT: I started experimenting with possible workarounds, but what I've seen doesn't look great. For example, trying to clone a VM template (cloud-init) from local storage to an iSCSI storage looks like a dead end in the UI as well. You can select the new target storage, but how does one select a LUN?

 
The GUI works by calling the same API on the backend. You can examine the calls it makes via the Developer Tools in your browser.
It is not possible to "allocate" a LUN on an iSCSI device, so the LUN must be pre-created, pre-zoned, and visible to PVE in the output of "pvesm list [storage]".
Then you need the API equivalent of "qm set [vmid] --scsiX [options]". You can find the options in "man qm", in the API viewer for PVE, or reverse-engineer them from "qm config [vmid]" of a VM you configured via the GUI or CLI. A rough sketch follows below.
How any of it translates to the Terraform interface I don't know.
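
For illustration, a minimal sketch of that sequence via CLI and REST (the node name, VMID, storage name, volid, and token below are placeholders, not taken from a real setup):

Code:
# list the LUNs PVE can see on the iSCSI storage (volid is the first column)
pvesm list mystorage

# attach one LUN directly as a disk via the CLI
qm set 100 --scsi0 mystorage:0.0.0.scsi-36589cfc000000aaaaaaaaaaaaaaaaaa

# the REST equivalent the GUI uses (API token auth assumed)
curl -k -X POST \
    -H "Authorization: PVEAPIToken=root@pam!mytoken=xxxx-xxxx" \
    --data-urlencode "scsi0=mystorage:0.0.0.scsi-36589cfc000000aaaaaaaaaaaaaaaaaa" \
    https://pve.example.com:8006/api2/json/nodes/mynode/qemu/100/config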

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
It is not possible to "allocate" a LUN on an iSCSI device, so the LUN must be pre-created, pre-zoned, and visible to PVE in the output of "pvesm list [storage]".

Thank you, this is good info. The LUN is available to PVE (as seen in the screenshot), but how can one use it during cloning (see the edit above)? The UI doesn't allow selecting the LUN even though PVE can see it.

 
Thank you, this is good info. The LUN is available to PVE (as seen in the screenshot), but how can one use it during cloning (see the edit above)? The UI doesn't allow selecting the LUN even though PVE can see it.
You can't. The PVE clone operation inherently requires virtual disk allocation, and that is not possible with direct (or indirect) iSCSI-only storage.

You need storage that integrates with PVE via a PVE-aware plugin that supports metadata operations such as create, delete, expand, etc.

In summary, you will need a different type of storage to achieve what you want.


You can examine the available functions like so:

Code:
root@pve7test1:/usr/share/perl5/PVE/Storage# grep -i ^sub ISCSI*

For example:

Code:
ISCSIPlugin.pm:sub clone_image {

sub clone_image {
    my ($class, $scfg, $storeid, $volname, $vmid, $snap) = @_;

    die "can't clone images in iscsi storage\n";
}



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Hmm - I don't know these internals all that well, but let me see if I understand your point correctly: you're basically saying cloning images _to_ an iSCSI-based storage can't be done as it currently isn't supported? Is it a physical limitation (as in: it can never be done) or a Proxmox limitation (as in: work is needed in Proxmox to start supporting this)?

The part that confuses me is this:

You need storage that integrates with PVE via a PVE-aware plugin that supports metadata operations such as create, delete, expand, etc.

In summary, you will need a different type of storage to achieve what you want.

Which type of storage would that be? Where and how would we get (and/or write) the plugin that you mention?
 
On further thought - are we talking about a native Proxmox plugin on the _storage_ side of things as, for example, VAAI for VMware?
 
you're basically saying cloning images _to_ an iSCSI-based storage can't be done as it currently isn't supported
Yes. You can't create, delete, clone, or snapshot "disk images" on direct iSCSI storage.
Is it a physical limitation (as in: it can never be done) or a Proxmox limitation (as in: work is needed in Proxmox to start supporting this)?
Proxmox is software, iSCSI is software; anything can be done in software with enough dedication and budget.

Which type of storage would that be?
The list of out of the box supported storage types is here: https://pve.proxmox.com/wiki/Storage
Excluding iSCSI and PBS, all others support more "advanced" metadata operations. A slightly updated table can be found here: https://kb.blockbridge.com/guide/proxmox/#formats--content-types

Where and how would we get (and/or write) the plugin that you mention?
You can examine existing Plugins here: https://github.com/proxmox/pve-storage/tree/master/src/PVE/Storage
The guide to write code suitable for PVE can be found here: https://pve.proxmox.com/wiki/Perl_Style_Guide

Depending on the storage vendor that provides your iSCSI storage, there may be something out there already, e.g. for TrueNAS. If there is nothing, then you'd need to create, test, and maintain one on your own. A rough pointer to where plugins live is sketched below.
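
A sketch, assuming a stock PVE install (the Custom/ directory is the usual convention for third-party plugins; verify the method names against your version):

Code:
# in-tree plugins ship here
ls /usr/share/perl5/PVE/Storage/            # ISCSIPlugin.pm, ZFSPlugin.pm, LVMPlugin.pm, ...

# third-party plugins are typically installed under the Custom/ namespace
ls /usr/share/perl5/PVE/Storage/Custom/ 2>/dev/null

# the metadata hooks a plugin implements
grep -h '^sub \(alloc_image\|free_image\|clone_image\|volume_resize\)' \
    /usr/share/perl5/PVE/Storage/*.pm | sort -u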

On further thought - are we talking about a native Proxmox plugin on the _storage_ side of things as, for example, VAAI for VMware?
No, the PVE storage plugin lives on the PVE side. It's not a protocol extension like VAAI; it's more of a PVE-API-to-storage-API bridge.
iSCSI is a protocol for data transfer. As you can imagine, 10 iSCSI vendors will have 10 different ways of slicing and dicing their internal pools to present the final disk as iSCSI. And some of those vendors may have 3 different ways across their various product lines.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Thank you for your explanation and patience - this helps clear up some of these things, but not all of them. Hope you don't mind a few more questions.

Yes. You can't create, delete, clone, or snapshot "disk images" on direct iSCSI storage.

This is the most confusing part - what is different about this process when Proxmox is creating a new VM with iSCSI-based storage? Why can't the existing clone process take the whole available space as a default and then materialise the "disk image" on top of it?

The list of out of the box supported storage types is here: https://pve.proxmox.com/wiki/Storage
Excluding iSCSI and PBS, all others support more "advanced" metadata operations. A slightly updated table can be found here: https://kb.blockbridge.com/guide/proxmox/#formats--content-types

I see, thank you. So based on this (and given my storage backend, i.e. TrueNAS), my next best storage option for cloning cloud-init images would be ZFS over iSCSI? Unfortunately, it doesn't seem to support MPIO, which is a big part of the reason why I chose iSCSI.

As a summary, would it be accurate to say that, at the moment, the choice here is:
- iSCSI => deploy VMs manually, without cloud-init, but with MPIO
- ZFS over iSCSI => clone VMs from cloud-init image, but lose MPIO ?
 
what is different about this process when Proxmox is creating a new VM with iSCSI-based storage?
When you create a new VM and feed it an iSCSI LUN at _VM creation time_, the only option you have to place an OS on it is booting from an ISO. You can't use "importdisk" or similar methods.
Why can't the existing clone process take the whole available space as a default and then materialise the "disk image" on top of it?
The short answer: nobody has written the code to do so.

As a summary, would it be accurate to say that, at the moment, the choice here is:
- iSCSI => deploy VMs manually, without cloud-init, but with MPIO
- ZFS over iSCSI => clone VMs from cloud-init image, but lose MPIO ?
At a high level, those are your choices. At a more advanced level, you can still use option (a) if you forgo the PVE GUI and CLI and manually write the necessary image directly to the raw disk (i.e. use "dd" for raw files, or "qemu-img" for qcow). A sketch of what option (b) would look like follows below.
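
For completeness, option (b) would be defined roughly like so; a sketch only, where the storage name, pool, target, and the iscsiprovider value are placeholders you'd adapt to your TrueNAS setup:

Code:
# hypothetical ZFS-over-iSCSI storage definition via the CLI
pvesm add zfs truenas-zfs \
    --portal 10.0.50.1 \
    --target iqn.2005-10.org.freenas.ctl:delta-proxmox-target \
    --pool tank/vm \
    --iscsiprovider LIO \
    --sparse 1 \
    --content images

With that in place, PVE can allocate, clone, and resize zvols on the TrueNAS side, which is exactly the metadata capability plain iSCSI lacks.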


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Why can't the existing clone process take the whole available space as a default and then materialise the "disk image" on top of it?
The short answer: nobody has written the code to do so.

I must admit I find that a bit surprising. Please correct me if I am wrong, but I would expect iSCSI storage to be an enterprise feature that is very well supported. This seems like a very basic gap: for businesses running at any larger scale on SANs, deploying larger fleets of VMs is either a non-trivial custom scripting job or just isn't feasible. Imagine deploying a fleet of 100 VMs this way? Without Terraform, I can't imagine it. As you mentioned earlier, anything can be done in software with enough dedication and budget, but why go the hard and expensive route when most of these problems are already solved with (Open)TF?

As a summary, would it be accurate to say that, at the moment, the choice here is:
- iSCSI => deploy VMs manually, without cloud-init, but with MPIO
- ZFS over iSCSI => clone VMs from cloud-init image, but lose MPIO ?
At a high level, those are your choices. At a more advanced level, you can still use option (a) if you forgo the PVE GUI and CLI and manually write the necessary image directly to the raw disk (i.e. use "dd" for raw files, or "qemu-img" for qcow).

Ah, this is very interesting - how would this work at a high level? Apologies for the pedestrian questions, but this is an area I am not too familiar with.

Would I start from a cloud-init template such as the one below and then use "qemu-img" to write the image from the source drive to the destination iSCSI LUN manually, and _then_ clone the template? I don't see that working, as I'd somehow have to define a new LUN for each clone, but the UI doesn't allow me to change the hard drive type after the VM is cloned.
 
Here's what I have so far:

Code:
root@delta-vm:~# pvesm list TrueNAS
Volid                                                Format  Type             Size VMID
TrueNAS:0.0.0.scsi-36589cfc000000f18d78a50cff1dec18b raw     images    48318386176

root@delta-vm:~# ls /dev/disk/by-path/
ip-10.0.50.1:3260-iscsi-iqn.2005-10.org.freenas.ctl:delta-proxmox-target-lun-0  pci-0000:00:1f.2-ata-5.0-part1  pci-0000:00:1f.2-ata-5-part2
ip-10.0.60.1:3260-iscsi-iqn.2005-10.org.freenas.ctl:delta-proxmox-target-lun-0  pci-0000:00:1f.2-ata-5.0-part2  pci-0000:00:1f.2-ata-5-part3
pci-0000:00:1f.2-ata-5                                                          pci-0000:00:1f.2-ata-5.0-part3  pci-0000:00:1f.2-ata-6
pci-0000:00:1f.2-ata-5.0

We have the iSCSI volume and the iSCSI LUN is clearly visible.

I initially tried this (based on the docs here):
Code:
root@delta-vm:~# qemu-img convert -O raw /var/lib/vz/template/iso/Rocky-9-GenericCloud-Base.latest.x86_64.qcow2.img iscsi://10.0.50.1:3260/iscsi-iqn.2005-10.org.freenas.ctl/delta-proxmox-target-lun-0
qemu-img: iscsi://10.0.50.1:3260/iscsi-iqn.2005-10.org.freenas.ctl/delta-proxmox-target-lun-0: error while converting raw: Protocol driver 'iscsi' does not support image creation, and opening the image failed: Failed to parse URL : iscsi://10.0.50.1:3260/iscsi-iqn.2005-10.org.freenas.ctl/delta-proxmox-target-lun-0

This is a confusing error message to me - it isn't clear whether the protocol itself can't create the image or whether the URL provided is wrong.

I tried adding a size argument to the `qemu-img` invocation, and it looks like the URL is indeed somehow wrong:

Code:
root@delta-vm:~# qemu-img convert -O raw /var/lib/vz/template/iso/Rocky-9-GenericCloud-Base.latest.x86_64.qcow2.img iscsi://10.0.50.1:3260/iscsi-iqn.2005-10.org.freenas.ctl/delta-proxmox-target-lun-0 10G
qemu-img: Could not open 'iscsi://10.0.50.1:3260/iscsi-iqn.2005-10.org.freenas.ctl/delta-proxmox-target-lun-0': Failed to parse URL : iscsi://10.0.50.1:3260/iscsi-iqn.2005-10.org.freenas.ctl/delta-proxmox-target-lun-0

I have tried many permutations of the `iscsi://` protocol URI but could not figure out its required format (I've used the info from here as a guide).
Any clues as to what I might be doing wrong?
 
I must admit I find that a bit surprising.
That's because it seems we are not on the same page technology-wise.
Please correct me if I am wrong, but I would expect iSCSI storage to be an enterprise feature that is very well supported.
And it is _very well_ supported. Although Microsoft arguably has a fuller iSCSI implementation, Linux (on which PVE is based) has had iSCSI support for around 20 years.
You seem to be confusing iSCSI, a protocol designed for block data transfer and ratified by RFCs, with vendor-specific APIs responsible for metadata operations around proprietary disk manipulation (create, delete, expand, shrink, snapshot, clone, etc.).

Have you ever heard of VMware VVols? Assuming you are familiar with the technology, can you name one storage product for which _VMware_ wrote VVol support, excluding VMware's own vSAN?
The answer is that such storage doesn't exist. It has always been the storage vendor's responsibility to implement VVol support based on the API made available by VMware.
Proxmox "disk images" are roughly equivalent to VMware VVols.

Proxmox made their storage subsystem available via an API. It is now up to storage vendors to implement support for their specific storage products. We, Blockbridge, did it by investing time, money, and resources, including 24x7 interoperability testing across multiple PVE versions.

at any larger scale on SANs, deploying larger fleets of VMs is either a non-trivial custom scripting job or just isn't feasible. Imagine deploying a fleet of 100 VMs this way?
That business should use Proxmox with Proxmox-aware storage.

The device you want to be writing to has a path of /dev/xxxx
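
In other words, skip the iscsi:// URL entirely and write to the kernel block device that your "ls /dev/disk/by-path/" output already shows. A sketch using the paths from your post; double-check the target device first, since this overwrites the LUN:

Code:
# write the cloud image directly onto the LUN's block device (destructive!)
# -n: don't try to create the target, the block device already exists
qemu-img convert -n -f qcow2 -O raw \
    /var/lib/vz/template/iso/Rocky-9-GenericCloud-Base.latest.x86_64.qcow2.img \
    /dev/disk/by-path/ip-10.0.50.1:3260-iscsi-iqn.2005-10.org.freenas.ctl:delta-proxmox-target-lun-0

# for an already-raw source image, plain dd works as well
# dd if=image.raw of=/dev/disk/by-path/ip-10.0.50.1:3260-... bs=1M conv=fsync status=progress

(For reference, qemu's libiscsi URL form puts the target IQN and the LUN number in separate path components, something like iscsi://10.0.50.1:3260/iqn.2005-10.org.freenas.ctl:delta-proxmox-target/0, which is why your URL failed to parse. Writing to the /dev path avoids that machinery entirely.)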


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
