Bug with the Proxmox REST API?

whiggs

New Member
Dec 11, 2024
Hey everyone. I think I may have found a bug in the Proxmox API. Consider the PowerShell script below, which I wrote using the Corsinvest.ProxmoxVE.Api PowerShell module to create a VM:

Code:
# Authenticate with an API token created for root@pam (secret redacted)
$apikey = 'root@pam!api=Idontthinkso'
$key = Connect-PveCluster -HostsAndPorts 192.168.1.15:8006 -SkipCertificateCheck -ApiToken $apikey

# SATA disks: a new 100 GiB disk, plus one imported from an existing VHD image
$sata = @{}
$sata.Add("0", "fourt:100")
$sata.Add("1", "fourt:0,import-from=/mnt/pve/iso/template/iso/SERVER.VHD")

# VirtIO driver ISO attached as a CD-ROM
$ide = @{}
$ide.Add("2", "big:iso/virtio-win-0.1.266.iso,media=cdrom")

$usb = @{}
$usb.Add("0", "spice,usb3=1")

$net = @{}
$net.Add("0", "model=e1000,bridge=vmbr0,firewall=1")

$vmid = 100
New-PveQemu -Memory "4096" -Bios ovmf -Cpu host -Cores 4 -Machine q35 -Node prox -Pool res -Ostype win10 -Scsihw virtio-scsi-pci -Vmid $vmid -Name "PB99976-P05" -Storage fourt -Vga "type=qxl,memory=128" -NetN $net -Efidisk0 "fourt:1,efitype=4m,pre-enrolled-keys=1" -Agent "1" -SataN $sata -IdeN $ide -Boot "order=sata1;sata0;ide2" -UsbN $usb

As you can see, one of the disks I define imports an existing disk image. I point this out because it is the part of the script referenced in the error message. When I attempt to run the script, the Proxmox API returns the following error:

Code:
Only root can pass arbitrary filesystem paths. at /usr/share/perl5/PVE/Storage.pm line 561.

The path the error refers to is the one I passed to the "import-from" property when creating one of the disks. I find this odd because, as you can see for yourself, the API token used in my script (redacted, of course) was created to authenticate the root user. As a test, I ran the same script again, except I authenticated to the Proxmox server with the root account's actual credentials, and the script worked perfectly. So it seems that, as far as the permission to "pass an arbitrary filesystem path" to the Proxmox API is concerned, an API token generated by the root user does not carry the root user's permissions.
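
For what it's worth, the same failure can be reproduced without the PowerShell module. Here is a minimal sketch of the equivalent raw API call, using the host, node, storage, and path from my script above (the token secret is a placeholder):

Bash:
# minimal reproduction sketch -- host/node/storage/path are taken from the script above
curl -k 'https://192.168.1.15:8006/api2/json/nodes/prox/qemu' \
    -H 'Authorization: PVEAPIToken=root@pam!api=<secret>' \
    --data-urlencode 'vmid=100' \
    --data-urlencode 'sata1=fourt:0,import-from=/mnt/pve/iso/template/iso/SERVER.VHD'
# => Only root can pass arbitrary filesystem paths. at /usr/share/perl5/PVE/Storage.pm line 561.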
 
Hello!

It seems that this is a known bug/problem and that there is a patch for it that never really got applied.
 
Thank you for your reply. OK, well, I am glad the Proxmox team is aware that this is a bug, but can I get you to clarify what you mean by "there is a patch that never got applied"? Because when I read that, what I hear is "there is a patch that fixes the issue, but for reasons that I am sure are super duper important, nobody has bothered to make it publicly available." Is that accurate?
 
You should be able to avoid needing the root user by moving the volume into the import directory of the storage you're using. You can then refer to the volume in import-from via its volume ID.

An example of this would be:
Code:
import-from=<storage>:100/vm-100-disk-0.qcow2
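
To make that concrete, here is a sketch of what the move might look like for a directory-backed storage (the storage name and source path are placeholders):

Bash:
# assumes a dir-type storage mounted at /mnt/pve/<storage>
mkdir -p /mnt/pve/<storage>/images/100
mv /path/to/vm-100-disk-0.qcow2 /mnt/pve/<storage>/images/100/
pvesm list <storage>   # should now list <storage>:100/vm-100-disk-0.qcow2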
 

That really doesn't address the root problem, though, does it? And you did not answer my question. If Proxmox is going to be marketed as an enterprise tool, there is an expectation of consistency. Furthermore, your workaround does not work for my particular scenario. Take a look at the file specified in the "import-from" property: it has a VHD extension, because the image was captured with Microsoft Sysinternals' Disk2vhd tool. I believe it is because of the file type that Proxmox does not view it as a valid disk image, and therefore does not let me shorten the path in the manner you specified; I normally have to convert the image from VHD to a format Proxmox recognizes. Take a look at what happens when I try. The VHD in question is stored in the directory where all my ISO files are kept:

[screenshots: the VHD files sitting in the ISO directory of the "big" datastore]

So, the VHD files are stored on the "big" datastore, right? Yet when I try to specify the volume in the manner you described (I have tried both "import-from=big:iso/SERVER.VHD" and "import-from=big:templates/iso/SERVER.VHD"), I get the error "unable to parse directory volume name."
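
For reference, the conversion step I mentioned normally looks something like this (the target directory and VMID are illustrative):

Bash:
# convert the Disk2vhd image to qcow2 so Proxmox recognizes it
# (-f vpc is qemu-img's driver name for VHD)
qemu-img convert -f vpc -O qcow2 \
    /mnt/pve/iso/template/iso/SERVER.VHD \
    /mnt/pve/big/images/999/SERVER.qcow2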
 
How to use Disk Images as templates for VMs

Code:
# the image file on disk:
/var/lib/vz/images/999/ubuntu-24.04-minimal-cloudimg-amd64.qcow2

# the corresponding disk parameter, referencing it by volume ID:
scsi0=pool0:0,import-from=local:999/ubuntu-24.04-minimal-cloudimg-amd64.qcow2
  1. Images MUST have the qcow2 suffix (or have a --format qcow2 flag or format=qcow2 parameter)
  2. Images MUST be in Disk Image storage (NOT ISO image or Container template storage)
    (ISO image storage will incorrectly report .img files as ISOs)
  3. You MUST have a dummy VM
    • qm create 999 # defaults, no storage, not started, no start at boot
    • 999 is used conventionally as the vmid for this purpose
    • the image does NOT need to be imported or referenced by the VM
    • the VM MUST exist on the node on which the real VM is being created, even if the storage is shared
  4. The size must be set to 0
    (for example: scsi0=pool0:0,import-from=...)
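
Putting those requirements together, a minimal end-to-end sketch with qm (the target VMID, VM name, and storage names are illustrative, and local must be configured for Disk Image content as in the paths above):

Bash:
# 1. create the dummy VM that will own the template image
qm create 999

# 2. place the qcow2 in the dummy VM's image directory
cp ubuntu-24.04-minimal-cloudimg-amd64.qcow2 /var/lib/vz/images/999/
pvesm list local   # should now show local:999/ubuntu-24.04-minimal-cloudimg-amd64.qcow2

# 3. create the real VM, importing the image at size 0
qm create 101 --name demo \
    --scsi0 pool0:0,import-from=local:999/ubuntu-24.04-minimal-cloudimg-amd64.qcow2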


Disk Images vs ISOs & Templates

You should be able to see which pools and files are available via pvesm. But notice that disk images don't show up in ISO image or Container template storage types.

Code:
# common file path locations for different storage types

/var/lib/vz/template/iso/
/var/lib/vz/template/cache/
/var/lib/vz/images/999/
/mnt/pve/cephfs0/template/iso/
/mnt/pve/cephfs0/template/cache/
/mnt/pve/cephfs0/images/999/
/tank1/images/

Bash:
pvesm status

Name                        Type     Status           Total            Used       Available        %
cephfs0                   cephfs     active      1997148160        36933632      1960214528    1.85%
cephfs0-images               dir     active      1994551296        37851136      1956700160    1.90%
local                        dir     active        20466256        10542572         8858724   51.51%
local-lvm                lvmthin     active        74178560               0        74178560    0.00%
pbs1                         pbs     active      2112646044        41678028      1963577504    1.97%
tank1                    zfspool     active      1511784448        39010800      1472773648    2.58%

Bash:
pvesm list cephfs0

Volid                                                             Format  Type            Size VMID
cephfs0:iso/alpine-virt-3.22.1-x86_64.iso                         iso     iso         68157440
cephfs0:iso/ubuntu-24.04-minimal-cloudimg-amd64.img               iso     iso        255393792
cephfs0:vztmpl/devuan-5.0-standard_5.0_amd64.tar.gz               tgz     vztmpl     115968289
cephfs0:vztmpl/rockylinux-9-default_20240912_amd64.tar.xz         txz     vztmpl     104371140
cephfs0:vztmpl/ubuntu-24.04-standard_24.04-2_amd64.tar.zst        tzst    vztmpl     141589318

Bash:
pvesm list cephfs0-images

Volid                                                            Format  Type            Size VMID
cephfs0-images:999/ubuntu-24.04-minimal-cloudimg-amd64.qcow2     qcow2   images    3758096384 999

API Example

Bash:
g_vmid=1234
curl --fail-with-body 'https://pvec-dc1.example.com/api2/json/nodes/pve3/qemu' \
    -H "Authorization: PVEAPIToken=${PROXMOX_TOKEN_ID}=${PROXMOX_TOKEN_SECRET}" \
    -H 'Content-Type: application/x-www-form-urlencoded; charset=UTF-8' \
    --data-urlencode "vmid=${g_vmid}" \
    --data-urlencode 'name=ai-runner' \
    --data-urlencode 'pool=my-pool' \
    --data-urlencode 'onboot=1' \
    --data-urlencode 'ide2=none,media=cdrom' \
    --data-urlencode 'ostype=l26' \
    --data-urlencode 'machine=q35' \
    --data-urlencode 'bios=ovmf' \
    --data-urlencode 'scsihw=virtio-scsi-single' \
    --data-urlencode 'agent=1' \
    --data-urlencode 'efidisk0=pool-ex1:1,efitype=4m,pre-enrolled-keys=0' \
    --data-urlencode 'scsi0=pool-ex1:0,import-from=cephfs0-images:999/ubuntu-24.04-minimal-cloudimg-amd64.qcow2,discard=on,ssd=on,iothread=on' \
    --data-urlencode 'scsi1=pool-ex1:20,discard=on,ssd=on,iothread=on' \
    --data-urlencode 'sockets=2' \
    --data-urlencode 'cores=20' \
    --data-urlencode 'numa=1' \
    --data-urlencode 'cpu=x86-64-v2-AES' \
    --data-urlencode 'memory=49152' \
    --data-urlencode 'net0=virtio,bridge=ex1vnet,firewall=1'

Troubleshooting

Why is the imported volume much larger?

Bash:
qemu-img info /mnt/pve/cephfs0/images/999/ubuntu-24.04-minimal-cloudimg-amd64.qcow2

file format: qcow2
virtual size: 3.5 GiB (3758096384 bytes)
disk size: 244 MiB
...

The download size is the used disk size (i.e. files written), but the import size will be the virtual size (i.e. the extents of the partition table). If imported into a thin volume, it will still only take up the used size even though it reports the full size.

Error: format 'qcow2' is not supported by the target storage - using 'raw' instead

qcow2 is only supported on `dir` storage types (a path, or an ext4 volume or an nfs mount, but NOT Ceph or ZFS). You can create a dir storage type by adding any mounted path (such as /mnt/pve/cephfs0 or /tank1) under Datacenter Storage.
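
For example, exposing an already-mounted path as a dir storage from the CLI might look like this (the storage name and content list here are assumptions):

Bash:
# add the mounted CephFS path as a dir storage that can hold disk images
pvesm add dir cephfs0-images --path /mnt/pve/cephfs0 --content images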

Error: unable to parse directory volume name 'ubuntu-24.04-minimal-cloudimg-amd64.qcow2'

`images` storage always requires a VMID associated with an existing VM, and the extension must be .qcow2 (NOT .img).

/mnt/pve/cephfs0/images/999/ corresponds to VM 999 and to the volume ID cephfs0-images:999/ubuntu-24.04-minimal-cloudimg-amd64.qcow2, used like:

Code:
scsi0=pool0:0,import-from=cephfs0-images:999/ubuntu-24.04-minimal-cloudimg-amd64.qcow2,discard=on,ssd=on,iothread=on

Error: scsi0: 'import-from' requires special syntax - use <storage ID>:0,import-from=<source>

You can't specify the disk size; you must set it to 0. The disk will be imported at the full virtual size of the image. To change that, you must `qemu-img resize` the image before you import it, or grow the disk after it has been imported.
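
For example, growing the source image before import might look like this (path and size are illustrative):

Bash:
# grow the image's virtual size to 32G before importing
qemu-img resize /var/lib/vz/images/999/ubuntu-24.04-minimal-cloudimg-amd64.qcow2 32G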

Error: scsi0: cephfs0:iso/ubuntu-24.04-minimal-cloudimg-amd64.qcow2 has wrong type 'iso' - needs to be 'images' or 'import'

You can't use template storage types (ISO Image, Container Template); it must be a Disk Image type storage, which can simply be any mounted storage added by its path (such as /mnt/pve/cephfs0) under Datacenter Storage.

Error: Permission check failed (/vms/999, VM.Clone)

The token, group, or user needs PVETemplateUser on /vms/999.
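
Granting that from the CLI might look like this (the token ID is a placeholder):

Bash:
# give an API token the PVETemplateUser role on the dummy VM
pveum acl modify /vms/999 --tokens 'user@pam!mytoken' --roles PVETemplateUser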

TASK ERROR: unable to create VM 101 - cannot import from 'cephfs0-images:999/ubuntu-24.04-minimal-cloudimg-amd64.qcow2' - owner VM 999 not on local node

Even if the underlying storage is shared or replicates, each node needs its own dummy VM with its own id.

Perhaps 99901 for pve1, 99902 for pve2, etc.
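
A sketch of that convention, run once per node (node names are illustrative):

Bash:
# each node gets its own dummy VM to own its local copy of the image
ssh root@pve1 'qm create 99901'
ssh root@pve2 'qm create 99902'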

HTH

 
The last time I had to deal with importing QCOW files, "qm importdisk" was not hooked up to PVE storage pools, i.e. one had to use absolute paths for import, which is only available to root.

The template/iso location is somewhat "old-fashioned", i.e. legacy-oriented. It is not a suitable place for vhd, qcow, etc.:
Code:
root@pve9r1-nvme-host2:~# pvesm list local
Volid                                                  Format  Type         Size VMID
local:iso/file.iso                                     iso     iso             0
local:vztmpl/alpine-3.14-default_20210623_amd64.tar.xz txz     vztmpl    2495924
local:vztmpl/alpine-3.18-default_20230607_amd64.tar.xz txz     vztmpl    2983844

root@pve9r1-nvme-host2:~# ls -al /var/lib/vz/template/iso/
total 8
drwxr-xr-x 2 root root 4096 Sep  5 13:41 .
drwxr-xr-x 4 root root 4096 Jun 20  2024 ..
-rw-r--r-- 1 root root    0 Sep  5 13:38 file1.vhd
-rw-r--r-- 1 root root    0 Sep  5 13:41 file2.qcow
-rw-r--r-- 1 root root    0 Sep  5 13:38 file.iso


Based on the following description, import-from does allow both volume-ID and absolute-path specification:
Code:
my %import_from_fmt = (
    'import-from' => {
        type => 'string',
        format => 'pve-volume-id-or-absolute-path',
        format_description => 'source volume',
        description => "Create a new disk, importing from this source (volume ID or absolute "
            . "path). When an absolute path is specified, it's up to you to ensure that the source "
            . "is not actively used by another process during the import!",
        optional => 1,
    },
);
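
In other words, both of these forms are accepted by the schema, though (per this thread) only root@pam may use the absolute-path form; the values below are illustrative:

Code:
# volume-ID form (works for API tokens)
scsi0=pool0:0,import-from=cephfs0-images:999/ubuntu-24.04-minimal-cloudimg-amd64.qcow2

# absolute-path form (root@pam only)
scsi0=pool0:0,import-from=/var/lib/vz/images/999/ubuntu-24.04-minimal-cloudimg-amd64.qcow2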


@whiggs for the reasons why, you can read comment 12 in the bug report.


Cheers

