[SOLVED] [Workaround] How to refer via API to a downloaded image file stored in the images folder?

eltydari

I am trying to automate creation of my Ubuntu VM in Proxmox using Ansible, proxmoxer, and cloudinit.

I've seen (old) tutorial videos download the Ubuntu cloudinit IMG file to the /var/lib/vz/images path and then successfully initialize the attached hard drive by using the volume name import-from=local:images/noble-server-cloudimg-amd64.img. However, when I try this myself I get: 500 Internal Server Error: unable to parse directory volume name 'images/noble-server-cloudimg-amd64.img'

What am I doing wrong? I know it has something to do with the right way of referring to the /var/lib/vz/images path, but documentation on this is pretty sparse and I've tried multiple ways of reaching this folder without any success.
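For context, this is roughly the proxmoxer call I'm making (host, credentials & most parameters are placeholders; the scsi0 line is the part in question):
Code:
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI(
    "pve.example.com",            # placeholder host
    user="ansible@pve",
    token_name="automation",      # placeholder API token
    token_value="<token-secret>",
    verify_ssl=False,
)

# Create the VM, initializing its disk from the downloaded cloud image.
proxmox.nodes("compute").qemu.create(
    vmid=103,
    name="ubuntu-cloud",
    memory=2048,
    cores=1,
    # this volume reference is what triggers the 500 error:
    scsi0="local:0,import-from=local:images/noble-server-cloudimg-amd64.img",
)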
 
import-from=local:images/noble-server-cloudimg-amd64.img
This does not look right & should probably be:
Code:
import-from=local:noble-server-cloudimg-amd64.img

# or

import-from=/var/lib/vz/images/noble-server-cloudimg-amd64.img

Disclaimer: I don't do anything like this, I'm just using intuition.
 
This does not look right & should probably be:
Code:
import-from=local:noble-server-cloudimg-amd64.img

# or

import-from=/var/lib/vz/images/noble-server-cloudimg-amd64.img

Thanks for the response! Unfortunately the first one did not work either (first screenshot attached).

For the second one, the background is that I am trying to use proxmoxer, i.e. the Proxmox REST API, to create the VM, and it seems you need to be root inside the Proxmox node in order to use absolute paths (see second screenshot).
 

Attachments

  • Screenshot 2025-01-30 154704.png (30.2 KB)
  • Screenshot 2025-01-30 155206.png (27.8 KB)
Proxmox REST API, to create the VM, and it seems you need to be root inside
I see what you are trying to do. That part seems correct as you describe it.

However, I will just point out that the location you used on local, /var/lib/vz/images/, is in fact the storage location for VM disk images (assuming you have the local storage backend set up for the disk images content type), & it follows the directory format /var/lib/vz/images/{VMID}/

What ought to (in theory!) work in your case: save that image file in /var/lib/vz/template/iso/ & then in your API config use:
import-from=local:iso/noble-server-cloudimg-amd64.img

BUT - this will also probably fail, as it will complain that the file has the wrong ISO type.

As far as I can see, the API backend (which does not have root permissions) does not (as of yet) have a working implementation for what you want. Read this thread.


A possible workaround:

1. On your node, create a "bogus" VM, let's call it VMID 999. Create it without any disks.

2. Use qm importdisk 999 {path}/noble-server-cloudimg-amd64.img local (the trailing local being the target storage).

3. Check the file name of the VM 999 disk that gets created. So maybe ls /var/lib/vz/images/999 (if you are using the local storage backend for VMs) will output: vm-999-disk-0.qcow2 or something like that.

4. Then in your API backend config you can use (see the sketch below):
import-from=local:999/vm-999-disk-0.qcow2

This workaround should work, & you could even add all your images that you use to that "bogus" VM & then use them accordingly.
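If it helps, step 4 via proxmoxer might look something like the sketch below (I don't use proxmoxer myself, so treat the exact call, names & parameters as assumptions):
Code:
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI("your-node-address", user="root@pam",
                     password="...", verify_ssl=False)

# 'local:0' asks PVE to allocate a new disk on the 'local' storage;
# import-from then copies the bogus VM's disk contents into it.
proxmox.nodes("your-node").qemu.create(
    vmid=100,                      # your new VM's ID
    name="my-new-vm",
    scsi0="local:0,import-from=local:999/vm-999-disk-0.qcow2",
)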

The above is nothing but a workaround, & I must agree; I would have expected PVE themselves to have incorporated an image import facility within the API backend itself.
 
Thanks! I'll try your suggestion.
& I must agree; I would have expected PVE themselves to have incorporated an image import facility within the API backend itself.
Yeah. I looked through that thread link you posted and it seems someone asked about this very thing in August and it's still in progress.
 
Keep us posted if you are successful & maybe change the thread title to:

How to refer via API to a downloaded image file stored in the images folder?

 
Keep us posted if you are successful & maybe change the thread title to:

How to refer via API to a downloaded image file stored in the images folder?

Thanks! Changed. Should I mark this thread as resolved (edit: after I test your workaround) or leave as is until a real solution is found?
 
or leave as is until a real solution is found?
That is a good question. The Oxford dictionary (via Google) defines "solve" with three options: "find an answer to, explanation for, or means of effectively dealing with (a problem or mystery)". I guess it depends on which one of those options you choose!

I would consider this as of yet unsolved. But the choice is yours - not mine.
 
Ok, now I'm running up against a different problem. It seems that this new disk I imported into the bogus VM (it's a RAW file) has a weird file state. When I try to use it to create my new VM, I get hit with this (coming from my Proxmox web UI):

Code:
Formatting '/var/lib/vz/images/103/vm-103-cloudinit.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=metadata compression_type=zlib size=4194304 lazy_refcounts=off refcount_bits=16
ide2: successfully created disk 'local:103/vm-103-cloudinit.qcow2,media=cdrom'
failed to stat '/var/lib/vz/images/1000/base-1000-disk-0.raw'
TASK ERROR: unable to create VM 103 - cannot import from 'local:1000/base-1000-disk-0.raw' - could not get size of local:1000/base-1000-disk-0.raw

I've made sure that the file exists in the proper location.
 
First up, based on this listing, it would appear that noble-server-cloudimg-amd64.img should be a "QCow2 UEFI/GPT Bootable disk image".

I am going to assume VM 103 is the VM you are trying to create via API & VM 1000 is the "bogus" VM we discussed above.

For my understanding:
'local:103/vm-103-cloudinit.qcow2,media=cdrom'
You are attaching a cloudinit cdrom to VM 103 in addition to the disk you wish to import from VM 1000 via import-from.
Is this your intention & why?

I'm not sure why you are showing that filename base-1000-disk-0.raw; AFAIK "base" is normally associated with clones & templates.
This may possibly have been caused by the disk not being correctly "attached" to VM 1000.

Could you show the following:

1. Exact command you used to qm importdisk that noble-server-cloudimg-amd64.img to the bogus VM 1000.
2. Output for: qm config 1000 .
3. Output for: ls /var/lib/vz/images/1000/ .
4. Output for: cat /etc/pve/storage.cfg .

Thanks.
 
First up, based on this listing, it would appear that noble-server-cloudimg-amd64.img should be a "QCow2 UEFI/GPT Bootable disk image".
You're right, I didn't see that one. However, after changing the format, it still isn't working. From my testing, it seems that PVE just isn't able to find the file; even if the file doesn't exist, the error output is identical...

You are attaching a cloudinit cdrom to VM 103 in addition to the disk you wish to import from VM 1000 via import-from.
Is this your intention & why?
According to the cloudinit docs (I may be misunderstanding), it seems you need both the cloudinit ISO and the vendor disk in order to initialize the VM correctly. However, I think this isn't relevant to the current issue I'm facing; I removed the ISO file and the error is the same.
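For reference, the relevant part of my create call looks roughly like this (paraphrased from my Ansible setup; proxmox is the client object from my earlier snippet, & the ciuser/ipconfig0 values are just illustrative):
Code:
proxmox.nodes("compute").qemu.create(
    vmid=103,
    name="ubuntu-cloud",
    ide2="local:cloudinit",   # the cloud-init config drive (the cdrom in the log above)
    scsi0="local:0,import-from=local:1000/vm-1000-disk-0.qcow2",
    ciuser="ubuntu",
    ipconfig0="ip=dhcp",
)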

I'm not sure why you are showing that filename base-1000-disk-0.raw; AFAIK "base" is normally associated with clones & templates.
This may possibly have been caused by the disk not being correctly "attached" to VM 1000.
This is because I converted my bogus VM into a template, thinking it would make for a more "permanent"-looking bogus VM... but even after converting it back to a regular VM ("vm-1000-disk-0.qcow2"), this is still what I get:

Code:
failed to stat '/var/lib/vz/images/1000/vm-1000-disk-0.qcow2'
TASK ERROR: unable to create VM 103 - cannot import from 'local:1000/vm-1000-disk-0.qcow2' - could not get size of local:1000/vm-1000-disk-0.qcow2

Could you show the following:

1. Exact command you used to qm importdisk that noble-server-cloudimg-amd64.img to the bogus VM 1000.
2. Output for: qm config 1000 .
3. Output for: ls /var/lib/vz/images/1000/ .
4. Output for: cat /etc/pve/storage.cfg .

1. This is the command I used and the output:
Code:
root@compute:~# qm disk import 1000 /var/lib/vz/images/noble-server-cloudimg-amd64.img local --format qcow2
importing disk '/var/lib/vz/images/noble-server-cloudimg-amd64.img' to VM 1000 ...
Formatting '/var/lib/vz/images/1000/vm-1000-disk-0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=metadata compression_type=zlib size=3758096384 lazy_refcounts=off refcount_bits=16
transferred 0.0 B of 3.5 GiB (0.00%)
transferred 35.8 MiB of 3.5 GiB (1.00%)
...
transferred 3.5 GiB of 3.5 GiB (100.00%)
unused0: successfully imported disk 'local:1000/vm-1000-disk-0.qcow2'

2.
Code:
root@compute:~# qm config 1000
boot: order=ide2
cores: 1
cpu: x86-64-v2-AES
ide2: none,media=cdrom
memory: 2048
meta: creation-qemu=9.0.2,ctime=1738552656
name: cloudinit
numa: 0
ostype: l26
scsihw: virtio-scsi-single
smbios1: uuid=7783c822-4639-4e92-b597-1e28f3e2fa94
sockets: 1
unused0: local:1000/vm-1000-disk-0.qcow2
vmgenid: 0d1023eb-4218-4625-8571-f5e74f3387dc

3.
Code:
root@compute:~# ls /var/lib/vz/images/1000/
vm-1000-disk-0.qcow2

4.
Code:
root@compute:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup,images,vztmpl,snippets,iso
        shared 1

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir
        nodes compute

zfspool: local-zfs
        pool rpool/data
        content rootdir,images
        nodes web
        sparse 1
 
Ah wow, sorry! It seems this was a mistake on my end... I have two nodes, and I accidentally targeted the wrong node for the creation of my VM, the one that did not have my bogus VM on it :')

I tried spinning up the VM on the same node and it worked! Thanks so much for your help and sorry for the confusion haha.

I'm going to mark this thread as resolved, because I think for this specific case there's at least a workaround. There seem to be other threads more suited to the root cause of the problem that appear to still be open.
 
Happy my workaround helped you. Maybe add the word [Workaround] to the beginning of the thread title; this way, other users searching for this thread will see/understand the implication of [Solved]!

BTW - after adding it to the bogus VM, you could delete the original noble-server-cloudimg-amd64.img to save space! Just make a note in the bogus VM of which disk it actually is.

Edit: Just to enhance the "workaround" a little so that the name of the image does not get changed: instead of using qm importdisk, you could just move the original file into the bogus VM's directory & then issue a qm rescan {VMID}. So in your case that would be mv {path}/noble-server-cloudimg-amd64.img /var/lib/vz/images/1000/ , followed by qm rescan 1000. This way the original name remains intact, so in the API import you would just use import-from=local:1000/{original_file_name} (in your case that may be import-from=local:1000/noble-server-cloudimg-amd64.img ). You may need to change that .img extension to .qcow2, but I'm not 100% sure it is necessary.
 
Just to enhance the "workaround" a little so that the name of the image does not get changed: instead of using qm importdisk, you could just move the original file into the bogus VM's directory & then issue a qm rescan {VMID}. ...
Actually this edit is perfect for my Ansible use case; importing the disk created a bit of a headache, because Ansible has no idea whether or not the disk was already created. With this, I can just download the disk directly into the VM directory once for Ansible to keep track of.

And yes, after testing, I can confirm the file needs to be .qcow2 for PVE to recognize it.
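For anyone automating the same thing, my download step now looks roughly like this (URL, paths & VMID are from my setup; adapt as needed):
Code:
import os
import urllib.request

IMG_URL = ("https://cloud-images.ubuntu.com/noble/current/"
           "noble-server-cloudimg-amd64.img")
# Saved with a .qcow2 extension, since that's what PVE needs (see above);
# the Ubuntu cloud image is already qcow2-formatted internally, so per my
# testing renaming appears to suffice.
DEST = "/var/lib/vz/images/1000/noble-server-cloudimg-amd64.qcow2"

# Skipping the download when the file exists keeps the Ansible run idempotent.
if not os.path.exists(DEST):
    urllib.request.urlretrieve(IMG_URL, DEST)
    # then run `qm rescan 1000` on the node so PVE registers the volume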
 