Using cloud images

CarlFK

Sep 20, 2022
What are the minimum steps to use an image like:
https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-genericcloud-amd64.qcow2

I found:
https://forum.proxmox.com/threads/new-vm-from-cloud-init-image-via-api.111091/
"There is no dedicated place in Proxmox for qcow/raw/etc type pre-built images (really templates), like the ones provided by every Linux distro for cloud deployments."
and ER:
https://bugzilla.proxmox.com/show_bug.cgi?id=4141 allow disk/VM import from uploaded/downloaded images/ovf via API

It seems there isn't support for it right now (v8.1.3), even in the web UI: there is no place to enter a qcow2 URL like there is for ISO files.

I want to start simple and add features later.
This post seemed like what I was looking for (well, it's from 2021, so not PVE v8):
https://codingpackets.com/blog/proxmox-import-and-use-cloud-images/
"libguestfs-tools package allows you to install packages into an image without booting it up. The libguestfs-tools package conflicts with Proxmox."

Yeah, none of that for now please.

so on the host, I'm guessing I start with
Bash:
wget https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-genericcloud-amd64.qcow2

Then what?

I expect something about a cloud-init file. From what I have found, I create a config file, then create an .iso containing that file; I think I can find those steps. If we can skip this for now and just boot the image, good. If I can't log in because I haven't configured a user, at least I'll know I completed the first few steps successfully.
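For reference, the "config file packed into an ISO" approach is cloud-init's NoCloud datasource. A minimal sketch, assuming the genisoimage package is available (the username, hostname, and key below are placeholders, and as shown later in this thread, Proxmox can generate the cloud-init drive itself so this ISO is not strictly needed):

```shell
# Minimal NoCloud seed: user-data + meta-data packed into an ISO.
# Username, hostname, and SSH key are placeholders.
cat > user-data <<'EOF'
#cloud-config
users:
  - name: debian
    ssh_authorized_keys:
      - ssh-rsa AAAA... replace-with-your-public-key
    sudo: ALL=(ALL) NOPASSWD:ALL
EOF

cat > meta-data <<'EOF'
instance-id: iid-local01
local-hostname: cloud-test
EOF

# The volume id MUST be "cidata" for cloud-init to find the seed
genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data
```

Attach seed.iso as a CD-ROM and the image picks up the config on first boot.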
 
Try this: create a "normal" VM - without assigning a new disk. Then import your downloaded file via
Code:
~# qm disk import 102 debian-12-genericcloud-amd64.qcow2 local-zfs
importing disk 'debian-12-genericcloud-amd64.qcow2' to VM 102 ...
transferred 0.0 B of 2.0 GiB (0.00%)           
transferred 20.5 MiB of 2.0 GiB (1.00%)                            
transferred 41.2 MiB of 2.0 GiB (2.01%)
...
Successfully imported disk as 'unused0:local-zfs:vm-102-disk-0'

Now go to your VM --> Hardware and add the now-visible "Unused Disk 0".

The next (and last) step is to enable "scsi0" in VM --> Options --> Boot Order.

Good luck :-)
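Collected into commands, the whole procedure looks roughly like this (a sketch; VM ID 102, the VM name, and storage local-zfs match the example above, so adjust to your setup):

```shell
# 1. Create a "normal" VM without assigning a new disk
qm create 102 --name cloud-test --memory 2048 --cores 1 \
    --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-single --ostype l26

# 2. Import the downloaded image; it appears as "unused0"
qm disk import 102 debian-12-genericcloud-amd64.qcow2 local-zfs

# 3. Attach the imported disk as scsi0 (GUI: Hardware -> Unused Disk 0)
qm set 102 --scsi0 local-zfs:vm-102-disk-0,iothread=1

# 4. Put scsi0 first in the boot order (GUI: Options -> Boot Order)
qm set 102 --boot order=scsi0
```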
 
root@pm2:/home/videoteam/clouds# qm disk import 103 debian-12-genericcloud-amd64.qcow2 local
importing disk 'debian-12-genericcloud-amd64.qcow2' to VM 103 ...
Formatting '/var/lib/vz/images/103/vm-103-disk-0.raw', fmt=raw size=2147483648 preallocation=off
transferred 0.0 B of 2.0 GiB (0.00%)
transferred 20.5 MiB of 2.0 GiB (1.00%)
...
Successfully imported disk as 'unused0:local:103/vm-103-disk-0.raw'

Added it, made it bootable (turned off the IDE and PXE devices).

Start the VM, see: "No bootable device. retrying in 5 seconds..."
and it loops.
 
For comparison, my dummy VM via ~# cat /etc/pve/local/qemu-server/102.conf:
Code:
boot: 
cores: 1
cpu: x86-64-v2-AES
ide2: none,media=cdrom
memory: 2048
meta: creation-qemu=8.1.2,ctime=1703186721
name: asdf
net0: virtio=BC:24:11:CC:75:5A,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-zfs:vm-102-disk-0,iothread=1,size=2G
scsihw: virtio-scsi-single
smbios1: uuid=60ff2b5b-e37a-486d-b02e-d98a25ac3922
sockets: 1
vmgenid: 0ccb9db3-e77a-4634-89d4-7a75008a3531
 
Code:
root@pm2:~# cat /etc/pve/local/qemu-server/103.conf
boot: order=scsi0
cores: 1
cpu: x86-64-v2-AES
ide2: none,media=cdrom
memory: 8096
meta: creation-qemu=8.1.2,ctime=1703203746
name: cloud1
net0: virtio=BC:24:11:1F:8F:FC,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local:103/vm-103-disk-1.qcow2,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=47af74cc-0fae-4791-9a34-c3f28cb98632
sockets: 1
unused0: local:103/vm-103-disk-0.raw
vmgenid: 98edc819-9040-435e-b5dc-aad7e260a480
 
update: it boots.

I replaced
Code:
scsi0: local:103/vm-103-disk-1.qcow2,iothread=1,size=32G
unused0: local:103/vm-103-disk-0.raw
with
Code:
scsi0: local:103/vm-103-disk-0.raw,iothread=1,size=32G

full file:
Code:
root@pm2:~# cat /etc/pve/local/qemu-server/103.conf
boot: order=scsi0;ide2;net0
cores: 1
cpu: x86-64-v2-AES
ide2: none,media=cdrom
memory: 8096
meta: creation-qemu=8.1.2,ctime=1703203746
name: cloud1
net0: virtio=BC:24:11:1F:8F:FC,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local:103/vm-103-disk-0.raw,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=47af74cc-0fae-4791-9a34-c3f28cb98632
sockets: 1
vmgenid: 98edc819-9040-435e-b5dc-aad7e260a480


So, I have a login prompt. Not exactly sure what the right way to get this far is, but I'll sort that later.

Now what?
cloud-init time, right?
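Proxmox can generate the cloud-init drive itself, so a hand-built seed ISO isn't needed. A minimal sketch (the user name and key path are examples):

```shell
# Add a cloud-init drive on the same storage
qm set 103 --ide2 local:cloudinit

# Minimal user config: user, SSH key, DHCP networking
qm set 103 --ciuser debian --sshkeys ~/.ssh/id_rsa.pub
qm set 103 --ipconfig0 ip=dhcp

# Inspect the user-data that will be passed to the guest
qm cloudinit dump 103 user
```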
 
Nice.

Code:
./download-cloud-image.sh https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2
good.
Code:
./new-vm.sh 104 --image /var/lib/vz/template/iso/debian-12-generic-amd64.qcow2 --name c1 --sshkey ~/.ssh/id_rsa.pub

It gets to here:
Code:
pvesm alloc local-zfs 104 vm-104-efi 1M
storage 'local-zfs' does not exist

I guessed at a fix:

Code:
pvesm alloc local 104 vm-104-efi 1M
unable to parse volume filename 'vm-104-efi'

Should I log an issue on GitHub?
 
Code:
videoteam@pm2:~/Proxmox-Automation$ sudo pvesm alloc local 104 vm-104-disk-0.raw 1M
Formatting '/var/lib/vz/images/104/vm-104-disk-0.raw', fmt=raw size=1048576 preallocation=off
successfully created 'local:104/vm-104-disk-0.raw'


Code:
root@pm2:~# pvesm alloc local 104 foo --format raw 1M
unable to parse volume filename 'foo'

I guess it wants .raw?
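That guess looks right: directory-type storages like "local" store volumes as plain files and need a parseable name of the form vm-<vmid>-<name>.<ext>, while ZFS storages create zvols and take no extension. A sketch of both forms:

```shell
# Directory storage ("local"): an extension such as .raw is required
pvesm alloc local 104 vm-104-efi.raw 1M

# ZFS storage ("local-zfs"): no extension, the volume is a zvol
pvesm alloc local-zfs 104 vm-104-efi 1M
```

This is why the script's local-zfs volume names fail verbatim on a directory storage.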
 
Code:
pvesm alloc local-zfs 104 vm-104-efi 1M
storage 'local-zfs' does not exist
Sorry, Carl. Up until now, I've only used Proxmox VE with ZFS filesystems.

From the error message, I presume you're using something different? LVM, maybe?
 
I made it work, maybe.

Code:
# Disk 0: EFI
-pvesm alloc local-zfs $VM_ID vm-$VM_ID-efi 1M
-qm set $VM_ID --efidisk0 local-zfs:vm-$VM_ID-efi
+pvesm alloc local $VM_ID vm-$VM_ID-efi.raw 1M
+qm set $VM_ID --efidisk0 /var/lib/vz/images/${VM_ID}/vm-$VM_ID-efi.raw
 
 # Disk 1: Main disk
-qm importdisk $VM_ID $VM_IMAGE local-zfs
-qm set $VM_ID --scsi1 local-zfs:vm-$VM_ID-disk-0,discard=on,iothread=1,ssd=1 \
+qm importdisk $VM_ID $VM_IMAGE local
+qm set $VM_ID --scsi1 /var/lib/vz/images/${VM_ID}/vm-$VM_ID-disk-0.raw,discard=on,iothread=1,ssd=1 \
     --boot c \
     --bootdisk scsi1
+
 qm resize $VM_ID scsi1 $VM_DISKSIZE
 
 # Disk 2: cloud-init
-qm set $VM_ID --ide2 local-zfs:cloudinit
+qm set $VM_ID --ide2 local:cloudinit


https://github.com/CarlFK/Proxmox-Automation/blob/master/new-vm.sh


however:

Code:
+ qm cloudinit dump 104 user
+ INTERFACE_NAME=eth0
+ cat
+ '[' 0 -eq 1 ']'
+ qm start 104
generating cloud-init ISO
+ echo 'Waiting for VM 104...'
Waiting for VM 104...
+ qm agent 104 ping
QEMU guest agent is not running
+ sleep 2
+ qm agent 104 ping
QEMU guest agent is not running

The VM did boot, then reboot and get to a login:

but I never gave it a cloud-init.yml file so I'm ... lost.
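A side note on "QEMU guest agent is not running": that is expected with a stock Debian cloud image, since qemu-guest-agent isn't preinstalled. Cloud-init can install it on first boot via a custom user-data snippet; a sketch, assuming the "snippets" content type is enabled on the local storage (the path and VM ID are examples):

```shell
# Put a custom user-data snippet on a storage with "snippets" enabled
cat > /var/lib/vz/snippets/user-data.yaml <<'EOF'
#cloud-config
packages:
  - qemu-guest-agent
runcmd:
  - systemctl enable --now qemu-guest-agent
EOF

# Point the VM at the snippet and enable the agent device
qm set 104 --cicustom "user=local:snippets/user-data.yaml"
qm set 104 --agent enabled=1
```

Note that --cicustom replaces the generated user-data entirely, so a custom snippet also needs to carry the user/SSH key settings.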
 
oh look, it worked :D
I'd say this answers my question, elegantly even.
thank you.

I'll look into making this work with at least ZFS and whatever I'm using - ext4, I think. I built a crash test box about a year ago and have forgotten the details.
 
I'll look into making this work with at least zfs and whatever I'm using - ext4 I think.

Cool! But if you are using Proxmox, you may want to look into ZFS.

I've been using it for years and have never looked back.



I built a crash test box about a year ago and have forgotten the details.

Tip: When building a new system, always keep a record of every command you use (at least, the ones that worked ;)).

It may seem like a daunting task at first, but it isn’t. Believe me.



A system that isn’t reproducible is only useful until the first crash. After that, you’re back to square one.

And... If you have to rely on your memory to bring a system up again, you’re going to have a bad time.

This is valid even for "test" systems. It's always useful to have a collection of commands that you know work for future use.



These scripts you see now are largely an evolution of the notes I started taking when I built my first Proxmox system.