VM clone (cloud-init) does not boot

artkrz

New Member
Sep 21, 2022
Hello,

I'm trying to create a VM template to use for spinning up VMs:

Bash:
#!/bin/bash
# download the image
wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
# create a new VM
qm create 9000 --memory 2048 --net0 virtio,bridge=vmbr0
# import the downloaded disk to local-lvm storage
qm importdisk 9000 jammy-server-cloudimg-amd64.img local-lvm
# finally attach the new disk to the VM as scsi drive
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
# configure a CDROM drive, used to pass the Cloud-Init data to the VM
qm set 9000 --ide2 local-lvm:cloudinit
# boot directly from the Cloud-Init image
qm set 9000 --boot c --bootdisk scsi0
# configure a serial console and use that as display
qm set 9000 --serial0 socket --vga serial0
# transform VM into a template.
qm template 9000
# deploy Cloud-Init Templates
qm clone 9000 123 --name ubuntu-test
qm set 123 --sshkey artkrz.pub
qm set 123 --ipconfig0 ip=dhcp

The script is based on https://git.proxmox.com/?p=pve-docs...c011c659440e4b4b91d985dea79f98e5f083c;hb=HEAD
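
Then I start the clone and watch the serial console (vga is serial0, so that is where the boot output appears):

Bash:
qm start 123
qm terminal 123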

Unfortunately, when I boot VM 123 it halts at booting from the hard drive... did I miss anything?
 
Nothing appears to be missing. Can you simplify the steps and just boot 9000 rather than converting it to a template?
Have you checked the md5sum of the image you downloaded? If the VM gets stuck very early in boot, it's often due to a corrupted image. Try a different image.
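
For example, Ubuntu publishes checksums alongside the image, so the download can be verified like this:

Bash:
# fetch the published checksums for the jammy cloud images and check the downloaded file
wget https://cloud-images.ubuntu.com/jammy/current/SHA256SUMS
sha256sum -c SHA256SUMS --ignore-missing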


 
What is the final output of "qm config 9000" and "qm config 123"?
We run exactly your workflow hundreds of times a day, so I know it works. Something in your environment is different.
Try a different image or a different OS; it's not unheard of for bad images to be in the wild for a while.
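
For example, grab the previous LTS image and repeat the same qm create/importdisk steps with a fresh VMID (9001, say):

Bash:
wget https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img
qm create 9001 --memory 2048 --net0 virtio,bridge=vmbr0
qm importdisk 9001 focal-server-cloudimg-amd64.img local-lvm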


 
Code:
boot: c
bootdisk: scsi0
ide2: local-lvm:vm-9000-cloudinit,media=cdrom
memory: 2048
meta: creation-qemu=6.2.0,ctime=1663785492
net0: virtio=96:2E:E7:E8:35:73,bridge=vmbr0
scsi0: local-lvm:vm-9000-disk-0,size=2252M
scsihw: virtio-scsi-pci
serial0: socket
smbios1: uuid=62ed2cd8-e098-4357-b9ba-4b18c61d14bb
vga: serial0
vmgenid: cb4b2dae-1c35-4ad1-a04e-4965eb50a328
 
I'm running:
Code:
 pve-manager/7.2-7/d0dd0e85 (running kernel: 5.4.195-1-pve)
Maybe something went bad when I upgraded from 6.x? It definitely worked a while ago, before I upgraded.
 
Either you pinned that kernel version for a specific reason (which one?), or you simply did not reboot the PVE host after the major upgrade from 6 to 7: 5.4 is the PVE 6 kernel series, while PVE 7.2 ships 5.15. If it is the latter, please reboot the PVE host to boot into the new kernel.
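
You can check what is running versus what is installed with the standard tools:

Code:
uname -r                             # kernel currently running
pveversion                           # pve-manager version plus running kernel
dpkg -l 'pve-kernel-*' | grep ^ii    # PVE kernel packages installed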
 
@Neobin I rebooted it, and this is what came up on screen:

[screenshot: IMG_4275.jpg]

The prompt shows I'm on VM9000? I can SSH in, and these are the mounts I see:

Code:
# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=16346956k,nr_inodes=4086739,mode=755,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=3276416k,mode=755,inode64)
/dev/mapper/pve-root on / type ext4 (rw,relatime,errors=remount-ro)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,inode64)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k,inode64)
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=19509)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
/dev/nvme0n1p2 on /boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)
/dev/sda on /backups type ext4 (rw,relatime,errors=remount-ro)
/var/lib/snapd/snaps/core_13425.snap on /snap/core/13425 type squashfs (ro,nodev,relatime,errors=continue,x-gdu.hide)
/var/lib/snapd/snaps/core_13741.snap on /snap/core/13741 type squashfs (ro,nodev,relatime,errors=continue,x-gdu.hide)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=3276412k,nr_inodes=819103,mode=700,uid=1000,gid=1000,inode64)


# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                   16G     0   16G   0% /dev
tmpfs                 3.2G  1.5M  3.2G   1% /run
/dev/mapper/pve-root   94G   16G   74G  18% /
tmpfs                  16G     0   16G   0% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
/dev/nvme0n1p2        511M  312K  511M   1% /boot/efi
/dev/sda              175G   88G   79G  53% /backups
/dev/loop0            114M  114M     0 100% /snap/core/13425
/dev/loop1            115M  115M     0 100% /snap/core/13741
tmpfs                 3.2G  4.0K  3.2G   1% /run/user/1000

in /etc/hosts:

Code:
# Your system has configured 'manage_etc_hosts' as True.
# As a result, if you wish for changes to this file to persist
# then you will need to either
# a.) make changes to the master file in /etc/cloud/templates/hosts.debian.tmpl
# b.) change or remove the value of 'manage_etc_hosts' in
#     /etc/cloud/cloud.cfg or cloud-config from user-data
#
127.0.1.1 VM9000 VM9000
127.0.0.1 localhost

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

wtf happened here?
 
Hi,
seems like you installed the cloud-init package on the Proxmox VE host. The package is only intended to be used within guests. You should remove it again.
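
Something like this should do it (assuming the standard Debian package name and cloud-init's default config/state paths):

Code:
apt purge cloud-init
# cloud-init keeps its configuration and per-boot state here by default
rm -rf /etc/cloud /var/lib/cloud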
 
@fiona yes! I think I installed it on the host instead of in a VM. I have removed it now, but it seems I have to revert the changes it made by hand. The hostname and network config are easy, but what else did it change?

P.S. After fixing the hostname and network config it booted fine. Thank you @fiona! Any tips on what else I should look into would be appreciated.
 
Glad to hear :) Well, I'm actually not sure myself either, but maybe DNS settings and SSH keys? At least those are the cloud-init settings that we expose in the web UI.
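
If it helps, this is a rough checklist of files cloud-init commonly manages on a Debian-based system (the usual default paths; what actually changed depends on the configuration it picked up):

Code:
cat /etc/hostname /etc/hosts       # hostname (already fixed)
cat /etc/resolv.conf               # DNS settings
ls /etc/network/interfaces.d/      # cloud-init may have dropped a 50-cloud-init file here
cat /root/.ssh/authorized_keys     # injected SSH keys, if any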
 
I asked this question in another thread a while ago, but unfortunately never got an answer:
Yeah, the question is, how did you come to the idea of installing it on the host?
Is there something unintuitive or even wrong in the official Proxmox documentation/wiki that led you to this step?
If so, maybe it could be improved.

Since this "problem" comes up now and then (if rarely), it would be interesting to know why it happens at all.

@fiona Maybe a big/bold hint/warning in the documentation/wiki could not hurt anyway?! :)
 
@Neobin I'm an old sysadmin and should know better, but I did this very late after a long day of work and just did not read it. Maybe a big/bold warning would be a good change.
 
I mean, the text clearly mentions that it's supposed to be installed in the VM. Nevertheless, I sent a patch for discussion.
 
