I am currently trying to automate the creation of templates for as many cloud-init images as I can find. The issue I have is that root is disabled in cloud-init images. I found out that I can pass a --cicustom snippet to mitigate this.
With the current setup, I am able to get past the missing-public-key issue, but it still does not allow me to log in as root unless I manually go in and edit sshd_config, so I wanted to see if there is something I am doing wrong. Below is the script:
Code:
#!/bin/bash
#Create template
#args:
# vm_id
# vm_name
# file name in the current directory
function create_template() {
    #Print all of the configuration
    echo "Creating template $2 ($1)"
    #Create new VM
    #Feel free to change any of these to your liking
    qm create $1 --name $2 --ostype l26
    #Set networking to default bridge
    qm set $1 --net0 virtio,bridge=vmbr0
    #Set display to serial
    qm set $1 --serial0 socket --vga serial0
    #Set memory, cpu, type defaults
    #If you are in a cluster, you might need to change cpu type
    qm set $1 --memory 1024 --cores 4 --cpu host
    #Set boot device to new file
    qm set $1 --scsi0 ${storage}:0,import-from="$(pwd)/$3",discard=on
    #Set scsi hardware as default boot disk using virtio scsi single
    qm set $1 --boot order=scsi0 --scsihw virtio-scsi-single
    #Enable QEMU guest agent in case the guest has it available
    qm set $1 --agent enabled=1,fstrim_cloned_disks=1
    #Add cloud-init device
    qm set $1 --ide2 ${storage}:cloudinit
    #Set CI ip config
    #IP6 = auto means SLAAC (a reliable default with no bad effects on non-IPv6 networks)
    #IP = DHCP means what it says, so leave that out entirely on non-IPv4 networks to avoid DHCP delays
    qm set $1 --ipconfig0 "ip6=auto,ip=dhcp"
    #Import the ssh keyfile
    # qm set $1 --sshkeys ${ssh_keyfile}
    #If you want to do password-based auth instead,
    #then use this option and comment out the line above
    qm set $1 --cipassword password
    #Add the user
    qm set $1 --ciuser ${username}
    #Resize the disk to 8G, a reasonable minimum. You can expand it more later.
    #If the disk is already bigger than 8G, this will fail, and that is okay.
    qm disk resize $1 scsi0 8G
    #Attach the custom cloud-init user-data snippet and the hookscript
    qm set $1 --cicustom "user=local:snippets/100.yaml"
    qm set $1 --hookscript "local:snippets/secon-hookscript.pl"
    #Make it a template
    qm template $1
    #Remove file when done
    rm $3
}
#Path to your ssh authorized_keys file
#Alternatively, use /etc/pve/priv/authorized_keys if you are already authorized
#on the Proxmox system
export ssh_keyfile=/root/id_rsa.pub
#Username to create on VM template
export username=root
#Name of your storage
export storage=main
#The images that I've found premade
#Feel free to add your own
#Bookworm (12 dailies - not yet released)
wget "https://cloud.debian.org/images/cloud/bookworm/daily/latest/debian-12-genericcloud-amd64-daily.qcow2"
create_template 902 "temp-debian-12-daily" "debian-12-genericcloud-amd64-daily.qcow2"
## Ubuntu
#20.04 (Focal Fossa)
wget "https://cloud-images.ubuntu.com/releases/focal/release/ubuntu-20.04-server-cloudimg-amd64.img"
create_template 910 "temp-ubuntu-20-04" "ubuntu-20.04-server-cloudimg-amd64.img"
#22.04 (Jammy Jellyfish)
wget "https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64.img"
create_template 911 "temp-ubuntu-22-04" "ubuntu-22.04-server-cloudimg-amd64.img"
#23.04 (Lunar Lobster) - daily builds
wget "https://cloud-images.ubuntu.com/lunar/current/lunar-server-cloudimg-amd64.img"
create_template 912 "temp-ubuntu-23-04-daily" "lunar-server-cloudimg-amd64.img"
## Fedora 37
#Image is compressed, so need to uncompress first
wget https://download.fedoraproject.org/pub/fedora/linux/releases/37/Cloud/x86_64/images/Fedora-Cloud-Base-37-1.7.x86_64.raw.xz
xz -d -v Fedora-Cloud-Base-37-1.7.x86_64.raw.xz
create_template 920 "temp-fedora-37" "Fedora-Cloud-Base-37-1.7.x86_64.raw"
## CentOS Stream
#Stream 8
wget https://cloud.centos.org/centos/8-stream/x86_64/images/CentOS-Stream-GenericCloud-8-20220913.0.x86_64.qcow2
create_template 930 "temp-centos-8-stream" "CentOS-Stream-GenericCloud-8-20220913.0.x86_64.qcow2"
#Stream 9 (daily) - they don't have a 'latest' link?
wget https://cloud.centos.org/centos/9-stream/x86_64/images/CentOS-Stream-GenericCloud-9-20220531.0.x86_64.qcow2
create_template 931 "temp-centos-9-stream-daily" "CentOS-Stream-GenericCloud-9-20220531.0.x86_64.qcow2"
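Once the templates are built, it is worth verifying that the --cicustom snippet is actually what cloud-init will receive. A quick check sketch using stock qm tooling (the clone id 9000 and the name are just example values, not part of my setup):
Code:
#Clone a template and dump the user-data that Proxmox will serve to the guest
qm clone 902 9000 --name test-debian-12
qm cloudinit dump 9000 user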
And here is the 100.yaml and the hookscript. The 100.yaml works fine; it's the hookscript that does not seem to apply the sed to allow root login.
And while I am on this subject, if this is fixable, is there a way to have the hookscript also install and enable the QEMU guest agent inside the guest?
Code:
#cloud-config
ssh_pwauth: 1
disable_root: 0
chpasswd:
  list: |
    root:password
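For what it's worth, cloud-init itself can do both of these jobs, since it runs inside the guest at first boot. Here is a sketch of an extended user-data snippet, assuming the guest has network access at first boot and its distro packages the agent as qemu-guest-agent (true for the Debian, Ubuntu, Fedora, and CentOS images above):
Code:
#cloud-config
ssh_pwauth: 1
disable_root: 0
chpasswd:
  list: |
    root:password
#Install the guest agent from the distro repos
packages:
  - qemu-guest-agent
#Run once at first boot, inside the guest
runcmd:
  - sed -i 's/^#\?PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config
  - systemctl restart sshd || systemctl restart ssh
  - systemctl enable --now qemu-guest-agent
Note that ssh_pwauth toggles PasswordAuthentication and disable_root controls the authorized_keys lockdown, but neither touches PermitRootLogin in sshd_config, which is why the sed is still needed.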
Code:
#!/usr/bin/perl
# Example hook script for PVE guests (hookscript config option)
# You can set this via pct/qm with
# pct set <vmid> -hookscript <volume-id>
# qm set <vmid> -hookscript <volume-id>
# where <volume-id> has to be an executable file in the snippets folder
# of any storage with directories e.g.:
# qm set 100 -hookscript local:snippets/hookscript.pl
use strict;
use warnings;
print "GUEST HOOK: " . join(' ', @ARGV). "\n";
# First argument is the vmid
my $vmid = shift;
# Second argument is the phase
my $phase = shift;
if ($phase eq 'pre-start') {
    # First phase 'pre-start' will be executed before the guest
    # is started. Exiting with a code != 0 will abort the start
    print "$vmid is starting, doing preparations.\n";
    # NOTE: hookscripts run on the Proxmox host, not inside the guest,
    # so this sed edits the host's sshd_config rather than the guest's
    system("sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config");
    # print "preparations failed, aborting."
    # exit(1);
} elsif ($phase eq 'post-start') {
    # Second phase 'post-start' will be executed after the guest
    # successfully started.
    print "$vmid started successfully.\n";
} elsif ($phase eq 'pre-stop') {
    # Third phase 'pre-stop' will be executed before stopping the guest
    # via the API. Will not be executed if the guest is stopped from
    # within e.g., with a 'poweroff'
    print "$vmid will be stopped.\n";
} elsif ($phase eq 'post-stop') {
    # Last phase 'post-stop' will be executed after the guest stopped.
    # This should even be executed in case the guest crashes or stopped
    # unexpectedly.
    print "$vmid stopped. Doing cleanup.\n";
} else {
    die "got unknown phase '$phase'\n";
}
exit(0);
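Since the hookscript runs on the host, a pre-start sed can never reach the guest's filesystem. If the guest agent is installed and running (for example via the cloud-config sketch above), one possible alternative, not from my original setup, is to drive the change through qm guest exec from the post-start branch instead:
Code:
#Run inside the guest via the agent; the agent may need a few seconds
#after post-start before it responds, so retry or sleep as needed
qm guest exec 100 -- sh -c "sed -i 's/^#\?PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config && (systemctl restart sshd || systemctl restart ssh)"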