autoinstall: creator's host/nodename hardcoded in autoinstall-iso?

proxmix

New Member
Nov 23, 2023
Dear Proxmoxers,

I thought I could run some kind of pve-autoinstall-iso-creator machine that generates autoinstall ISO files for various PVEs, until I noticed:

When an auto-install ISO (generated on, e.g., the host "pve-iso-creator.example.com") is installed on another machine, that machine's host/nodename ends up as pve-iso-creator, too. In other words, the hostname of the machine proxmox-auto-install-assistant has been run on seems to be hardcoded into the resulting autoinstall ISO.

Question: Is there any easy way to set the hostname during ISO creation (e.g. via the answer.toml), or afterwards during first boot (e.g. via DNS reverse lookups), or something else?
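(For what it's worth: if the assistant's answer-file format accepts a global fqdn key, which is worth checking against the current documentation, a per-machine answer.toml might look roughly like this. All values below are illustrative, not taken from a real setup:)

```toml
[global]
keyboard = "de"
country = "at"
fqdn = "new-pve.example.com"   # the node name the installed system should get
mailto = "admin@example.com"
timezone = "Europe/Vienna"
root_password = "change-me"
```

That would still mean generating one ISO per machine, since the answer file is baked into the image.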

If there are no such options - neither implemented nor planned - what do you think: does this workaround procedure sound promising?

1. deploy a minimalistic Debian LXC (e.g. bookworm netinstall)
2. add the PVE repositories
3. install proxmox-auto-install-assistant
4. rename the LXC's hostname to new-pve.example.com
5. generate an autoinstall ISO
6. download the autoinstall ISO
7. destroy the LXC
8. deploy the downloaded autoinstall ISO onto new-pve.example.com

My main concern is step 4: Is it sufficient to change /etc/hosts and /etc/hostname? If not, what else does proxmox-auto-install-assistant rely on regarding the hostname an ISO sets?
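To make step 4 concrete, here is a minimal sketch of the rename. It assumes /etc/hostname and /etc/hosts are the only files that matter (which is exactly the open question), and it operates on temporary copies so it can be run anywhere; all hostnames are hypothetical:

```shell
#!/bin/bash
# Sketch of step 4: rename an LXC from pve-iso-creator.example.com to
# new-pve.example.com. Works on copies under a temp dir; on a real container
# you would target /etc/hostname and /etc/hosts directly.
set -eu

new_fqdn="new-pve.example.com"
new_host="${new_fqdn%%.*}"

etc="$(mktemp -d)"
echo "pve-iso-creator" > "${etc}/hostname"
printf '127.0.0.1 localhost\n192.0.2.10 pve-iso-creator.example.com pve-iso-creator\n' > "${etc}/hosts"

old_host="$(cat "${etc}/hostname")"
# pick the FQDN belonging to the old short name out of the hosts file
old_fqdn="$(awk -v h="${old_host}" '$2 ~ h {print $2; exit}' "${etc}/hosts")"

echo "${new_host}" > "${etc}/hostname"
sed -i "s/${old_fqdn}/${new_fqdn}/g; s/${old_host}/${new_host}/g" "${etc}/hosts"

cat "${etc}/hostname" "${etc}/hosts"
```

On a real LXC you would additionally reboot (or use hostnamectl, if available) so the running system picks up the new name.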

Another idea could be using hostname assignment via dhcp, but I think the above steps are easier to automate.

Bonus question:
How do you [intend to] use the autoinstall feature?

Thanks a lot for your feedback!

Best regards,
proxmix
Thanks @aaron for your feedback!

In the meantime I use[d] this shell script as a workaround (DISCLAIMER: I have not tested it in clustered PVE environments):

Code:
#!/bin/bash
hostname_fqdn_cur="$(hostname -f)"
if [ -z "${hostname_fqdn_cur}" ]; then
        echo "Error: hostname empty (host seems to be already renamed, but not rebooted yet)"
        exit 1
fi

if [ -z "$1" ]; then
        echo "Error: missing parameter <hostname_new>"
        exit 1
fi
hostname_fqdn_new="$1"

if [ "${hostname_fqdn_cur}" = "${hostname_fqdn_new}" ]; then
        echo "unchanged"
else
        #systemctl stop pve-cluster
        hostname_cur="$(echo "${hostname_fqdn_cur}" | awk -F'.' '{print $1}')"
        echo "hostname_cur: ${hostname_cur}"
        domain_cur="$(echo "${hostname_fqdn_cur}" | sed -e 's/^[^.]*\.\(.*\)$/\1/')"
        echo "domain_cur: ${domain_cur}"

        hostname_new="$(echo "${hostname_fqdn_new}" | awk -F'.' '{print $1}')"
        echo "hostname_new: ${hostname_new}"
        domain_new="$(echo "${hostname_fqdn_new}" | sed -e 's/^[^.]*\.\(.*\)$/\1/')"
        echo "domain_new: ${domain_new}"

        # rename hostname; skip files under /etc/pve (managed by pmxcfs)
        for file in $(grep -rl "${hostname_cur}" /etc/* | grep -v '^/etc/pve'); do
                sed -i "s/${hostname_cur}/${hostname_new}/g" "${file}"
        done

        # rename domainname; skip files under /etc/pve (managed by pmxcfs)
        for file in $(grep -rl "${domain_cur}" /etc/* | grep -v '^/etc/pve'); do
                sed -i "s/${domain_cur}/${domain_new}/g" "${file}"
        done
        #systemctl start pve-cluster

        touch /var/run/reboot-required
fi
exit 0
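As an aside, the awk/sed splitting of the FQDN in the script above can also be done with bash parameter expansion alone, which avoids spawning extra processes:

```shell
#!/bin/bash
# Split an FQDN into host and domain parts with parameter expansion
# (same result as the awk/sed pipelines in the script above).
fqdn="new-pve.example.com"
host="${fqdn%%.*}"    # strip everything from the first dot onward
domain="${fqdn#*.}"   # strip everything up to and including the first dot
echo "${host} ${domain}"
```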

Additionally, I implemented an also rather ugly/quick workaround to replace the email address (specified during installation) and to get rid of SSH public keys that are stored in /etc/pve/priv/authorized_keys but not used/trusted any more. Here is the script for the SSH public keys (the email address replacement follows the same principle):

Code:
#!/bin/bash

db_file="/var/lib/pve-cluster/config.db"
tablename="tree"

if [ -z "$1" ]; then
        echo "Error: missing parameter <pubkey_comment>"
        exit 1
fi
pubkey_comment="$1"

authorized_keys_file="/etc/pve/priv/authorized_keys"
if grep -q "${pubkey_comment}" "${authorized_keys_file}"; then
        echo "found"
else
        echo "key not found: \"${pubkey_comment}\" not in ${authorized_keys_file}"
        exit 1
fi

# NOTE: assumes the key material contains no single quotes, which would break
# the quoting in the UPDATE statement below.
systemctl stop pve-cluster &&
authorized_keys_upd="$(sqlite3 "${db_file}" "SELECT data FROM ${tablename} WHERE name='authorized_keys'" | grep -v "${pubkey_comment}")" &&
sqlite3 "${db_file}" << END_SQL &&
UPDATE ${tablename} SET data='${authorized_keys_upd}' WHERE name='authorized_keys'
END_SQL
systemctl start pve-cluster &&
touch /var/run/reboot-required &&
exit 0

# one of the steps above failed: still try to bring pve-cluster back up
systemctl start pve-cluster
echo "Something went wrong"
exit 1
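For illustration, the "same principle" applied to the email address boils down to a sed replacement on the relevant config entry. Here is a self-contained sketch on a sample user.cfg-style line; the field layout and both email addresses are made up, so check the real format on your node before relying on field positions:

```shell
#!/bin/bash
# Hypothetical sketch: swap the installer-supplied email for a new one.
# The line below only mimics an /etc/pve/user.cfg entry.
old_email="root@pve-iso-creator.example.com"
new_email="admin@example.com"
line="user:root@pam:1:0:::${old_email}::"
updated="$(printf '%s\n' "${line}" | sed "s/${old_email}/${new_email}/")"
echo "${updated}"
```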

Both scripts are called from inside an Ansible playbook to help automate PVE deployment.