Proxmox Booting into template created on Cluster after restart

ferlo

New Member
Mar 20, 2023
Hi everyone,

Not sure if it's the way I created the template (more explanation below) or the recent network change I made to my Proxmox cluster that brought everything crashing down.

A little bit about my setup: I have a server running Proxmox VE with two 1 Gb Ethernet ports. On it, I have created two Linux bridge networks, like so:


Bash:
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2/24
        gateway 192.168.10.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet static
        address 192.168.30.2/24
        gateway 192.168.30.1
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 1-4094
# IOT NETWORK

In short, I set up a bridge on each of the two Ethernet ports I have.

Recently, I started looking into custom scripts to automate the process of creating VMs. Eventually I came up with this to create a template:
Bash:
# Create the base VM that will become the template
sudo qm create "$TEMPLATE_ID" --name "$TEMPLATE_NAME" --memory 1024 --cores 2 --net0 virtio,bridge=vmbr0,firewall=1 --cpu host
sudo qm set "$TEMPLATE_ID" --agent 1
sudo qm set "$TEMPLATE_ID" --ide0 local-lvm:cloudinit
sudo qm set "$TEMPLATE_ID" --ide2 none,media=cdrom
sudo qm set "$TEMPLATE_ID" --cipassword password
sudo qm set "$TEMPLATE_ID" --ciuser hlv-admin
sudo qm set "$TEMPLATE_ID" --ipconfig0 'ip=dhcp'
sudo qm set "$TEMPLATE_ID" --scsihw virtio-scsi-single
sudo qm set "$TEMPLATE_ID" --serial0 socket --vga serial0
sudo qm set "$TEMPLATE_ID" --sshkeys ~/.ssh/id_ed25519.pub

# Customize the cloud image itself (commands run as root inside the image)
sudo virt-customize -a "$DISK_IMAGE" --install qemu-guest-agent
sudo virt-customize -a "$DISK_IMAGE" --run-command 'echo localhost > /etc/hostname'
sudo virt-customize -a "$DISK_IMAGE" --run-command 'cloud-init clean --logs'
sudo virt-customize -a "$DISK_IMAGE" --truncate /etc/machine-id

mv "$DISK_IMAGE" ubuntu-22.04.qcow2
sudo qemu-img resize ubuntu-22.04.qcow2 32G

# Import the image, attach it as the boot disk, and convert to a template
sudo qm importdisk "$TEMPLATE_ID" ubuntu-22.04.qcow2 local-lvm
sudo qm set "$TEMPLATE_ID" --scsi0 local-lvm:vm-"$TEMPLATE_ID"-disk-0,iothread=1,size=32G,ssd=1
sudo qm set "$TEMPLATE_ID" --boot order='ide2;scsi0;net0;ide0'
sudo qm template "$TEMPLATE_ID"
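For context, I set the variables up front before running it; a minimal example (these values are just what I use for testing, not the exact ones from my script, and the image is the stock Ubuntu 22.04 cloud image):

Bash:
# Example values; adjust for your environment
TEMPLATE_ID=9000
TEMPLATE_NAME=ubuntu-2204-template
DISK_IMAGE=jammy-server-cloudimg-amd64.img
# Fetch the stock Ubuntu 22.04 cloud image if not already present
wget -nc https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img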

and this to create a VM:

Bash:
# Clone the template, resize the disk, set resources, and start the new VM
sudo qm clone "$TEMPLATE_VM_ID" "$NEW_VM_ID" --name "$NEW_VM_NAME"
sudo qm resize "$NEW_VM_ID" scsi0 +"${DISK_SPACE}"G
sudo qm set "$NEW_VM_ID" --cores "$CORES"
sudo qm set "$NEW_VM_ID" --memory "$MEMORY"
sudo qm start "$NEW_VM_ID"
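I invoke it with something like this (example values, not my exact ones):

Bash:
# Hypothetical values for a single clone
TEMPLATE_VM_ID=9000   # the template created above
NEW_VM_ID=101
NEW_VM_NAME=test-vm
DISK_SPACE=8          # extra GiB added on top of the template's 32G disk
CORES=2
MEMORY=2048           # MiB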


Everything was working fine until yesterday, when I decided to create more Linux bridge networks on the physical server. I did so by adding more blocks like the ones shown above to /etc/network/interfaces; a sketch of what each added block looked like follows below. Then I restarted my Proxmox cluster to check that the new networks were working, and ever since I have been unable to access it.
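Roughly, each added block looked like this (the bridge name and addresses here are from memory, not the exact config):

Bash:
# Additional internal bridge, no physical port attached
auto vmbr2
iface vmbr2 inet static
        address 192.168.40.2/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0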
I am getting this screen:

[Screenshot 1706823955585.png: boot console output]

[Screenshot 1706824024141.png: boot console output]


For some reason the server seems to be booting into a VM created from the template, I think. Can someone please help?
 
The server is not booting into the template. That's impossible.

You made a mistake at some point and installed the cloud-init package on PVE itself, and perhaps placed a cloud-init config in the "correct" place as well.
Start by uninstalling cloud-init from PVE. However, it's unclear how much damage might have been done.
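For reference, the removal itself would look something like this on the PVE host (what else needs cleaning depends on what your experiments changed):

Bash:
# Remove the cloud-init package from the PVE host
apt purge -y cloud-init
# Clear any configuration and state it left behind
rm -rf /etc/cloud /var/lib/cloud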


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
OMG, thank you so much for the swift response.

"You made a mistake at some point and installed the cloud-init package on PVE itself, and perhaps placed a cloud-init config in the 'correct' place as well."
This is correct. I remember installing it to test something on my PVE; I didn't know it would cause this issue.

"Start by uninstalling cloud-init from PVE"
The problem is that I can't even access the PVE, either from the UI or from the server itself. Do you know how to go about this?
 
If you can't log in via SSH (the password or SSH key may have been changed by one of your cloud-init experiments) and you can't log in from the console, then you can try booting into recovery mode (watch the menu options on the physical console early in boot). Or you can boot any live ISO and fix your install through it. There are many guides on the net about repairing the contents of a failed system via an ISO boot.
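From a live ISO, the repair would go roughly like this, assuming a default PVE install on LVM where the root filesystem is /dev/pve/root (adjust to your actual layout):

Bash:
# Activate LVM volume groups and mount the PVE root filesystem
vgchange -ay
mount /dev/pve/root /mnt
# Bind-mount pseudo-filesystems so package operations work in the chroot
for d in dev proc sys; do mount --bind /$d /mnt/$d; done
chroot /mnt /bin/bash
# Inside the chroot: remove cloud-init and its leftovers, then exit
apt purge -y cloud-init
rm -rf /etc/cloud /var/lib/cloud
exit
umount -R /mnt
reboot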


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
