Unable to install AlmaLinux 8.5, CentOS Stream 8, or Rocky Linux 8.5 on VM

gcm

Apr 7, 2022
Hello Everyone,

Debian and Ubuntu work fine. However, I wanted to ask whether anyone has found a solution for installing AlmaLinux 8.5, CentOS Stream 8, or Rocky Linux 8.5 on a VM. There seem to be several threads about this issue, but no resolution.

The common problem appears to be the QEMU guest agent failing to start.

Thank you in advance for your help!

AlmaLinux 8.5 = PXE boot loop
CentOS Stream 8 & Rocky Linux 8.5 = boots, but never receives the cloud-init configuration because the QEMU guest agent is not running, despite it being enabled for the VM.

Code:
Process: 779 ExecStart=/usr/bin/qemu-ga --method=virtio-serial --path=/dev/virtio-ports/org.qemu.guest_agent.0 --blacklist=${BLACKLIST_RPC} -F${FSFREEZE_HOOK_PATHNAME} (code=exited, status=203/EXEC)
qemu-guest-agent.service: Start request repeated too quickly.
qemu-guest-agent.service: Failed with result 'exit-code'.
Failed to start QEMU Guest Agent.

Code:
agent: 1
boot: c
bootdisk: scsi0
cipassword: **********
ciuser: root
cores: 1
cpu: host
cpulimit: 0.95
cpuunits: 1024
description: <redacted>
ide2: local-zfs:vm-923-cloudinit,media=cdrom
ipconfig0: ip=<redacted>,gw=<redacted>
memory: 2048
meta: creation-qemu=6.1.1,ctime=1649877295
name: centos-test.com
net0: virtio=da:68:8f:6e:3b:10,bridge=vmbr0,firewall=1,rate=500
onboot: 1
scsi0: local-zfs:vm-923-disk-0,discard=on,size=20G
scsihw: virtio-scsi-pci
serial0: socket
smbios1: <redacted>
sockets: 1
vga: serial0
vmgenid: <redacted>


Code:
proxmox-ve: 7.1-1 (running kernel: 5.13.19-6-pve)
pve-manager: 7.1-12 (running version: 7.1-12/b3c09de3)
pve-kernel-helper: 7.1-14
pve-kernel-5.13: 7.1-9
pve-kernel-5.11: 7.0-10
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.11.22-4-pve: 5.11.22-9
ceph-fuse: 15.2.14-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.1
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-7
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.1-5
libpve-guest-common-perl: 4.1-1
libpve-http-server-perl: 4.1-1
libpve-storage-perl: 7.1-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.3.0-2
proxmox-backup-client: 2.1.5-1
proxmox-backup-file-restore: 2.1.5-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-7
pve-cluster: 7.1-3
pve-container: 4.1-4
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-6
pve-ha-manager: 3.3-3
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.1-2
pve-xtermjs: 4.16.0-1
qemu-server: 7.1-4
smartmontools: 7.2-1
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.4-pve1
 
You don't need the QEMU guest agent for cloud-init. You do need to install the cloud-init package if you did a manual install rather than starting from a cloud image.
You should troubleshoot the two problems separately. For QGA, make sure you installed the package and try to start it manually. "Failed with result 'exit-code'" on its own is not very helpful; see if you can squeeze a more informative error out of it.

If you feel that QGA is somehow impeding cloud-init, disable QGA and work on your cloud-init setup first.
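
If it helps, a minimal check sequence from inside the guest might look like this (a sketch assuming an EL8-based image with the standard qemu-guest-agent package; the status=203/EXEC in your log usually means systemd could not execute the binary at all):

Code:
# install the agent if the image does not ship it (EL8 package name)
dnf install -y qemu-guest-agent

# start it by hand and read the full error, not just the exit code
systemctl start qemu-guest-agent
systemctl status qemu-guest-agent --no-pager
journalctl -u qemu-guest-agent -b --no-pager

# 203/EXEC points at the binary itself; check it and the virtio-serial channel exist
ls -l /usr/bin/qemu-ga
ls -l /dev/virtio-ports/org.qemu.guest_agent.0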

Good luck


 
Our cloud-init configuration works fine for Debian, Fedora, and Ubuntu. This is not the issue.

Regarding the images, these are all cloud images, and I also install cloud-init, nano, qemu-guest-agent, and wget when creating the templates, just in case.

The issue appears to be strictly limited to forks of CentOS. There are many posts here with the same problem.
 
The issue appears to be strictly limited to forks of CentOS. There are many posts here with the same problem.
I haven't tried anything based on CentOS 8, but why are you trying to use it? I don't see the point of CentOS 8 anymore when there are other "real" RHEL 8 clones out there.
 
You must be doing something wrong then, because it works right out of the box for me...

qm create 105 --memory 512 --name vm105 --socket 1 --onboot no
qm importdisk 105 /mnt/pve/bbnas/template/iso/Rocky-8-GenericCloud-8.5-20211114.2.x86_64.qcow blockbridge --format raw
<additional configuration commands>

Code:
root@pve7demo1:~# qm config 105
agent: 1
boot: c
bootdisk: scsi0
ide2: blockbridge:vm-105-cloudinit,media=cdrom
ipconfig0: ip=dhcp
memory: 512
meta: creation-qemu=6.1.1,ctime=1649883229
name: vm105
net0: virtio=CA:17:B3:00:D4:FD,bridge=vmbr0,firewall=1
onboot: 0
scsi0: blockbridge:vm-105-disk-0,size=10G
scsihw: virtio-scsi-pci
serial0: socket
smbios1: uuid=e1e18220-9813-4e72-a68d-008f96bcb62e
sockets: 1
sshkeys: ssh-rsa%20AAAAB3NzaC1yc2..........................................
vga: serial0
vmgenid: d3652c22-d0a4-4e23-94a0-ebaf02d12a78

Code:
root@pve7demo1:~# qm agent 105 ping
root@pve7demo1:~# qm agent 105 info
{
   "supported_commands" : [
      {
         "enabled" : true,
         "name" : "guest-ssh-remove-authorized-keys",
         "success-response" : true

Code:
ssh rocky@172.16.101.112
Activate the web console with: systemctl enable --now cockpit.socket

[rocky@localhost ~]$ head /var/log/cloud-init.log
2022-04-13 20:55:02,493 - util.py[DEBUG]: Cloud-init v. 21.1-7.el8 running 'init-local' at Wed, 13 Apr 2022 20:55:02 +0000. Up 7.14 seconds.
2022-04-13 20:55:02,493 - main.py[DEBUG]: No kernel command line url found.
2022-04-13 20:55:02,493 - main.py[DEBUG]: Closing stdin.



 
I haven't tried anything based on CentOS 8, but why are you trying to use it? I don't see the point of CentOS 8 anymore when there are other "real" RHEL 8 clones out there.
That was a typo on my part. I meant forks of RHEL, not CentOS, which Alma and Rocky are.
You must be doing something wrong then, because it works right out of the box for me...

The VM works out of the box for me as well. However, passing cloud-init variables like ipconfig0: ip=<redacted>,gw=<redacted> and password changes don't work, whereas they work fine with Debian.
 
The VM works out of the box for me as well. However, passing cloud-init variables like ipconfig0: ip=<redacted>,gw=<redacted> and password changes don't work, whereas they work fine with Debian.
That was not in your original post and is a very different statement.

I tested a portion of your claim:
Code:
root@pve7demo1:~# qm set 105 --cipassword password
update VM 105: -cipassword <hidden>

root@pve7demo1:~# qm config 105
agent: 1
boot: c
bootdisk: scsi0
cipassword: **********
ide2: blockbridge:vm-105-cloudinit,media=cdrom

root@pve7demo1:~# qm start 105
generating cloud-init ISO

root@pve7demo1:~# ssh rocky@172.16.101.112
[rocky@localhost ~]$ sudo su
[root@localhost rocky]# mount /dev/cdrom /mnt/cdrom
[root@localhost rocky]# cd /mnt/cdrom
[root@localhost cdrom]# more user-data
#cloud-config
hostname: vm105
manage_etc_hosts: true
password: $5$x9rzVg6j$BLX7xvhti2c0aw8/Mv/7OfIIvjCL6jXEnbe4kbCSPv1
ssh_authorized_keys:
  - ssh-rsa ............
chpasswd:
  expire: False
users:
  - default
package_upgrade: true

[root@localhost cdrom]# egrep v1 /etc/shadow
rocky:$5$x9rzVg6j$BLX7xvhti2c0aw8/Mv/7OfIIvjCL6jXEnbe4kbCSPv1:19095:0:99999:7:::

Again, it works out of the box... Perhaps you can work up a step-by-step create/start/verify procedure that shows what fails and what steps you took to troubleshoot it.
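
If you want to check the network side of the drive the same way, you can look at the rest of the generated ISO and ask cloud-init what it did (a sketch; the network-config file name assumes the nocloud drive layout shown above):

Code:
# still inside the guest, with the cloud-init CD mounted at /mnt/cdrom
ls /mnt/cdrom
more /mnt/cdrom/network-config

# ask cloud-init whether it ran and which datasource it used
cloud-init status --long
grep -i datasource /var/log/cloud-init.log | head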


 
That was not in your original post and is a very different statement.

Please double-check what I wrote again. It is there.

"CentOS STream 8 & Rocky Linux 8.5 = Booted, but never receives the cloud-Init configuration because of qemu guest not running despite it being enabled for the VM."

Unfortunately, I cannot provide a step-by-step procedure, since the cloud-init configuration comes from third-party software we use.

Thank you for your replies nonetheless.

 
Please double-check what I wrote again. It is there.

"CentOS STream 8 & Rocky Linux 8.5 = Booted, but never receives the cloud-Init configuration because of qemu guest not running despite it being enabled for the VM."
If you follow the output of the experiments I ran based on your input:
- I used the Rocky Linux 8.5 cloud image
- Confirmed that QGA works without any additional configuration
- Confirmed that the cloud-init drive generated by PVE works
- Confirmed that password changes via PVE's native cloud-init work

Good luck.


 
@bbgeek17

Could you please try with https://repo.almalinux.org/almalinu.../AlmaLinux-8-GenericCloud-latest.x86_64.qcow2 in your setup? I get stuck with a BIOS / PXE error here.

With Rocky / CentOS Stream the VM boots, but I cannot get cloud-init to work at all.
https://download.rockylinux.org/pub/rocky/8/images/Rocky-8-GenericCloud-8.5-20211114.2.x86_64.qcow2

Code:
sudo virt-customize -a Rocky-8-GenericCloud-8.5-20211114.2.x86_64.qcow2 --install cloud-init,nano,qemu-guest-agent,wget
virt-customize -a Rocky-8-GenericCloud-8.5-20211114.2.x86_64.qcow2 --root-password password:password

sudo qm create 103 --name "Rocky8.5" --memory 512 --cores 1 --net0 virtio,bridge=vmbr0
sudo qm importdisk 103 Rocky-8-GenericCloud-8.5-20211114.2.x86_64.qcow2 local-zfs --format raw
sudo qm set 103 --scsihw virtio-scsi-pci --scsi0 local-zfs:vm-103-disk-0
sudo qm set 103 --boot c --bootdisk scsi0
sudo qm set 103 --ide2 local-zfs:cloudinit
sudo qm set 103 --serial0 socket --vga serial0
sudo qm set 103 --agent enabled=1
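
If the AlmaLinux PXE loop simply means the VM is not finding a bootable disk, it may be worth double-checking that the imported disk really got attached and that the boot order points at it (a sketch against the VMID and storage from the commands above; the explicit order= form is the newer PVE 7 syntax):

Code:
# confirm the disk is attached and the boot settings reference it
sudo qm config 103 | grep -E '^(boot|bootdisk|scsi0)'

# set the boot order explicitly
sudo qm set 103 --boot order=scsi0

# confirm the imported volume actually exists on the storage
sudo pvesm list local-zfs | grep vm-103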
 
Could you please try with https://repo.almalinux.org/almalinu.../AlmaLinux-8-GenericCloud-latest.x86_64.qcow2 in your setup? I get stuck with a BIOS / PXE error here.
Boots fine. I am using nested PVE inside ESX.

Code:
[    2.720235] systemd[1]: Detected virtualization kvm.
[    2.721188] systemd[1]: Detected architecture x86-64.
...
AlmaLinux 8.5 (Arctic Sphynx)
Kernel 4.18.0-348.20.1.el8_5.x86_64 on an x86_64

Activate the web console with: systemctl enable --now cockpit.socket

localhost login:
With Rocky / CentOS Stream the VM boots, but I cannot get cloud-init to work at all.
https://download.rockylinux.org/pub/rocky/8/images/Rocky-8-GenericCloud-8.5-20211114.2.x86_64.qcow2
Works for me.


 
Here is the procedure I use to provision my VMs:
First, I create a template:
Create the instance template (Rocky Linux 8.5)

Code:
VMID=85003;STORAGE="local-lvm"

qm create ${VMID} \
-name rock85-cloudinit -ostype l26 \
-cores 1 -sockets 1 -cpu cputype=kvm64 -kvm 1 -numa 1 \
-memory 1024 \
-net0 virtio,bridge=vmbr0

# Import the OpenStack disk image to Proxmox storage
qm importdisk ${VMID} Rocky-8-GenericCloud-8.5-20211114.2.x86_64.qcow2 ${STORAGE}

qm set ${VMID} \
-scsihw virtio-scsi-pci -virtio0 ${STORAGE}:vm-${VMID}-disk-0 \
-ide2 ${STORAGE}:cloudinit -serial0 socket \
-boot c -bootdisk virtio0 -agent 1 \
-hotplug disk,network,usb,memory,cpu \
-vcpus 1 -vga qxl

qm template ${VMID}

Second, I instantiate the VM from the template:
I create a file "key.pub" in my home directory so I can access the VM with an SSH key.

Code:
VMID=22027;HOSTNAME="Rocky";BRIDGE0="vmbr0";IP="172.3.255.27/28";GW="172.3.255.20";DISK="24"

qm clone 85003 ${VMID} -name ${HOSTNAME}
qm set ${VMID} \
-sockets 1 -cores 4 -vcpus 4 -memory 8192 -vga qxl -onboot 1 \
-net0 virtio,bridge=${BRIDGE0} \
-ipconfig0 ip=${IP},gw=${GW} \
-sshkey ~/key.pub
qm resize ${VMID} virtio0 +${DISK}G
qm start ${VMID}

Access VM

Code:
ssh -i "key.pem" -o StrictHostKeyChecking=no -o BatchMode=yes rocky@172.3.255.27