Creating cloud images using Ansible fails while the same commands work in the shell

I also asked on Stack Overflow, but I'm trying here as well.

I am updating my Ansible PVE deployment to create cloud image templates.
I have a working shell script, and when I converted the commands to an Ansible "shell" task the results were erratic: sometimes it works, sometimes it fails, and sometimes I end up with locked images.
I then changed my playbook to use explicit "command" tasks instead of "shell", so that I have more granular control and can troubleshoot better.

I now see that when it fails, "qm" returns error code 25, with stderr "failed to tcsetpgrp: Inappropriate ioctl for device" and "got no worker upid - start worker failed".
I found one similar, unresolved issue reported for Packer.

I am at a loss as to why the same commands work from a shell script or the console, but fail when executed by Ansible.
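
On the first stderr line: tcsetpgrp() only works on a process's controlling terminal, and when there is none it fails with "Inappropriate ioctl for device" (that message is strerror(ENOTTY), and ENOTTY happens to be errno 25 on Linux; whether qm's exit code 25 is that same errno bubbling up is just my guess). A quick way to see that one-shot remote commands, which is roughly how Ansible runs things, get no TTY (the hostname is a placeholder):
```shell
# Forcing a pseudo-terminal gives the remote command a controlling TTY:
ssh -t root@pve-host tty    # prints something like /dev/pts/0
# A plain one-shot command (roughly what Ansible's command module does) gets none:
ssh root@pve-host tty       # prints: not a tty
```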

Working shell script when executed via SSH:
```shell
#!/bin/bash
set -e
set -x
VM_ID=$1
VM_NAME=$2
VM_TAGS=$3
VM_USER=$4
VM_PASSWORD=$5
VM_DOMAIN=$6
VM_SSH_KEYS=$7
VM_IMAGE=$8
qm unlock $VM_ID || true
qm destroy $VM_ID || true
qm create $VM_ID \
--name $VM_NAME \
--tags $VM_TAGS \
--memory 4096 \
--cores 2 \
--net0 virtio,bridge=vmbr1 \
--scsihw virtio-scsi-pci \
--ciuser $VM_USER \
--cipassword $VM_PASSWORD \
--searchdomain $VM_DOMAIN \
--sshkeys $VM_SSH_KEYS \
--ipconfig0 ip=dhcp \
--ostype l26 \
--agent 1
virt-customize -a $VM_IMAGE --install qemu-guest-agent
qm set $VM_ID --scsi0 vmdata:0,import-from=$VM_IMAGE --boot order=scsi0 --ide2 vmdata:cloudinit
qm resize $VM_ID scsi0 8G
qm template $VM_ID
```
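
For reference, I invoke it over SSH roughly like this (the script path is a placeholder; the arguments match the positional parameters above):
```shell
ssh root@pve-host /root/create-template.sh \
    9001 ubuntu-jammy-template ubuntu,jammy,cloud-image \
    pieter Password1 home.insanegenius.net \
    /root/.ssh/authorized_keys /data/install/images/jammy-server-cloudimg-amd64.img
```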

Ansible playbook:
```yaml
---
# Create Cloud-Init VM templates
# https://pve.proxmox.com/wiki/Cloud-Init_Support
# Install libguestfs
# https://docs.ansible.com/ansible/latest/modules/apt_module.html
- name: "Install libguestfs"
apt:
name: ["libguestfs-tools"]
state: latest
update_cache: yes
cache_valid_time: 3600
# https://docs.ansible.com/ansible/latest/modules/file_module.html
- name: "Create Cloud Init directory"
file:
path: "{{ item }}"
state: directory
mode: "ugo+rwx"
owner: nobody
group: users
recurse: true
with_items:
- "{{ cloud_init_dir }}"
# https://docs.ansible.com/ansible/latest/collections/ansible/builtin/command_module.html
- name: "Rescan images"
ansible.builtin.command: "qm rescan"
# https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_blocks.html
- name: "Create Cloud-Init VM templates"
block:
# Download cloud images
# https://docs.ansible.com/ansible/latest/collections/ansible/builtin/get_url_module.html
- name: "Download cloud images"
# Re-create the images when the downloaded image changed
register: download
get_url:
url: "{{ item.url }}"
dest: "{{ item.file }}"
mode: "ugo+rwx"
owner: nobody
group: users
timeout: 120
with_items: "{{ images }}"
# Destroy the image if it exists
- name: "Get image config"
register: config
ansible.builtin.command: "qm config {{ item.id }}"
ignore_errors: true
with_items: "{{ images }}"
- name: "Destroy image"
ansible.builtin.command: "qm destroy {{ item.item.id }} --destroy-unreferenced-disks 1 --skiplock 1"
when: item.rc == 0 # && download.changed
with_items: "{{ config.results }}"
- name: "Create image"
# when: download.changed
ansible.builtin.command:
argv:
- "qm"
- "create"
- "{{ item.id }}"
- "--name"
- "{{ item.name }}"
- "--tags"
- "{{ item.tags }}"
- "--memory"
- "4096"
- "--cores"
- "2"
- "--net0"
- "virtio,bridge=vmbr1"
- "--scsihw"
- "virtio-scsi-pci"
- "--ciuser"
- "{{ cloud_init_user }}"
- "--cipassword"
- "{{ cloud_init_password }}"
- "--searchdomain"
- "{{ cloud_init_domain }}"
- "--sshkeys"
- "{{ cloud_init_ssh_keys }}"
- "--ipconfig0"
- "ip=dhcp"
- "--ostype"
- "l26"
- "--agent"
- "1"
with_items: "{{ images }}"
- name: "Customize disk image"
# when: download.changed
ansible.builtin.command: "virt-customize -a {{ item.image }} --install qemu-guest-agent"
with_items: "{{ images }}"
- name: "Add disk image"
# when: download.changed
ansible.builtin.command: "qm set {{ item.id }} --scsi0 vmdata:0,import-from={{ item.file }} --boot order=scsi0 --ide2 vmdata:cloudinit"
with_items: "{{ images }}"
- name: "Resize disk image"
# when: download.changed
ansible.builtin.command: "qm resize {{ item.id }} scsi0 8G"
with_items: "{{ images }}"
- name: "Convert to template"
# when: download.changed
ansible.builtin.command: "qm template {{ item.id }}"
with_items: "{{ images }}"
vars:
images:
- {
id: "9001",
name: "ubuntu-jammy-template",
tags: "ubuntu,jammy,cloud-image",
file: "{{ cloud_init_dir }}/jammy-server-cloudimg-amd64.img",
url: "https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img",
}
- {
id: "9002",
name: "debian-bookworm-template",
tags: "debian,bookworm,cloud-image",
file: "{{ cloud_init_dir }}/debian-12-genericcloud-amd64.qcow2",
url: "https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-genericcloud-amd64.qcow2",
}
```
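
Since the suspicion is the execution environment rather than the commands themselves, one check that can be dropped into the task file is whether the tasks get a controlling terminal at all. This is just a sketch using standard modules; "tty" prints "not a tty" and exits non-zero when there is no terminal, hence the ignore_errors:
```yaml
# Sketch: confirm whether Ansible-run commands have a controlling terminal.
- name: "Check for a controlling terminal"
  ansible.builtin.command: "tty"
  register: tty_check
  ignore_errors: true

- name: "Show tty result"
  ansible.builtin.debug:
    msg: "stdout={{ tty_check.stdout }} rc={{ tty_check.rc }}"
```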

Typical error output:
```console
TASK [Create image] **********************************************************************************************************************************************************************************************************************************************************************************
task path: /home/pieter/HomeAutomation/Ansible/tasks/cloud_init2.yml:58
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: pieter
<127.0.0.1> EXEC /bin/sh -c 'echo ~pieter && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/pieter/.ansible/tmp `"&& mkdir "` echo /home/pieter/.ansible/tmp/ansible-tmp-1690993800.1284304-2340153-201602947955957 `" && echo ansible-tmp-1690993800.1284304-2340153-201602947955957="` echo /home/pieter/.ansible/tmp/ansible-tmp-1690993800.1284304-2340153-201602947955957 `" ) && sleep 0'
Using module file /usr/local/lib/python3.9/dist-packages/ansible/modules/command.py
<127.0.0.1> PUT /home/pieter/.ansible/tmp/ansible-local-23388082xh39a_f/tmp_lxzp8zb TO /home/pieter/.ansible/tmp/ansible-tmp-1690993800.1284304-2340153-201602947955957/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/pieter/.ansible/tmp/ansible-tmp-1690993800.1284304-2340153-201602947955957/ /home/pieter/.ansible/tmp/ansible-tmp-1690993800.1284304-2340153-201602947955957/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -p "[sudo via ansible, key=qmgckiackgispkthflseiqoheeypuljb] password:" -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-qmgckiackgispkthflseiqoheeypuljb ; /usr/bin/python3 /home/pieter/.ansible/tmp/ansible-tmp-1690993800.1284304-2340153-201602947955957/AnsiballZ_command.py'"'"' && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/pieter/.ansible/tmp/ansible-tmp-1690993800.1284304-2340153-201602947955957/ > /dev/null 2>&1 && sleep 0'
failed: [localhost] (item={'id': '9001', 'name': 'ubuntu-jammy-template', 'tags': 'ubuntu,jammy,cloud-image', 'file': '/data/install/images/jammy-server-cloudimg-amd64.img', 'url': 'https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img'}) => {
"ansible_loop_var": "item",
"changed": true,
"cmd": [
"qm",
"create",
"9001",
"--name",
"ubuntu-jammy-template",
"--tags",
"ubuntu,jammy,cloud-image",
"--memory",
"4096",
"--cores",
"2",
"--net0",
"virtio,bridge=vmbr1",
"--scsihw",
"virtio-scsi-pci",
"--ciuser",
"pieter",
"--cipassword",
"Password1",
"--searchdomain",
"home.insanegenius.net",
"--sshkeys",
"/home/pieter/.ssh/authorized_keys",
"--ipconfig0",
"ip=dhcp",
"--ostype",
"l26",
"--agent",
"1"
],
"delta": "0:00:01.008667",
"end": "2023-08-02 09:30:01.294613",
"invocation": {
"module_args": {
"_raw_params": null,
"_uses_shell": false,
"argv": [
"qm",
"create",
"9001",
"--name",
"ubuntu-jammy-template",
"--tags",
"ubuntu,jammy,cloud-image",
"--memory",
"4096",
"--cores",
"2",
"--net0",
"virtio,bridge=vmbr1",
"--scsihw",
"virtio-scsi-pci",
"--ciuser",
"pieter",
"--cipassword",
"Password1",
"--searchdomain",
"home.insanegenius.net",
"--sshkeys",
"/home/pieter/.ssh/authorized_keys",
"--ipconfig0",
"ip=dhcp",
"--ostype",
"l26",
"--agent",
"1"
],
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"item": {
"file": "/data/install/images/jammy-server-cloudimg-amd64.img",
"id": "9001",
"name": "ubuntu-jammy-template",
"tags": "ubuntu,jammy,cloud-image",
"url": "https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img"
},
"msg": "non-zero return code",
"rc": 25,
"start": "2023-08-02 09:30:00.285946",
"stderr": "failed to tcsetpgrp: Inappropriate ioctl for device\ngot no worker upid - start worker failed",
"stderr_lines": [
"failed to tcsetpgrp: Inappropriate ioctl for device",
"got no worker upid - start worker failed"
],
"stdout": "",
"stdout_lines": []
}
```

And after this "qm create" failure the VMs are actually created, but left in a "lock: create" state.
```console
pieter@server-1:~$ sudo qm config 9001
[sudo] password for pieter:
lock: create
pieter@server-1:~$
```
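
The locked leftovers can be cleaned up the same way the script already does it, unlock first and then destroy; for example:
```console
pieter@server-1:~$ sudo qm unlock 9001
pieter@server-1:~$ sudo qm destroy 9001 --destroy-unreferenced-disks 1
```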

I did not find any qm documentation on error codes; what does error code 25 mean?
Why would the "qm create" command partly complete and leave the VMs in a "lock: create" state?
The PVE example docs on Cloud-Init split the creation into multiple steps, while I do it all in one shot; could that be a problem?
Any ideas as to the "failed to tcsetpgrp: Inappropriate ioctl for device" and "got no worker upid - start worker failed" errors?
 
I really don't get why people want to force everything into Ansible. Just run your shell script with Ansible.
I did. As I mentioned in the Stack Overflow post, the results are the same: run the shell script over an SSH console and it works; run the same script via the Ansible script module and it fails in the same strange way. That is why I went the command route instead, to refine the steps (e.g. only destroying an image if it already exists) and to pinpoint exactly which step is failing.
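
For reference, running the same shell script through Ansible looks roughly like this (a sketch; the script filename is a placeholder, and ansible.builtin.script copies the local script to the host and runs it there):
```yaml
# Sketch: run the existing shell script via the script module instead of command tasks.
- name: "Create cloud image template via shell script"
  ansible.builtin.script: >-
    create-template.sh
    {{ item.id }} {{ item.name }} {{ item.tags }}
    {{ cloud_init_user }} {{ cloud_init_password }} {{ cloud_init_domain }}
    {{ cloud_init_ssh_keys }} {{ item.file }}
  with_items: "{{ images }}"
```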
 
With no solution in sight I upgraded to PVE 8.0.3, and the error 25 is gone.
The upgrade did initially fail to install the kernel; I had to run "dkms remove aufs/4.19+20190211", and after that all was well.
 
