Provision VM from template using Ansible

dsexton18 (Member), Jul 3, 2023
I am in the process of standing up a new Proxmox 8 server. Anyone have an Ansible playbook they wouldn't mind sharing that provisions a VM from a template?
 
I found this on the Ansible docs page, but no luck: I get this error. VMID 102 is a template on pve1.

"msg": "Cloning lxc VM 200 failed with exception: 500 Internal Server Error: Configuration file 'nodes/pve1/lxc/102.conf' does not exist"


Code:
---
- hosts: all
  tasks:
    - name: Create a full clone
      community.general.proxmox:
        vmid: 200
        node: pve1
        api_user: root@pam
        api_password: password
        api_host: ip_here
        clone: 102
        hostname: test
        storage: local
 
This seems to work for now; the earlier error came from using community.general.proxmox, which manages LXC containers, whereas proxmox_kvm is the module for QEMU VMs. More to come.

Code:
---
- hosts: all
  tasks:
    - proxmox_kvm:
        api_user: root@pam
        api_password: password_here
        api_host: pve1
        clone: arbitrary_name
        vmid: 102
        newid: 152
        full: yes
        name: zavala  # The target VM name
        node: pve1
        storage: vm_data
        format: qcow2

    - proxmox_kvm:
        api_user: root@pam
        api_password: password_here
        api_host: pve1
        vmid: 152
        state: started
 
So, this thread made me register an account to share.:)

Went through this exercise and expanded on the idea a bit. Two playbooks here. They aren't pretty, but they work for me.

I used this project to start learning Ansible, so if it looks awful, you know why. I have lots of comments in the playbooks to help people understand what madness I created.


  • prod_proxmox_create_VM_Rocky_Template_playbook
    • This one clones VMs from a template as you requested.
    • Key features:
      • Supports creating a VM on the node with the template on it.
      • Allows you to migrate the VM to any node you specify in the cluster after creation.
      • Allows you to specify the VLAN you want the VM to be on.
      • Provides you the MAC address and IP of the VM after creation, if you like setting static DHCP entries for your VMs in whatever firewall/router you use.
      • I was also going to auto-register the static IP in pfSense using a module, but I got tired and never finished. That's why this is here.
      • Supports cloud-init configuration so that you can do neat stuff like inject a default user and SSH key for each VM rather than rely on the initial cloud-init configuration of the template.
      • The script is designed to be more of a "setup wizard", as it allows you to make changes to the deployment on the fly with prompts.
      • EDIT: You should know this took two Ansible modules to build
        1. To modify the NIC of the VM I used
          1. https://docs.ansible.com/ansible/latest/collections/community/general/proxmox_nic_module.html
        2. The rest of the config actions
          1. https://docs.ansible.com/ansible/latest/collections/community/general/proxmox_kvm_module.html
  • prod_proxmox_cloudinit_vm_playbook
    • This one lets you take a cloud-init image and auto-create a brand-new base template from a fresh image.
      • Key features:
        • Requires the user to manually download a cloud-init image and place it in a directory.
          • I could just shove in a command to pull the latest image from the remote source with a quick hack like a wget, but never bothered (a get_url version is sketched right after this list).
        • Does all the messy stuff such as importing the disk, attaching it to the template, making the imported image bootable, etc.
          • Could be better, but it works.
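
For reference, the manual download step mentioned above could be replaced with a task along these lines (untested sketch, not part of the original playbooks; the image URL is an assumption based on Rocky's published download layout, and the destination matches the path used in the template playbook below):

YAML:
- name: Download Rocky cloud image to the Proxmox host
  ansible.builtin.get_url:
    url: https://download.rockylinux.org/pub/rocky/9/images/x86_64/Rocky-9-GenericCloud-Base.latest.x86_64.qcow2
    dest: /home/Rocky-9-GenericCloud-Base.latest.x86_64.qcow2
    mode: '0644'
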
Let me know what you think.;)

Notes: My cluster is kind of overkill for a home lab, so if you are looking at the storage location references, know that I have a separate flash NAS I built for this. There is no reference to local storage on a node in here (other than the quick hack I did for the template creation playbook). You may need to know this when making changes for your environment.

prod_proxmox_create_VM_Rocky_Template_playbook

YAML:
---
- name: Build_VMs_with_Ansibles
  vars:
    vm_name: **SNIP**
    proxmox_vm_id: **SNIP**
    # - proxmox_vlan_id: 'tag=9' Removed. part of old implementation of vlan switching using qm commands
    # - proxmox_vlan_id: '946' removed and added to vars prompt
    proxmox_user: root@pam
    proxmox_pass: **SNIP**
    proxmox_api_target: pve0.localdomain
    proxmox_deploy_target: pve0
    proxmox_clone_target: rocky-cloud
  hosts: pve0.localdomain
  vars_prompt:
    - name: node_migration
      prompt: "Do you want to migrate this host (yes/no)?"
      private: false
    - name: node_migration_additional
      prompt: "where do you want to migrate this host (pve1/pve2)?"
      private: false
    - name: proxmox_vlan_id
      prompt: "What Vlan? Enter Number only:(0-Core_Infra/2- Bastion_Security/3-Workstations/5-IOT/6-outofbandmgmt/8-Storage/99-DMZ/9-Secondary_Services)"
      private: false
  tasks:
  # Create new VM by cloning
    - name: Create new VM from clone specific VMID
      community.general.proxmox_kvm:
        node: "{{ proxmox_deploy_target }}"
        api_user: "{{ proxmox_user }}"
        api_password: "{{ proxmox_pass }}"
        api_host: "{{ proxmox_api_target }}"
        name: "{{ vm_name }}"
        clone: rocky-cloud
        newid: "{{ proxmox_vm_id }}"
        state: present
    # Update clone after creation
    - name: Update cloned VM with new Cloud-init
      community.general.proxmox_kvm:
        node: "{{ proxmox_deploy_target }}"
        api_user: "{{ proxmox_user }}"
        api_password: "{{ proxmox_pass }}"
        api_host: "{{ proxmox_api_target }}"
        name: "{{ vm_name }}"
        agent: enabled=1
        ciuser: **SNIP**
        cipassword: **SNIP**
        sshkeys: "{{  lookup('ansible.builtin.file', '/home/linuxdev/.ssh/id_rsa.pub') }}"
        searchdomains: 'localdomain'
        nameservers: **SNIP**
        update: true
    # We chill here for a few seconds so that the VM has time to get a VMID. It breaks otherwise.
    - name: Sleep for 5 seconds to wait for VMID, then continue with play
      ansible.builtin.wait_for:
        timeout: 5
      delegate_to: localhost
    # We implement a better VLAN assignment procedure using defined modules instead of hacking it together
    # The when conditional below depends on topology. In my case, all core infra (e.g. Proxmox) is on an untagged interface, therefore 0 is a placeholder.
    # I do not use VLAN 0 or 1 in my lab
    # If I release this, the user should either remove the when conditional check below and allow any VLAN number in the vars_prompt, or change it.
    - name: Vlan Assignment
      when: proxmox_vlan_id != '0'
      community.general.proxmox_nic:
        api_user: "{{ proxmox_user }}"
        api_password: "{{ proxmox_pass }}"
        api_host: "{{ proxmox_api_target }}"
        vmid: "{{ proxmox_vm_id }}"
        interface: net0
        bridge: vmbr0
        tag: "{{ proxmox_vlan_id }}"
      register: vlanasign_result
    # The old hack we used to move the VM to the VLAN (kept commented out below for reference)
   # - name: VLAN Assignment
   #   ansible.builtin.shell: # noqa: command-instead-of-shell
   #     cmd: qm set "{{ proxmox_vm_id }}" -net0 "virtio,bridge=vmbr0,"{{ proxmox_vlan_id }}""
   #   register: vlanasign_result
   #   changed_when: "'update' in vlanasign_result.stdout"
# Show shell command output for move the vm to the vlan error check
    - name: Debug move the vm to the vlan output
      ansible.builtin.debug:
        var: vlanasign_result
# Grab MAC address of VM prior to migration
    - name: Grab MAC address to prep for static DHCP assignment
      ansible.builtin.shell: # noqa: command-instead-of-shell risky-shell-pipe
        cmd: qm config "{{ proxmox_vm_id }}" | grep -o -E '([[:xdigit:]]{1,2}:){5}[[:xdigit:]]{1,2}'
      register: vm_mac_address
      changed_when: true
# Show shell command output for VM MAC address pull
    - name: Debug Grab MAC address to prep for static DHCP assignment
      ansible.builtin.debug:
        var: vm_mac_address.stdout
# Do node migration if asked
    - name: Include node migration if you answered 'yes'
      when: node_migration == 'yes'
      ansible.builtin.shell: # noqa: command-instead-of-shell
        cmd: qm migrate "{{ proxmox_vm_id }}" "{{ node_migration_additional }}"
      register: nodemigration_result
      changed_when: "'successfully' in nodemigration_result.stdout"
# Show shell command output for vm node migration
    - name: Debug vm node migration output
      ansible.builtin.debug:
        var: nodemigration_result
# We chill here for a few seconds so that the VM has time to migrate.
    - name: Sleep for 20 seconds to wait for migration, then continue with play
      ansible.builtin.wait_for:
        timeout: 20
      delegate_to: localhost
    # Start the VM
    - name: Start VM
      community.general.proxmox_kvm:
        api_user: "{{ proxmox_user }}"
        api_password: "{{ proxmox_pass }}"
        api_host: "{{ proxmox_api_target }}"
        name: "{{ vm_name }}"
        node: "{{ proxmox_deploy_target }}"
        state: started
    - name: Get VM state
      community.general.proxmox_kvm:
        api_user: "{{ proxmox_user }}"
        api_password: "{{ proxmox_pass }}"
        api_host: "{{ proxmox_api_target }}"
        name: "{{ vm_name }}"
        node: "{{ proxmox_deploy_target }}"
        state: current
      register: state
    - name: Debug vm state
      ansible.builtin.debug:
        var: state
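
A side note on the two fixed sleeps in the playbook above: instead of waiting a hard-coded 5 or 20 seconds, the clone can be polled until Proxmox reports a state for it. This is only a sketch reusing the same variables as the playbook, not something from the original post:

YAML:
- name: Poll until the cloned VM is visible (alternative to the fixed sleep)
  community.general.proxmox_kvm:
    api_user: "{{ proxmox_user }}"
    api_password: "{{ proxmox_pass }}"
    api_host: "{{ proxmox_api_target }}"
    node: "{{ proxmox_deploy_target }}"
    name: "{{ vm_name }}"
    state: current
  register: clone_state
  until: clone_state is succeeded
  retries: 12
  delay: 5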


prod_proxmox_cloudinit_vm_playbook
YAML:
---
- name: Create VM Templates for Rocky Linux
  hosts: pve0.localdomain
  tasks:
  # Create VM and capture shell command output
    - name: Create new VM
      ansible.builtin.shell: # noqa: command-instead-of-shell
        cmd: qm create 999 --memory 2048 --core 4 --cpu host --name rocky-cloud --net0 virtio,bridge=vmbr0
      register: vmcreate_result
      changed_when: "'unable' not in vmcreate_result.stderr"
  # Show shell command output for vmcreate error check
    - name: Debug Create new VM shell output
      ansible.builtin.debug:
        var: vmcreate_result
  # Import Disk and capture shell command output
    - name: Import Disk
      ansible.builtin.shell: # noqa: command-instead-of-shell
        cmd: qm importdisk 999 /home/Rocky-9-GenericCloud-Base.latest.x86_64.qcow2 --format qcow2 Hyper-Flash-SAN
      register: importdisk_result
      changed_when: "'Successfully' in importdisk_result.stdout"
  # Show shell command output for importdisk error check
    - name: Debug Import Disk shell output
      ansible.builtin.debug:
        var: importdisk_result
  # Attach Imported Disk and capture shell command output
    - name: Attach Imported Disk # noqa: command-instead-of-shell
      ansible.builtin.shell:
        cmd: qm set 999 --scsihw virtio-scsi-single  --scsi0 Hyper-Flash-SAN:999/vm-999-disk-0.qcow2,iothread=1
      register: attachdisk_result
      changed_when: "'update' in attachdisk_result.stdout"
  # Show shell command output for Attach Imported Disk error check
    - name: Debug Attach Imported Disk shell output
      ansible.builtin.debug:
        var: attachdisk_result
# Attach cloudinit Disk and capture shell command output
    - name: Add cloudinit disk # noqa: command-instead-of-shell
      ansible.builtin.shell:
        cmd: qm set 999 --ide2 Hyper-Flash-SAN:cloudinit
      register: attachcloudinit_result
      changed_when: "'update' in attachcloudinit_result.stdout"
  # Show shell command output for Attach cloudinit Disk error check
    - name: Debug Attach cloudinit Disk shell output
      ansible.builtin.debug:
        var: attachcloudinit_result
# Make cloudinit disk bootable and capture shell command output
    - name: Make cloudinit disk bootable # noqa: command-instead-of-shell
      ansible.builtin.shell:
        cmd: qm set 999 --boot c --bootdisk scsi0
      register: bootcloudinit_result
      changed_when: "'update' in bootcloudinit_result.stdout"
# Show shell command output for Make cloudinit disk bootable error check
    - name: Debug Make cloudinit disk bootable output
      ansible.builtin.debug:
        var: bootcloudinit_result
# Create template and capture shell command output
    - name: Create Template  # noqa: command-instead-of-shell
      ansible.builtin.shell:
        cmd: qm template 999
      register: templatecreate_result
      changed_when: true
# Show shell command output for Create Template error check
    - name: Debug Create Template output
      ansible.builtin.debug:
        var: templatecreate_result
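
One caveat with the template playbook above: the raw qm commands are not idempotent, so a second run will fail once VMID 999 already exists. A guard similar to the qm status check used in the playbook further down would help; a rough sketch (my addition, not from the original post):

YAML:
- name: Check whether template 999 already exists
  ansible.builtin.command: qm status 999
  register: template_check
  changed_when: false
  failed_when: false

- name: Build the template only when VMID 999 is free
  when: template_check.rc != 0
  block:
    # the qm create / importdisk / set / template tasks from above go here
    - name: Create new VM
      ansible.builtin.command: qm create 999 --name rocky-cloud --memory 2048 --net0 virtio,bridge=vmbr0
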
 
Here is mine, which does the job of both setting up the cloud image and some hosts; each part can be run individually depending on the tag.
It does the job in a homelab, which is what I use it for.

YAML:
---
- hosts: proxmox
  gather_facts: no
  vars:
    base_id: 8000
    base_name: debian12-cloud
    auth:
      user: "{{ lookup('hashi_vault', 'secret=secret/auth:user') }}"
      password: "{{ lookup('hashi_vault', 'secret=secret/auth:pass') }}"
      public_ssh_key: "{{ lookup('hashi_vault', 'secret=secret/auth:ssh_public_key') }}"
    cloud_image_url: https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2
    cloud_image_filename: debian-12-generic-amd64.qcow2
    vm_list:
      - id: 135
        name: yoshi
        disk_size: 10G
        net0: virtio,bridge=vmbr0,tag=105
        ipconfig0: 'ip=dhcp'
        cores: 4
      - id: 136
        name: bowser
        disk_size: 15G
        net0: virtio,bridge=vmbr0,tag=110
        ipconfig0: 'ip=dhcp'
        #ipconfig0: 'ip=192.168.1.10/24,gw=192.168.1.1'
        memory: 4096
      # Add more VMs to clone as needed
  tasks:
    - name: Check if base VM already exists
      ansible.builtin.shell: 'qm status {{ base_id }}'
      register: result
      ignore_errors: true

    - name: 'Setup Cloud-VM with image {{ cloud_image_filename }}'
      tags: install-cloud-image
      block:
        - name: Download cloud image
          ansible.builtin.get_url:
            url: '{{ cloud_image_url }}'
            dest: '/tmp/{{ cloud_image_filename }}'

        - name: Create public-ssh-key
          ansible.builtin.copy:
            content: '{{ auth.public_ssh_key }}'
            dest: '/tmp/public-ssh-key.pub'

        - name: Create VM
          ansible.builtin.shell: 'qm create {{ base_id }} --name {{ base_name }} --memory 2048 --core 2 --net0 virtio,bridge=vmbr0'

        - name: Import disk to VM
          ansible.builtin.shell: 'qm importdisk {{ base_id }} /tmp/{{ cloud_image_filename }} local-lvm'

        - name: Set VM hardware options
          ansible.builtin.shell:
            cmd: |
              qm set {{ base_id }} --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-{{ base_id }}-disk-0
              qm set {{ base_id }} --ide2 local-lvm:cloudinit
              qm set {{ base_id }} --boot c --bootdisk scsi0
              qm set {{ base_id }} --serial0 socket --vga serial0
              qm set {{ base_id }} --agent enabled=1
              qm set {{ base_id }} --sshkeys /tmp/public-ssh-key.pub
              qm set {{ base_id }} --ciuser {{ auth.user }}
              qm set {{ base_id }} --cipassword {{ auth.password }}

        - name: Remove public-ssh-key
          ansible.builtin.file:
            path: /tmp/public-ssh-key.pub
            state: absent

        - name: Set VM as template
          ansible.builtin.shell: 'qm template {{ base_id }}'
      when: result.rc != 0

    - name: Check if VM IDs in vm_list are already in use
      ansible.builtin.shell: 'qm status {{ item.id }}'
      register: result
      loop: '{{ vm_list }}'
      ignore_errors: true
      failed_when: result.rc == 0
      tags: clone-vm
    - name: Clone VM block
      when: result.rc != 0
      tags: clone-vm
      block:
        - name: Clone VM and resize disk
          ansible.builtin.shell:
            cmd: |
              qm clone {{ base_id }} {{ item.id }} --name {{ item.name }} --full
              qm resize {{ item.id }} scsi0 {{ item.disk_size }}
          loop: '{{ vm_list }}'
        - name: Set CPU configuration
          ansible.builtin.shell:
            cmd: |
              qm set {{ item.id }} --cores {{ item.cores }}
          loop: '{{ vm_list }}'
          when: item.cores is defined
        - name: Set Memory configuration
          ansible.builtin.shell:
            cmd: |
              qm set {{ item.id }} --memory {{ item.memory }}
          loop: '{{ vm_list }}'
          when: item.memory is defined
        - name: Set network settings
          ansible.builtin.shell: 'qm set {{ item.id }} --net0 {{ item.net0 }}'
          loop: '{{ vm_list }}'
          when: item.net0 is defined
        - name: Set IP configuration
          ansible.builtin.shell: 'qm set {{ item.id }} --ipconfig0 {{ item.ipconfig0 }}'
          loop: '{{ vm_list }}'
          when: item.ipconfig0 is defined
 
Code:
- name: "Create Virtual Machine"
  community.general.proxmox_kvm:
    api_host: APIHOST
    api_user: APIUSER
    api_token_id: TOKEN
    api_token_secret: SECRET
    name: NEWNAME
    node: NODE
    newid: 1234
    clone: TEMPLATE
    storage: STORAGE
    state: present

No matter what I try I always get "No VM with name NEWNAME found". When I run the above playbook without the
Code:
clone: TEMPLATE
part a new VM is created.

I am on Proxmox 8.0.3 using ansible-core 2.12.8.
 

Script is working as intended.

If you remove the "clone" line, that is all the code you need to create a VM and the script will assume default configuration choices to run.

If you don't change the "clone:" line to a target VM/ template that exists, the script will fail. When you include "Clone", the script does not infer defaults and is looking to fill in said "defaults" with the information from the target clone VM.

You may want to refer to the spec of the Ansible module you intend to use to see what the implications are regarding clone and default values:
https://docs.ansible.com/ansible/latest/collections/community/general/proxmox_kvm_module.html
 
If you remove the "clone" line, that is all the code you need to create a VM and the script will assume default configuration choices to run.

I am aware of that, but I want to clone an existing template.
If you don't change the "clone:" line to a target VM/ template that exists, the script will fail. When you include "Clone", the script does not infer defaults and is looking to fill in said "defaults" with the information from the target clone VM.

The "clone:" line refers to an existing template but the module complains that there is no VM with that name. And I did read the spec of the module. In fact that's where I took the code from and just replaced it with values from our proxmox environment :cool:
 
I created an issue on GitHub. For some reason I get different results when I run a short Python script using the proxmoxer module on our node (there I see the list of VMs) compared to when I run a task in Ansible using the proxmox_vm_info module (which does not see any VMs). This is related to the proxmox_kvm module not seeing the VMs on the node.
Would be great if someone had an idea what is going wrong here.
 
Hi @thesix, did you check the logs on the Proxmox side? Maybe the ID you're using is not allowed to browse the API.
 
Hi!

When I run these two tasks
YAML:
- name: List VMs on Host
  community.general.proxmox_vm_info:
    api_host: "{{ inventory_hostname }}"
    api_user: "{{ proxmox_api_user }}"
    api_token_id: "{{ proxmox_api_token_id }}"
    api_token_secret: "{{ proxmox_api_token_secret }}"
    node: node

- name: Clone VM
  community.general.proxmox_kvm:
    api_host: "{{ inventory_hostname }}"
    api_user: "{{ proxmox_api_user }}"
    api_token_id: "{{ proxmox_api_token_id }}"
    api_token_secret: "{{ proxmox_api_token_secret }}"
    clone: 2900
    name: zavala
    node: node
    storage: local-vm-storage
    format: qcow2
    full: true

I get this result

Code:
PLAY [Playbook :: Create Virtual Machine on Proxmox] ****************************************************************************************************************************************

TASK [proxmox_vm : List VMs on Host] ********************************************************************************************************************************************************
ok: [node.example.com]

TASK [proxmox_vm : Clone VM] ****************************************************************************************************************************************************************
fatal: [node.example.com]: FAILED! => {"changed": false, "msg": "VM with name = 2900 does not exist in cluster"}

PLAY RECAP **********************************************************************************************************************************************************************************
node.example.com      : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

and this log on node.example.com in /var/log/pveproxy/access.log

Code:
IP - user [10/08/2023:09:57:05 +0200] "GET /api2/json/version HTTP/1.1" 200 72
IP - user [10/08/2023:09:57:05 +0200] "GET /api2/json/nodes HTTP/1.1" 200 352
IP - user [10/08/2023:09:57:05 +0200] "GET /api2/json/nodes/node/qemu HTTP/1.1" 200 467
IP - user [10/08/2023:09:57:05 +0200] "GET /api2/json/nodes/node/lxc HTTP/1.1" 200 11
IP - user [10/08/2023:09:57:06 +0200] "GET /api2/json/version HTTP/1.1" 200 72
IP - user [10/08/2023:09:57:06 +0200] "GET /api2/json/cluster/resources?type=vm HTTP/1.1" 200 543
IP - user [10/08/2023:09:57:06 +0200] "GET /api2/json/cluster/nextid HTTP/1.1" 200 14

When I list the VMs using pvesh I get this:

Code:
root@node:~# pvesh ls /nodes/node/qemu
Dr--d        1600
Dr--d        1666
Dr--d        1670
Dr--d        2000
Dr--d        2700
Dr--d        2900
Dr--d        2910
 
Sorry for the late reply. Thank you so much for the playbooks; they work like a champ. I had to make a few changes and install python3-proxmoxer on my Proxmox server. Overall it works like a champ. I just need to put the passwords in a vault or prompt for them.
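
If anyone wants to automate that dependency as well, a minimal task for it could look like this (sketch; assumes the Proxmox node is Debian-based and reachable as an inventory host):

YAML:
- name: Ensure proxmoxer is available for the community.general.proxmox* modules
  ansible.builtin.apt:
    name: python3-proxmoxer
    state: present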
 
This is resolved. Could be that I had the name and the VMID confused ...
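
For anyone who hits the same error: in community.general.proxmox_kvm the clone parameter matches the source by name, while cloning by VMID means passing the source vmid plus a newid and setting clone to an arbitrary label (this mirrors the working example earlier in the thread). A hedged sketch using the VMIDs from the post above, with the target ID picked arbitrarily:

YAML:
- name: Clone template 2900 by VMID
  community.general.proxmox_kvm:
    api_host: "{{ inventory_hostname }}"
    api_user: "{{ proxmox_api_user }}"
    api_token_id: "{{ proxmox_api_token_id }}"
    api_token_secret: "{{ proxmox_api_token_secret }}"
    node: node
    clone: arbitrary_name   # required to trigger the clone; can be arbitrary when vmid is set
    vmid: 2900              # source VMID (the template)
    newid: 3000             # target VMID, arbitrary example
    name: zavala
    storage: local-vm-storage
    format: qcow2
    full: true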
 
So it's 2024...
None of the above code seems to be working.

I'm looking for somewhat simple and "dumb" code that creates a VM from a template.
And by the way, how do Ansible playbooks work with VMIDs? Do I have to edit the playbook every time I run it with a new VMID?
 
In terms of why someone may be having issues with the Ansible-based scripts:

- By their nature, any cmd blocks with more than one command can lose status errors, as Ansible will only record the status returned by the last command in the script.

So the following will not show an error caused by the first line

Code:
          ansible.builtin.shell:
            cmd: |
              qm clone {{ base_id }} {{ item.id }} --name {{ item.name }} --full
              qm resize {{ item.id }} scsi0 {{ item.disk_size }}

This can be resolved by adding 'set -e' to the list of commands, as that should cause the shell to exit if any command reports an error, so:

Code:
          ansible.builtin.shell:
            cmd: |
              set -e
              qm clone {{ base_id }} {{ item.id }} --name {{ item.name }} --full
              qm resize {{ item.id }} scsi0 {{ item.disk_size }}

A larger issue can be seen with the following Ansible block:

Code:
        - name: Set Disk size if defined
          ansible.builtin.shell: 'qm resize {{ vm.id }} virtio1 {{ vm.disk_size }}'
          when: vm.disk_size is defined
          register: outcome

This returns the following output in the variable outcome

Code:
{
    "msg": {
        "changed": true,
        "cmd": "qm resize 28000 virtio1 10G",
        "delta": "0:00:00.688400",
        "end": "2024-07-20 22:47:16.586228",
        "failed": false,
        "msg": "",
        "rc": 0,
        "start": "2024-07-20 22:47:15.897828",
        "stderr": "disk 'virtio1' does not exist",
        "stderr_lines": [
            "disk 'virtio1' does not exist"
        ],
        "stdout": "",
        "stdout_lines": []
    }
}

So an error is written to stderr (as well as the Proxmox server's task log), but rc is set to zero, so Ansible does not notice that an error has happened. This makes writing scripts using qm a little more complicated.
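
A possible workaround (my own sketch, not from the post above) is to explicitly fail the task whenever qm writes anything to stderr:

Code:
- name: Set Disk size if defined
  ansible.builtin.shell: 'qm resize {{ vm.id }} virtio1 {{ vm.disk_size }}'
  when: vm.disk_size is defined
  register: outcome
  failed_when: outcome.rc != 0 or outcome.stderr | length > 0

Note that a qm call that writes warnings or progress to stderr would also trip this check, so it is best applied to targeted tasks such as the resize above rather than blindly to every qm call.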
 
