Configuring Proxmox server with Ansible

Hi,

I've created an account on the server, but I would like to make sure I open all the necessary ports so that I can use Ansible to configure it and SSH to access the console itself.

I'm not speaking of the individual VMs (those are not a problem) but of the server itself.

Can someone refer me to the right document (or comment) about what all the ports and settings should be to enable the above?

Thanks much.

Stuart
 
Ansible itself only needs SSH access. If you use an Ansible module that talks to the REST API from your Ansible client, port 8006 also needs to be accessible. Within a cluster, ports 22 and 8006 are always open. External access needs to be configured explicitly if you enable pve-firewall:
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_pve_firewall
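For reference, a sketch (not an official snippet) of what the datacenter-level rules could look like once pve-firewall is enabled, with 192.0.2.0/24 standing in for your management network:

Code:
# /etc/pve/firewall/cluster.fw (sketch; adjust the source subnet)
[OPTIONS]
enable: 1

[RULES]
# SSH for Ansible and console access
IN SSH(ACCEPT) -source 192.0.2.0/24
# web GUI / REST API
IN ACCEPT -p tcp -dport 8006 -source 192.0.2.0/24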

Hi Fabian, I am able to create a VM with a very simplified proxmox_kvm task within Ansible. That works. But when I try to extend it to
configure more, I am getting a 500 Internal Server Error exception with the string data:

failed: [localhost] (item={'value': {u'node': u'ceph-hv-01', u'memory_size': u'12288', u'template': u'centos7-template', u'cores': u'4', u'ipconfig': u'{"ipconfig0":"ip=172.30.20.191/24,gw=172.30.20.1"}', u'net': u'{"net0":"virtio,bridge=vmbr320"}', u'cloudinit': True}, 'key': u'stu-test-10'}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "stu-test-10", "value": {"cloudinit": true, "cores": "4", "ipconfig": "{\"ipconfig0\":\"ip=172.30.20.191/24,gw=172.30.20.1\"}", "memory_size": "12288", "net": "{\"net0\":\"virtio,bridge=vmbr320\"}", "node": "ceph-hv-01", "template": "centos7-template"}}, "msg": "creation of qemu VM stu-test-10 with vmid 102 failed with exception=500 Internal Server Error: {\"data\":null}"}

Any thoughts on that?
 
Try to find the API request it does, and try that manually, or post it here?
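If you have a shell on the node, one way to replay roughly the same create call manually is pvesh; this is a sketch mirroring the parameters from the failed item, not the module's exact request:

Code:
pvesh create /nodes/ceph-hv-01/qemu \
  --vmid 102 --name stu-test-10 \
  --memory 12288 --cores 4 \
  --net0 virtio,bridge=vmbr320 \
  --scsi0 sf-ceph:16,cache=writeback,discard=on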
 
Here is the complete log from Ansible's point of view and the code.

The repo it is based on is morph027's at https://gitlab.com/morph027/pve-infra-poc

This test has been simplified to be just the create vms code in run.yml in the above repo but with a few lines commented out (explanation below).

This particular version of morph027's run.yml has the "args" line commented out, plus a few more lines, because otherwise it produces a .type error and the play fails at the syntax stage without ever reaching the API. Perhaps this is a result of the lookup in the args line?

If you need more information, such as vars.yml, let me know and I will put that in with the appropriate sections omitted.


Code:
---

- name: deploy virtual machines

  hosts: localhost
  gather_facts: false

  handlers:

  - name: sleep
    pause:
      seconds: 10

  tasks:

  - name: include vars
    include_vars: 'vars.yml'

  - name: create vms
    proxmox_kvm:
      api_user: "{{ api_user }}"
      api_password: "{{ api_password }}"
      api_host: "{{ api_host }}"
      node: "{{ item.value.node }}"
      name: "{{ item.key }}"
      net: '{{ item.value.net | default(defaults.net) }}'
      scsihw: '{{ item.value.scsihw | default(defaults.scsihw) }}'
      virtio: '{{ item.value.virtio | default(defaults.virtio) }}'
      cores: '{{ item.value.cores | default(defaults.cores) }}'
      memory: '{{ item.value.memory_size | default(defaults.memory_size) }}'
      balloon: '{{ item.value.balloon | default(defaults.balloon) }}'
      vga: 'qxl'
      ostype: '{{ item.value.ostype | default(defaults.ostype) }}'
#      args: "{{ lookup('template', '{{ deployments[item.value.type].template }}') | replace('\n', '') }}"
      cpu: '{{ item.value.cpu | default(defaults.cpu) }}'
      state: present
    with_dict: "{{ vms }}"
    loop_control:
      pause: 5
    notify:
      - sleep
#    register: created_vms_pve
#    when: not item.value.cloudinit | default(false) | bool
#
#  - meta: flush_handlers
#    when: created_vms_pve.changed

The run of the above is in the next message since this site has a 10k limit.
 
Here's the run:

Code:
ansible-playbook -vvv runCreateVM3.yml
ansible-playbook 2.9.1
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/home/stuartcracraft/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /home/stuartcracraft/src/venv/local/lib/python2.7/site-packages/ansible
  executable location = /home/stuartcracraft/src/venv/bin/ansible-playbook
  python version = 2.7.15+ (default, Oct  7 2019, 17:39:04) [GCC 7.4.0]
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin

PLAYBOOK: runCreateVM3.yml *********************************************************************************
1 plays in runCreateVM3.yml

PLAY [deploy virtual machines] *****************************************************************************
META: ran handlers

TASK [include vars] ****************************************************************************************
task path: /home/stuartcracraft/src/ansible-proxmox/roles/runCreateVM3.yml:16
ok: [localhost] => {
    "ansible_facts": {
        "api_host": "proxmox",
        "api_password": "XXXXXXXXX",
        "api_user": "ansible@pam",
        "defaults": {
            "balloon": "1024",
            "cores": "2",
            "cpu": "host",
            "memory_size": "2048",
            "net": "{\"net0\":\"virtio,bridge=vmbr0\"}",
            "ostype": "l26",
            "scsihw": "virtio-scsi-pci",
            "sshkeys": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDC6tHm8g0ta4jZ1M6MzfLRL/ZSCzSM4VSL0LRdd4iyiIy9rn8XLz+VPLPlYkpbB20e/OkAI+MvtlyZqZdrBb75QaFJTNs6TZ32kN1tSB2odGdtoz7QIVUoYLZ0nQbKHsLm4o+tja6ixeKv4fLY+5Z1QZviXT7Fm8C7fZmNPpI3LQTK0W+B7kITeHrZc7QTvyG3SsXV76Y5oAQRuf+xJ4aXIoZSQHyqSCYQEaW4gYvf0C+nLJHWoMnNKdQO0JYHLRMUYuGuGV3iyALflCnCRKhYpb2JE6mTq56+Il3uTK8A2akJogyl49y17i+uuchZ+kyxmvhZaKZj3OwWQf2zD55l root@mgmt-01",
            "virtio": "{\"scsi0\":\"sf-ceph:16,cache=writeback,discard=on\"}"
        },
        "deployments": {
            "centos": {
                "initrd": "http://ftp.hosteurope.de/mirror/centos.org/7/os/x86_64/isolinux/initrd.img",
                "kernel": "http://ftp.hosteurope.de/mirror/centos.org/7/os/x86_64/isolinux/vmlinuz",
                "ks": "https://gitlab.com/morph027/pve-infra-poc/raw/master/contrib/kickstart/server.ks",
                "repo": "http://ftp.hosteurope.de/mirror/centos.org/7/os/x86_64/",
                "stage2": "http://ftp.hosteurope.de/mirror/centos.org/7/os/x86_64/",
                "template": "deploy-args-centos.j2"
            },
            "ubuntu": {
                "initrd": "http://download.morph027.de/ubuntu-installer/initrd.gz",
                "kernel": "http://download.morph027.de/ubuntu-installer/linux",
                "preseed": "https://gitlab.com/morph027/pve-infra-poc/raw/master/contrib/preseed/server.seed",
                "template": "deploy-args-ubuntu.j2"
            }
        },
        "timeout": 600,
        "vms": {
            "stu-test-10": {
                "cloudinit": true,
                "cores": "4",
                "ipconfig": "{\"ipconfig0\":\"ip=172.30.20.191/24,gw=172.30.20.1\"}",
                "memory_size": "12288",
                "net": "{\"net0\":\"virtio,bridge=vmbr320\"}",
                "node": "ceph-hv-01",
                "template": "centos7-template"
            }
        }
    },
    "ansible_included_var_files": [
        "/home/stuartcracraft/src/ansible-proxmox/roles/vars.yml"
    ],
    "changed": false
}

TASK [create vms] ******************************************************************************************
task path: /home/stuartcracraft/src/ansible-proxmox/roles/runCreateVM3.yml:19
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: stuartcracraft
<127.0.0.1> EXEC /bin/sh -c 'echo ~stuartcracraft && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/stuartcracraft/.ansible/tmp/ansible-tmp-1575475862.73-217110140921697 `" && echo ansible-tmp-1575475862.73-217110140921697="` echo /home/stuartcracraft/.ansible/tmp/ansible-tmp-1575475862.73-217110140921697 `" ) && sleep 0'
Using module file /home/stuartcracraft/src/venv/local/lib/python2.7/site-packages/ansible/modules/cloud/misc/proxmox_kvm.py
<127.0.0.1> PUT /home/stuartcracraft/.ansible/tmp/ansible-local-323160KxAhB/tmpLp6yZd TO /home/stuartcracraft/.ansible/tmp/ansible-tmp-1575475862.73-217110140921697/AnsiballZ_proxmox_kvm.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/stuartcracraft/.ansible/tmp/ansible-tmp-1575475862.73-217110140921697/ /home/stuartcracraft/.ansible/tmp/ansible-tmp-1575475862.73-217110140921697/AnsiballZ_proxmox_kvm.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/home/stuartcracraft/src/venv/bin/python2 /home/stuartcracraft/.ansible/tmp/ansible-tmp-1575475862.73-217110140921697/AnsiballZ_proxmox_kvm.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/stuartcracraft/.ansible/tmp/ansible-tmp-1575475862.73-217110140921697/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
WARNING: The below traceback may *not* be related to the actual failure.
  File "/tmp/********_proxmox_kvm_payload_OTUqdC/********_proxmox_kvm_payload.zip/********/modules/cloud/misc/proxmox_kvm.py", line 1017, in main
  File "/tmp/********_proxmox_kvm_payload_OTUqdC/********_proxmox_kvm_payload.zip/********/modules/cloud/misc/proxmox_kvm.py", line 740, in create_vm
  File "/home/stuartcracraft/src/venv/local/lib/python2.7/site-packages/proxmoxer/core.py", line 96, in create
    return self.post(*args, **data)
  File "/home/stuartcracraft/src/venv/local/lib/python2.7/site-packages/proxmoxer/core.py", line 87, in post
    return self(args)._request("POST", data=data)
  File "/home/stuartcracraft/src/venv/local/lib/python2.7/site-packages/proxmoxer/core.py", line 79, in _request
    resp.content))
failed: [localhost] (item={'value': {u'node': u'ceph-hv-01', u'memory_size': u'12288', u'template': u'centos7-template', u'cores': u'4', u'ipconfig': u'{"ipconfig0":"ip=172.30.20.191/24,gw=172.30.20.1"}', u'net': u'{"net0":"virtio,bridge=vmbr320"}', u'cloudinit': True}, 'key': u'stu-test-10'}) => {
    "ansible_loop_var": "item",
    "changed": false,
    "invocation": {
        "module_args": {
            "acpi": true,
            "agent": null,
            "api_host": "proxmox",
            "api_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "api_user": "********@pam",
            "args": null,
            "autostart": false,
            "balloon": 1024,
            "bios": null,
            "boot": "cnd",
            "bootdisk": null,
            "clone": null,
            "cores": 4,
            "cpu": "host",
            "cpulimit": null,
            "cpuunits": 1000,
            "delete": null,
            "description": null,
            "digest": null,
            "force": null,
            "format": "qcow2",
            "freeze": null,
            "full": true,
            "hostpci": null,
            "hotplug": null,
            "hugepages": null,
            "ide": null,
            "keyboard": null,
            "kvm": true,
            "localtime": null,
            "lock": null,
            "machine": null,
            "memory": 12288,
            "migrate_downtime": null,
            "migrate_speed": null,
            "name": "stu-test-10",
            "net": {
                "net0": "virtio,bridge=vmbr320"
            },
            "newid": null,
            "node": "ceph-hv-01",
            "numa": null,
            "numa_enabled": null,
            "onboot": true,
            "ostype": "l26",
            "parallel": null,
            "pool": null,
            "protection": null,
            "reboot": null,
            "revert": null,
            "sata": null,
            "scsi": null,
            "scsihw": "virtio-scsi-pci",
            "serial": null,
            "shares": null,
            "skiplock": null,
            "smbios": null,
            "snapname": null,
            "sockets": 1,
            "startdate": null,
            "startup": null,
            "state": "present",
            "storage": null,
            "tablet": false,
            "target": null,
            "tdf": null,
            "template": false,
            "timeout": 30,
            "update": false,
            "validate_certs": false,
            "vcpus": null,
            "vga": "qxl",
            "virtio": {
                "scsi0": "sf-ceph:16,cache=writeback,discard=on"
            },
            "vmid": null,
            "watchdog": null
        }
    },
    "item": {
        "key": "stu-test-10",
        "value": {
            "cloudinit": true,
            "cores": "4",
            "ipconfig": "{\"ipconfig0\":\"ip=172.30.20.191/24,gw=172.30.20.1\"}",
            "memory_size": "12288",
            "net": "{\"net0\":\"virtio,bridge=vmbr320\"}",
            "node": "ceph-hv-01",
            "template": "centos7-template"
        }
    },
    "msg": "creation of qemu VM stu-test-10 with vmid 134 failed with exception=500 Internal Server Error: {\"data\":null}"
}

PLAY RECAP *************************************************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
 
VM creation with those parameters via the API works here:
Code:
curl 'https://PVEHOSTNAME/api2/extjs/nodes/NODENAME/qemu' \
  -H 'CSRFPreventionToken: TOKEN_VALUE' \
  -H 'Content-Type: application/x-www-form-urlencoded; charset=UTF-8' \
  -H 'Cookie: PVEAuthCookie=COOKIE_VALUE' \
  --data 'acpi=1&autostart=0&vmid=101&balloon=1024&boot=cnd&cores=4&cpu=host&cpuunits=1000&kvm=1&memory=12288&name=stu-test-10&net0=virtio%2Cbridge%3Dvmbr320&onboot=1&ostype=l26&scsihw=virtio-scsi-pci&sockets=1&tablet=0&template=0&vga=qxl&scsi0=STORAGE_NAME:16%2Ccache%3Dwriteback%2Cdiscard%3Don' \
  -k -v

Please check your server-side logs for any errors, and try to get the actual POST request out of the Ansible module (maybe the module's creator can help you with that?).
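One way to get at the module's actual request, using Ansible's standard AnsiballZ debugging workflow, is to keep the generated module payload and run it by hand (the paths are the temp files shown in your -vvv output):

Code:
# keep the generated module file instead of deleting it after the run
ANSIBLE_KEEP_REMOTE_FILES=1 ansible-playbook -vvv runCreateVM3.yml

# unpack the payload, then execute it standalone while inspecting/tracing it
python ~/.ansible/tmp/ansible-tmp-*/AnsiballZ_proxmox_kvm.py explode
python ~/.ansible/tmp/ansible-tmp-*/AnsiballZ_proxmox_kvm.py execute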
 
The initial error, with nothing commented out in run.yml (i.e. with the args, register, when, and meta lines all active), is:

TASK [create vms] ************************************************************************************************

Code:
task path: /home/stuartcracraft/src/ansible-proxmox/roles/runCreateVM3.yml:19
fatal: [localhost]: FAILED! => {
    "msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute
'type'\n\nThe error appears to be in '/home/stuartcracraft/src/ansible-proxmox/roles/runCreateVM3.yml': line 19, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to
be:\n\n\n  - name: create vms\n    ^ here\n"
}

Specifically, I believe that's the .type referenced in this line (a possible workaround is sketched after the deployments block below):

args: "{{ lookup('template', '{{ deployments[item.value.type].template }}') | replace('\n', '') }}"

which references this:

# default deployment values
deployments:
  ubuntu:
    kernel: 'http://download.morph027.de/ubuntu-installer/linux'
    initrd: 'http://download.morph027.de/ubuntu-installer/initrd.gz'
    template: 'deploy-args-ubuntu.j2'
    preseed: 'https://gitlab.com/morph027/pve-infra-poc/raw/master/contrib/preseed/server.seed'
  centos:
    kernel: 'http://ftp.hosteurope.de/mirror/centos.org/7/os/x86_64/isolinux/vmlinuz'
    initrd: 'http://ftp.hosteurope.de/mirror/centos.org/7/os/x86_64/isolinux/initrd.img'
    template: 'deploy-args-centos.j2'
    stage2: 'http://ftp.hosteurope.de/mirror/centos.org/7/os/x86_64/'
    repo: 'http://ftp.hosteurope.de/mirror/centos.org/7/os/x86_64/'
    ks: 'https://gitlab.com/morph027/pve-infra-poc/raw/master/contrib/kickstart/server.ks'
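If the intent is just to skip args for VMs that do not define a type, a hedged workaround (my sketch, not morph027's code) is to make the expression conditional. Note that the nested '{{ ... }}' inside lookup() is unnecessary anyway, since that is already a Jinja2 expression:

Code:
# sketch: render the template only when item.value.type exists, otherwise omit the option
args: "{{ lookup('template', deployments[item.value.type].template) | replace('\n', '') if item.value.type is defined else omit }}"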

On the Proxmox server, where are the logs for the curl request? It's just a very vanilla Proxmox server. I want to be sure to collect the right log(s) to get past the server-side error.

I assume somewhere in here:

Code:
/var/log# ls -ltra pve*
-rw-r--r-- 1 root     root     52400 Nov  2 04:26 pveam.log.0
-rw-r----- 1 root     adm        123 Nov 29 14:25 pve-firewall.log.7.gz
-rw-r----- 1 root     adm        124 Nov 30 14:25 pve-firewall.log.6.gz
-rw-r----- 1 root     adm        131 Dec  1 14:25 pve-firewall.log.5.gz
-rw-r----- 1 root     adm        123 Dec  2 14:25 pve-firewall.log.4.gz
-rw-r----- 1 root     adm        125 Dec  3 14:25 pve-firewall.log.3.gz
-rw-r----- 1 root     adm        125 Dec  4 14:25 pve-firewall.log.2.gz
-rw-r--r-- 1 root     root     43230 Dec  5 04:45 pveam.log
-rw-r----- 1 root     adm        179 Dec  5 14:25 pve-firewall.log.1
-rw-r----- 1 root     adm         55 Dec  5 14:25 pve-firewall.log

pve:
total 18
drwxr-xr-x  3 root root   3 Dec 17  2018 .
drwxr-xr-x 18 root root  22 Dec  5 04:45 tasks
drwxr-xr-x 16 root root 104 Dec  5 14:25 ..

pveproxy:
total 4057
-rw-r-----  1 www-data www-data   232982 Nov 29 14:25 access.log.7.gz
-rw-r-----  1 www-data www-data   544933 Nov 30 14:25 access.log.6.gz
-rw-r-----  1 www-data www-data   233037 Dec  1 14:25 access.log.5.gz
-rw-r-----  1 www-data www-data   232993 Dec  2 14:25 access.log.4.gz
-rw-r-----  1 www-data www-data   340042 Dec  3 14:25 access.log.3.gz
-rw-r-----  1 www-data www-data   402941 Dec  4 14:25 access.log.2.gz
drwx------  2 www-data www-data       10 Dec  5 14:25 .
drwxr-xr-x 16 root     root          104 Dec  5 14:25 ..
-rw-r-----  1 www-data www-data 22406109 Dec  5 14:25 access.log.1
-rw-r-----  1 www-data www-data  1400451 Dec  5 16:05 access.log
 

Regarding the curl example above, what should be put in for the TOKEN_VALUE, COOKIE_VALUE, and NODENAME placeholders?
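For what it's worth: NODENAME is just the node's name as shown in the GUI tree (ceph-hv-01 in your case), and the cookie/token values come from a ticket request against the access API, roughly like this (placeholder host and password):

Code:
# the JSON response contains "ticket" (use as the PVEAuthCookie value)
# and "CSRFPreventionToken" (use as-is in the header)
curl -k -d 'username=root@pam' --data-urlencode 'password=PASSWORD' \
  'https://PVEHOSTNAME:8006/api2/json/access/ticket'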
 
This has been addressed and is workable. The fix was simply updating vars.yml from your latest GitLab version of pve-infra-poc: the "type" variable, which was added after our earlier vars.yml, is now seen.

Where we are now: we have used proxmox_kvm to clone an existing VM, but we have been unable to set its IP, gateway, netmask, and nameserver.
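For context, the idea looks roughly like this as a task. This is a sketch only; it assumes a proxmox_kvm version that exposes the cloud-init parameters, which newer community.general releases do and the module shipped with Ansible 2.9 may not:

Code:
- name: clone template and set cloud-init network (sketch)
  proxmox_kvm:
    api_user: "{{ api_user }}"
    api_password: "{{ api_password }}"
    api_host: "{{ api_host }}"
    node: ceph-hv-01
    clone: centos-ansible-tmp
    name: stu-centos-static
    ipconfig:
      ipconfig0: 'ip=172.30.32.100/24,gw=172.30.32.1'
    nameserver: '172.30.20.67'
    state: present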

The latest run looks like this:
Code:
ansible-playbook runMorph.yml --ask-vault-pass
Vault password: XXXXXXXXX


PLAY [deploy virtual machines] ***********************************************************************************************

TASK [include vars] **********************************************************************************************************
ok: [localhost]

TASK [create vms] ************************************************************************************************************
failed: [localhost] (item={'value': {u'node': u'ceph-hv-01', u'ks': u'https://gitlab.example.com/kickstart/centos/raw/minimal-7.2.1511/centos.cfg', u'network': {u'ip': u'172.30.32.100', u'netmask': u'255.255.255.0', u'domainname': u'v20.lw.sourceforge.com', u'gateway': u'172.30.32.1', u'nameserver': u'172.30.20.67'}, u'repo': u'http://centos-cache.example.com/centos/7/os/x86_64', u'stage2': u'http://centos-cache.example.com/centos/7/os/x86_64', u'type': u'centos'}, 'key': u'centos-static'}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "centos-static", "value": {"ks": "https://gitlab.example.com/kickstart/centos/raw/minimal-7.2.1511/centos.cfg", "network": {"domainname": "v20.lw.humblecompany.com", "gateway": "172.30.32.1", "ip": "172.30.32.100", "nameserver": "172.30.20.67", "netmask": "255.255.255.0"}, "node": "ceph-hv-01", "repo": "http://centos-cache.example.com/centos/7/os/x86_64", "stage2": "http://centos-cache.example.com/centos/7/os/x86_64", "type": "centos"}}, "msg": "creation of qemu VM centos-static with vmid 137 failed with exception=500 Internal Server Error: {\"data\":null}"}

PLAY RECAP *******************************************************************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

We are successfully connecting as root@pam to the Proxmox server with ansible-vault, but we are still encountering the above unfortunate error.
We had thought that not going in as root@pam might have been the cause, but it turned out not to be...

Has anyone else seen the above?

(This is basically the run.yml and vars.yml combination from the pve-infra-poc GitLab repo with only a few local trial updates.
I have gone to the Proxmox server and reviewed the logs but see nothing special. It is running the latest proxmox_kvm.py, which is in library/,
so I am quite mystified at this point. Anyone else run into this yet?)

Stuart

P.S. How have people debugged/reviewed such 500 Internal Server Errors on the Proxmox server?
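(On the P.S.: the API daemons log via the journal/syslog, so on the node something like the following usually shows the real error text behind a 500; task-based operations additionally log under /var/log/pve/tasks/.)

Code:
journalctl -u pvedaemon -u pveproxy --since "10 minutes ago"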
 

/var/log/pveproxy/access.log:

Code:
10.138.0.17 - root@pam [10/12/2019:21:25:28 +0000] "GET /api2/json/version HTTP/1.1" 200 79
10.138.0.17 - root@pam [10/12/2019:21:25:28 +0000] "GET /api2/json/cluster/nextid HTTP/1.1" 200 14
10.138.0.17 - root@pam [10/12/2019:21:25:28 +0000] "GET /api2/json/cluster/resources?type=vm HTTP/1.1" 200 6381
10.138.0.17 - root@pam [10/12/2019:21:25:28 +0000] "GET /api2/json/cluster/resources?type=vm HTTP/1.1" 200 6408
10.138.0.17 - root@pam [10/12/2019:21:25:28 +0000] "GET /api2/json/nodes HTTP/1.1" 200 742
10.138.0.17 - root@pam [10/12/2019:21:25:29 +0000] "POST /api2/json/nodes/ceph-hv-01/qemu HTTP/1.1" 500 13

So not much info there...

Also, the author of the proxmox_kvm module is not listed, insofar as I can see, at:

https://docs.ansible.com/ansible/latest/modules/proxmox_kvm_module.html

and my drilldown into the qemu code was fruitless.
 

Authors
  • Abdoul Bah (@helldorado) <bahabdoul at gmail.com>

Right at the bottom ;)
 
As I'm using this module a lot, I was co-authoring it, and meanwhile we've found something (which was my first impression anyway ;)): it's just the storage, which does not exist. We used qm create with all the options to debug further:

Code:
qm create 134 --scsi0 sf-ceph:16,cache=writeback,discard=on --net0 virtio,bridge=vmbr322 \
  --bootdisk virtio0 --ostype l26 --memory 2048 --cores 2 \
  --args '-kernel /tmp/centos-vmlinuz-initrd /tmp/centos-initrd.img -append "inst.stage2=http://centos-cache.example.com/centos/7/os/x86_64 inst.repo=http://centos-cache.example.com/centos/7/os/x86_64 inst.ks=https://gitlab.example.com/kickstart/centos/raw/minimal-7.2.1511/centos.cfg rd.noverifyssl net_cfg=#network --device=link --bootproto=static --ip=172.30.32.100 --netmask=255.255.255.0 --gateway=172.30.32.1 --nameserver=172.30.20.67 --noipv6 --onboot yes --hostname=stu-centos-static"'
storage 'sf-ceph' does not exists
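To see which storage IDs actually exist on a node before pointing a disk at one:

Code:
pvesm status          # storage IDs, types and usage on this node
pvesh get /storage    # the same information via the API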
 

It might make sense to make the module return the full information returned by the API (at least at some level of '-vvv')?

If I create a VM via the API on a non-existent storage, I get the following back:
Code:
< HTTP/1.1 200 OK
< Cache-Control: max-age=0
< Connection: close
< Connection: Keep-Alive
< Date: Fri, 13 Dec 2019 08:05:44 GMT
< Pragma: no-cache
< Server: pve-api-daemon/3.0
< Content-Length: 91
< Content-Type: application/json;charset=UTF-8
< Expires: Fri, 13 Dec 2019 08:05:44 GMT
<
* Closing connection 0
{"status":500,"data":null,"message":"storage 'doesnotexist' does not exists\n","success":0}
 
Remarkably, after doing the above:

Code:
pip install -U git+https://github.com/ssi444/proxmoxer@master

and receiving this confirmation that it reinstalled:

Code:
pip install -U git+https://github.com/ssi444/proxmoxer@master
DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7. More details about Python 2 support in pip, can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support
Collecting git+https://github.com/ssi444/proxmoxer@master
  Cloning https://github.com/ssi444/proxmoxer (to revision master) to /tmp/pip-req-build-LMdnZH
  Running command git clone -q https://github.com/ssi444/proxmoxer /tmp/pip-req-build-LMdnZH
  Running command git checkout -b master --track origin/master
  Switched to a new branch 'master'
  Branch 'master' set up to track remote branch 'master' from 'origin'.
Building wheels for collected packages: proxmoxer
  Building wheel for proxmoxer (setup.py) ... done
  Created wheel for proxmoxer: filename=proxmoxer-1.0.3-cp27-none-any.whl size=16611 sha256=99ac84633a459f62ec15f5db4ec174afaac9fd51cd570ab4a26afd56a829bba1
  Stored in directory: /tmp/pip-ephem-wheel-cache-ZsvVCF/wheels/2a/39/c4/2d20895f37cfd9f9f9cab02328b985ef4abf2a2710573460e1
Successfully built proxmoxer
Installing collected packages: proxmoxer
  Found existing installation: proxmoxer 1.0.3
    Uninstalling proxmoxer-1.0.3:
      Successfully uninstalled proxmoxer-1.0.3
Successfully installed proxmoxer-1.0.3

the outcome is improved: there are no more 500 server-side errors on the Proxmox hypervisor, although the instance is non-bootable and the Cloud-Init section in the Proxmox GUI is empty (see attachment).

The code section in morph's vars.yml is:

Code:
  stu-centos-static:
    node: 'ceph-hv-01'
    clone: centos-ansible-tmp
    type: 'centos'
    network:
       ip: '172.30.32.100'
       netmask: '255.255.255.0'
       gateway: '172.30.32.1'
       nameserver: '172.30.20.67'
       domainname: 'v123.lw.agoodcompany.com'

So note that it is cloning the centos-ansible-tmp template.

In this case, would that template need to have all the Cloud-Init setup from the Cloud-Init Support and Cloud-Init FAQ pages at https://pve.proxmox.com/wiki (specifically the former)?
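For comparison, the wiki's template preparation boils down to roughly the following on the node (a sketch; the VMIDs and the storage name are placeholders):

Code:
# give the template VM a cloud-init drive and a serial console
qm set 9000 --ide2 local-lvm:cloudinit
qm set 9000 --boot c --bootdisk scsi0
qm set 9000 --serial0 socket --vga serial0
# after cloning, per-VM network settings can be injected, e.g.:
qm set 123 --ipconfig0 ip=172.30.32.100/24,gw=172.30.32.1 --nameserver 172.30.20.67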

Thanks morph/fabian.
 

Attachments

  • Screen Shot 2019-12-13 at 10.01.00 AM.png
Understood.

I used this, but the screenshots attached to this post show repeated failed kickstart reboots for a CentOS static instance with hardwired ip/netmask/gateway/nameserver/domainname, using the stage2/repo/ks from the example.

Does anyone have current stage2/repo/ks URLs to replace centos-cache.example.com and gitlab.example.com? These are probably terribly out of date.

Code:
  stu-centos-static:
    node: 'ceph-hv-01'
    type: 'centos'
    network:
       ip: '172.30.32.100'
       netmask: '255.255.255.0'
       gateway: '172.30.32.1'
       nameserver: '172.30.20.67'
       domainname: 'v20.lw.agoodcompany.com'
    stage2: 'http://centos-cache.example.com/centos/7/os/x86_64'
    repo: 'http://centos-cache.example.com/centos/7/os/x86_64'
    ks: 'https://gitlab.example.com/kickstart/centos/raw/minimal-7.2.1511/centos.cfg'
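As a hedged pointer only: those example.com hosts are placeholders, and at the time of this thread the public CentOS 7 tree lived at mirror.centos.org, so the stage2/repo lines could point there instead (the kickstart file still has to be one you host yourself):

Code:
    stage2: 'http://mirror.centos.org/centos/7/os/x86_64'
    repo: 'http://mirror.centos.org/centos/7/os/x86_64'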
 

Attachments

  • shot1.png
  • shot2.png
