[SOLVED] Proxmox VE Problems Cloud Init Terraform

nozomi

New Member
Apr 22, 2025
Hi. I'm having problems cloning my template with Terraform. I created the template following this video: https://www.youtube.com/watch?v=MJgIm03Jxdo&ab_channel=LearnLinuxTV
The template itself works fine: in the Proxmox GUI I can clone as many VMs as I want and they work perfectly. However, when I clone it with Terraform, the result either has no cloud-init, or the console can't connect to the server. My provider is Telmate, version 3.0.1-rc8.

I appreciate any help I could get.

Thank you,


Code:
resource "proxmox_vm_qemu" "VM" {
  name         = "VM"
  target_node  = var.proxmox_node
  clone        = var.templateVM
  vmid         = 400
  full_clone   = true
  agent        = 1
  os_type      = "cloud-init"
  cores        = 2
  memory       = 2048
  scsihw       = "virtio-scsi-single"
  disk {
    size     = "32G"
    slot     = "scsi0"
    storage  = "secondHDD"
    //discard  = false
  }
  network {
    id         = 0
    model      = "virtio"
    bridge     = "vmbr0"
    firewall   = false
    link_down  = false # set to true to disable the NIC at boot
  }
  ciuser = var.user
  cipassword = var.password
  ipconfig0   = "ip=192.168.128.10/24,gw=192.168.128.1"
  sshkeys = <<EOF
  ${var.ssh_public_key}
  EOF
}

I'd even say that at first it clones correctly, but then something happens that makes it stop working. Also, when I run terraform apply it never finishes applying...
 
Hi @nozomi, welcome to the forum.

Tools like Terraform, Ansible, etc., that manipulate VMs externally are developed and maintained by third parties. Their configuration options don’t always translate directly to native PVE configurations. Some entries can trigger multiple API calls under the hood, and may not map one-to-one.

To understand what's happening, you'll want to either increase verbosity/debug logging in Terraform or trace the API activity directly. Watching logs on the PVE side might help too, e.g. "journalctl -f".
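For example, a quick way to capture both sides (TF_LOG and TF_LOG_PATH are standard Terraform environment variables; the log file name here is just an example):

Code:
# Capture Terraform's full provider/API traffic to a file
TF_LOG=DEBUG TF_LOG_PATH=./tf-debug.log terraform apply

# Meanwhile, on the PVE node, watch the corresponding API activity
journalctl -f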

Another option is to start with the most minimal working config and build up from there, step-by-step.
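As a sketch, something like this clone-only resource is about as minimal as it gets (node and template names are placeholders, not taken from your setup):

Code:
resource "proxmox_vm_qemu" "minimal" {
  name        = "clone-test"
  target_node = "pve"             # placeholder node name
  clone       = "ubuntu-template" # placeholder template name
  full_clone  = true
}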

As an example of a provider-specific configuration option: PVE doesn't have an os_type field. There is an "ostype" field, and "cloud-init" isn't a valid value for it.
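You can see what a template's native configuration actually contains on the PVE node, for example:

Code:
# Dump the native PVE config of a VM/template (substitute your template's VMID)
qm config <vmid>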



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I now followed the "Preparing Cloud-Init Templates" section at https://pve.proxmox.com/wiki/Cloud-Init_Support. I used the cloud image https://cloud-images.ubuntu.com/min...lease/ubuntu-22.04-minimal-cloudimg-amd64.img. The template VMID is 9000. This is the updated code:

Code:
resource "proxmox_vm_qemu" "VM" {
  name         = "VM"
  target_node  = var.proxmox_node
  clone        = "VM 9000"
  vmid         = 410
  full_clone   = true

  agent        = 1

  cores        = 2
  memory       = 2048
  os_type      = "cloud-init"

  scsihw       = "virtio-scsi-pci" # Matches your template
  cpu_type     = "host"
  vcpus        = 0

  disks {
    ide {
      ide2 {
        cloudinit {
          storage = "local-lvm"
        }
      }
    }

    scsi {
      scsi0 {
        disk {
          size         = 32
          storage      = "secondHDD"
          cache        = "writeback"
          iothread     = true
          discard      = true
        }
      }
    }
  }

  network {
    id         = 0
    model      = "virtio"
    bridge     = "vmbr0"
  }

  # ciuser = var.user
  # cipassword = var.password

  ipconfig0   = "ip=192.168.128.10/24,gw=192.168.128.1"

  sshkeys = <<EOF
  ${var.ssh_public_key}
  EOF

}

It now works. I based my template on https://github.com/Telmate/terraform-provider-proxmox/blob/master/examples/cloudinit_example.tf. The only thing I don't understand yet is why the console in the Proxmox VE GUI can't connect to the server, while SSH from outside (ssh user@ip) works.

I'm not sure what I could do to make the GUI console work, or how to improve the security of the VM, since I want to make it a honeypot. If anyone has any suggestions I'd appreciate it a lot. :)
 
The only thing I don't understand yet is why the console in the Proxmox VE GUI can't connect to the server, while SSH from outside (ssh user@ip) works.
The GUI opens a console window that connects to a serial interface, not SSH. Your TF config has no mention of a serial console. Perhaps your VM template does; we don't know, since you haven't posted what you are actually cloning.

Now that you've created a template, clone it manually via PVE, boot the clone, and see if it works to your satisfaction. If something isn't working, keep iterating on it, without TF, until you are happy. Once you know the manual clone works, the TF clone should work too.
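From the PVE shell that would be, for example (9000 = your template, 411 = any free VMID):

Code:
# Full clone of the template, equivalent to the GUI "Clone" action
qm clone 9000 411 --name clone-test --full
qm start 411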

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
The only thing I don't understand yet is why the console in the Proxmox VE GUI can't connect to the server, while SSH from outside (ssh user@ip) works.

You need to add a serial port for the Proxmox UI console:
Code:
  serial {
    id   = 0
    type = "socket"
  }
  vga {
    type = "serial0"
  }

I used this tutorial to create the template: https://www.learnlinux.tv/proxmox-ve-how-to-build-an-ubuntu-22-04-template-updated-method/

You can also add the serial terminal to an already deployed VM by running the following command on the Proxmox node (replace 9001 with the VM's ID):

Code:
qm set 9001 --serial0 socket --vga serial0
 
I got it running with this:

Code:
# ~/terraform/proxmox/main.tf

resource "proxmox_vm_qemu" "vm" {
  for_each = var.virtual_machines

  # --- VM Naming and Placement ---
  name        = each.key
  target_node = each.value.target_node
  onboot      = true

  # --- Template ---
  clone = "ubuntu-template-qemu"

  # --- Hardware Resources ---
  cpu {
    cores = each.value.cpu_cores
    type  = "qemu64"
  }
  memory = each.value.memory
  scsihw = "virtio-scsi-single"

  # --- Disk Configuration ---
  # Block for the main OS disk
  disk {
    slot    = "scsi0"
    size    = "${each.value.hdd_size}G"
    type    = "disk"
    storage = "local-lvm"
  }
  # Explicitly define the Cloud-Init drive to guarantee it exists.
  disk {
    slot    = "ide0"
    type    = "cloudinit"
    storage = "local-lvm"
  }

  # --- Network Configuration ---
  network {
    id      = 0
    model   = "virtio"
    bridge  = "vmbr0"
    macaddr = each.value.mac_address
  }

  # --- Cloud-Init and Console Configuration ---
  os_type   = "cloud-init"
  ciuser    = "boldturtle"
  ipconfig0 = "ip=10.0.0.50/24,gw=10.0.0.1"
  sshkeys   = var.ssh_key

  # --- Critical settings ---
  agent = 1
  # The boot order MUST reference both the OS disk and the Cloud-Init disk.
  boot  = "order=scsi0;ide0"

  serial {
    id   = 0
    type = "socket"
  }

  vga {
    type = "serial0"
  }
}

Code:
# ~/terraform/proxmox/vars.tf

variable "proxmox_host" {
  description = "The IP address or FQDN of your Proxmox host."
  default     = "10.0.0.16"
}

variable "ssh_key" {
  description = "The public SSH key to install on new VMs."
  default     = "ssh-ed25519 my-ssh-puh-key"
}

variable "virtual_machines" {
  description = "A map of virtual machines for the Wazuh cluster."
  type        = map(any)
  default = {
    // --- Wazuh Indexer Nodes (3) ---
    "wazuh-indexer-1" = {
      vmid         = 201
      mac_address = "BC:24:11:00:02:01"
      ip = "10.0.0.61"
      target_node = "max-alpha"
      cpu_cores   = 2
      memory      = 4096 // 4GB RAM
      hdd_size    = 32
     
    },
    "wazuh-indexer-2" = {
      vmid         = 202
      mac_address = "BC:24:11:00:02:02"
      ip = "10.0.0.62"
      target_node = "max-beta"
      cpu_cores   = 2
      memory      = 4096
      hdd_size    = 32
    },

    "wazuh-indexer-3" = {
      mac_address = "BC:24:11:00:02:03"
      ip = "10.0.0.63"
      target_node = "max-beta"
      cpu_cores   = 2
      memory      = 4096
      hdd_size    = 32
    },

    // --- Wazuh Server Nodes (2) ---
    "wazuh-server-1" = {
      mac_address = "BC:24:11:00:02:04"
      ip = "10.0.0.64"
      target_node = "max-alpha"
      cpu_cores   = 2
      memory      = 4096
      hdd_size    = 32
    },
    "wazuh-server-2" = {
      mac_address = "BC:24:11:00:02:05"
      ip = "10.0.0.65"
      target_node = "max-beta"
      cpu_cores   = 2
      memory      = 4096
      hdd_size    = 32
    },

    // --- Wazuh Dashboard Nodes (2) ---
    "wazuh-dashboard-1" = {
      mac_address = "BC:24:11:00:02:06"
      ip = "10.0.0.66"
      target_node = "max-alpha"
      cpu_cores   = 2
      memory      = 4096
      hdd_size    = 32
    },
    "wazuh-dashboard-2" = {
      mac_address = "BC:24:11:00:02:07"
      ip = "10.0.0.67"
      target_node = "max-beta"
      cpu_cores   = 2
      memory      = 4096
      hdd_size    = 32
    },
    "redis-1" = {
      mac_address = "BC:24:11:00:02:08"
      ip = "10.0.0.68"
      target_node = "max-beta"
      cpu_cores   = 2
      memory      = 4096
      hdd_size    = 32
    }


  }
}


Code:
# ~/terraform/proxmox/provider.tf

terraform {
  required_providers {
    proxmox = {
      source  = "Telmate/proxmox"
      version = "3.0.2-rc01"
    }
  }
}
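For completeness: the provider "proxmox" connection block isn't shown here. A hypothetical sketch using var.proxmox_host from vars.tf could look like this (the token variables are assumptions, not part of the posted config):

Code:
provider "proxmox" {
  pm_api_url          = "https://${var.proxmox_host}:8006/api2/json"
  pm_api_token_id     = var.proxmox_token_id     # assumed variable
  pm_api_token_secret = var.proxmox_token_secret # assumed variable
  pm_tls_insecure     = true # common with self-signed certs
}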
 
The only thing I don't understand yet is why the console in the Proxmox VE GUI can't connect to the server, while SSH from outside (ssh user@ip) works.
Try using the "std" (standard VGA) display type instead; then the GUI console should work.
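In the Telmate provider that would be, for example:

Code:
  vga {
    type = "std"
  }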