[SOLVED] Creating clone using terraform always results in unused disk

damjank

Active Member
Apr 2, 2020
Hello,

I have created a VM template with Packer. The template works properly: through the PVE GUI I can use the prepared cloud-init config and deploy it perfectly. However, using Terraform with the Telmate/proxmox provider version 3.0.1-rc1, I can create a full clone without issue, but the cloned disk always ends up as an unused disk instead of being attached to the VM. I have fiddled with it for some time but just cannot get it to work properly. If I manually attach that cloned disk everything works, but then that is not automation. Any help?

Here is the config:

Code:
# Create the wef VM
resource "proxmox_vm_qemu" "wef" {

  name        = "wef"
  target_node = "pxh1"
  tags        = "test"
  vmid        = "501"

  # Clone from the ubuntu-server-focal cloud-init template
  clone = "ubuntu-server-focal-template"
  os_type = "cloud-init"

  # Cloud-init options
  cloudinit_cdrom_storage = "local-lvm"

  ipconfig0 = "ip=dhcp"

  memory       = 8192
  agent        = 1
  sockets      = 1
  cores        = 8

  # Set the boot disk parameters
  bootdisk     = "virtio"
  scsihw       = "virtio-scsi-single"

  disk {
    slot            = 0
    size            = "40G"
    type            = "virtio"
    storage         = "local-lvm"
    ssd             = 1
    discard         = "on"
  } # end disk

  # Set the network
  network {
    model = "virtio"
    bridge = "vmbr0"
  } # end first network block

  # Ignore changes to the network
  ## MAC address is generated on every apply, causing
  ## TF to think this needs to be rebuilt on every apply
  lifecycle {
     ignore_changes = [
       network
     ]
  } # end lifecycle

  # Cloud-init params
  #ipconfig0 = "ip=192.168.2.100/24,gw=192.168.2.1"

  #ssh_user = "ubuntu"
  #sshkeys  = var.ssh_keys
} # end proxmox_vm_qemu wef resource declaration

And the result:
Screenshot 2024-02-19 at 21.42.05.png
 
Proxmox plugin from Telmate/proxmox version 3.0.1-rc1
I am always surprised that the author/maintainer of a plugin is not the first one asked about the functionality of that plugin.

Admittedly I have not used this plugin. That said, the configuration you posted defines a disk, but says nothing about how that disk should be attached to the VM. Yes, there is a SCSI controller, but no link between the disk and the controller is defined, i.e. there is no equivalent of: qm set [id] --virtio0 disk-name

Code:
      --virtio[n] [file=]<volume> [,aio=<native|threads|io_uring>] [,backup=<1|0>] [,bps=<bps>] [,bps_max_length=<seconds>] [,bps_rd=<bps>] [,bps_rd_max_length=<seconds>] [,bps_wr=<bps>]
       [,bps_wr_max_length=<seconds>] [,cache=<enum>] [,cyls=<integer>] [,detect_zeroes=<1|0>] [,discard=<ignore|on>] [,format=<enum>] [,heads=<integer>] [,import-from=<source volume>] [,iops=<iops>]
       [,iops_max=<iops>] [,iops_max_length=<seconds>] [,iops_rd=<iops>] [,iops_rd_max=<iops>] [,iops_rd_max_length=<seconds>] [,iops_wr=<iops>] [,iops_wr_max=<iops>] [,iops_wr_max_length=<seconds>]
       [,iothread=<1|0>] [,mbps=<mbps>] [,mbps_max=<mbps>] [,mbps_rd=<mbps>] [,mbps_rd_max=<mbps>] [,mbps_wr=<mbps>] [,mbps_wr_max=<mbps>] [,media=<cdrom|disk>] [,replicate=<1|0>]
       [,rerror=<ignore|report|stop>] [,ro=<1|0>] [,secs=<integer>] [,serial=<serial>] [,shared=<1|0>] [,size=<DiskSize>] [,snapshot=<1|0>] [,trans=<none|lba|auto>] [,werror=<enum>]
           Use volume as VIRTIO hard disk (n is 0 to 15). Use the special syntax STORAGE_ID:SIZE_IN_GiB to allocate a new volume. Use STORAGE_ID:0 and the import-from parameter to import from an
           existing volume.
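
For example, to reattach such an unused disk by hand (the volume name below is hypothetical; check the actual name with "qm config 501" or in the VM's hardware tab):

Code:
# Attach the existing (unused) volume to VM 501 as its first VirtIO disk.
# "local-lvm:vm-501-disk-0" is an assumed volume name - adjust to your setup.
qm set 501 --virtio0 local-lvm:vm-501-disk-0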



Hello,
I have the same issue. Would you please share the solution you found?
Thanks
I researched this issue and found that it is very important to define the disk on the same bus/device type (SCSI, VirtIO, IDE) that the template uses.

In my case the template used SCSI, and once I defined the disks accordingly, everything worked perfectly.
I used the following code:

Code:
disks {
  scsi {
    scsi0 {
      disk {
        size = "10G"
        storage = "VM"
      }
    }
  }
}


I used the following provider version:
Code:
proxmox = {
  source  = "telmate/proxmox"
  version = "3.0.1-rc3"
}
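
Applied to the resource from the first post, a minimal sketch might look like this (assuming the template's boot disk sits on scsi0; match whatever bus and slot your template actually uses):

Code:
# Sketch only: the original "wef" resource rewritten for the 3.0.x nested
# "disks" syntax. The scsi0 slot is an assumption and must match the bus/slot
# of the disk inside the template, or the clone's disk ends up unused again.
resource "proxmox_vm_qemu" "wef" {
  name        = "wef"
  target_node = "pxh1"
  tags        = "test"
  vmid        = 501

  clone   = "ubuntu-server-focal-template"
  os_type = "cloud-init"

  cloudinit_cdrom_storage = "local-lvm"
  ipconfig0               = "ip=dhcp"

  memory  = 8192
  agent   = 1
  sockets = 1
  cores   = 8

  scsihw = "virtio-scsi-single"

  disks {
    scsi {
      scsi0 {
        disk {
          size    = "40G"
          storage = "local-lvm"
        }
      }
    }
  }

  network {
    model  = "virtio"
    bridge = "vmbr0"
  }
}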
 

