Hi,
I'm using an Ubuntu 24.04 cloud image from the official repo. I run virt-customize actions on it and then push the setup to Proxmox via the CLI to make a template.
I need one disk for the system and another disk for Longhorn storage, since this VM will be a Kubernetes worker:
- the cloud-init disk is scsi0
- my system disk is scsi1
- my Longhorn disk is scsi2
Here is my script to prepare my template (not perfect, I still have a few things to fix, but it works).
I store my template on vm-share, which is an NFS storage.
The variables file I source (ubuntu-24.04-variables.sh):
Bash:
PROXMOX_HOST="pve2"
IMAGE_NAME='ubuntu-24.04-minimal-cloudimg-amd64.img'
TYPE='minimal'
RELEASE='noble'
VMID='9001'
VM_TEMPLATE_NAME="ubuntu-24.04-LTS-TPL"
VM_POOL='vm-share'
EFI_VM_POOL='local-lvm'
IMG_URL="https://cloud-images.ubuntu.com/$TYPE/releases/$RELEASE/release/$IMAGE_NAME"
And the main script:
Bash:
#!/bin/bash

variables()
{
    # Load the settings defined in the variables file above
    source ubuntu-24.04-variables.sh
}

download_vm_image()
{
    # Only download the cloud image if it is not already present
    if [ ! -f "$IMAGE_NAME" ]
    then
        wget "$IMG_URL"
    fi
}

prepare_image()
{
    echo -e "Customize image\n"
    virt-customize -a "$IMAGE_NAME" --root-password password:xxxxxxx
    virt-customize -a "$IMAGE_NAME" --install qemu-guest-agent --run-command 'systemctl enable qemu-guest-agent.service'
    virt-customize -a "$IMAGE_NAME" --run-command 'useradd -s /bin/bash admvm'
    # Set the admvm password ('passwd -s /bin/bash admvm' does not set one; --password does)
    virt-customize -a "$IMAGE_NAME" --password admvm:password:xxxxxxx
    virt-customize -a "$IMAGE_NAME" --run-command 'mkdir -p /home/admvm/.ssh && chmod 700 /home/admvm/.ssh'
    virt-customize -a "$IMAGE_NAME" --run-command 'chown -R admvm:admvm /home/admvm'
}

prepare_efi_vm()
{
    echo -e "Prepare VM\n"
    qm create $VMID --memory 2048 --balloon 1 --cores 2 --name $VM_TEMPLATE_NAME --net0 virtio,bridge=vmbr0
    qm set $VMID --scsihw virtio-scsi-pci
    qm set $VMID --bios ovmf --machine q35
    #qm set $VMID -efidisk0 $VM_POOL:0,efitype=4m,pre-enrolled-keys=0,format=raw,size=4m
    qm set $VMID -efidisk0 $EFI_VM_POOL:0,efitype=4m,pre-enrolled-keys=0,format=raw
    qm set $VMID --scsi0 $VM_POOL:cloudinit
    qm set $VMID --serial0 socket --vga serial0
    # Import the customized cloud image as the system disk
    qm set $VMID --scsi1 $VM_POOL:0,import-from=$PWD/$IMAGE_NAME
    # Boot from the imported system disk (order= replaces the legacy '--boot c --bootdisk scsi1' form)
    qm set $VMID --boot order=scsi1
    qm set $VMID --agent enabled=1
    qm template $VMID
}

main()
{
    variables
    download_vm_image
    prepare_image
    prepare_efi_vm
}

main
As I'm using EFI with the OVMF BIOS, I have to use SCSI for the cloud-init drive; IDE no longer works, from what I've read and tested.
After preparing my template on my Proxmox node pve2, I want to create a clone on pve5.
I do this with Terraform and the bpg provider, and everything looks fine until I look more closely inside the VM.
In the Proxmox config the final result is OK: scsi1 and scsi2 are in the right order.
But in the running VM the disks are inverted: checking with dmesg, I can see that the kernel picks up scsi2 first.
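A quick way to see the mismatch (a sketch; 9001 is the VMID from my variables file, and lsblk column support may vary with the version):
Bash:
# On the Proxmox host: the slot mapping as Proxmox sees it
qm config 9001 | grep -E '^(scsi|efidisk|boot)'

# Inside the guest: which kernel device got which size/serial
lsblk -o NAME,SIZE,SERIAL,TYPE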
I searched to see whether other people have the same problem, but it's not easy to find anything.
I can imagine a workaround: have Ansible create a udev rule that labels scsi1 and scsi2, and adjust my fstab accordingly, but it's ugly.
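If I go that way, a less fragile variant would skip the udev rule entirely: Proxmox drive definitions accept a serial= option, and the guest then exposes a stable /dev/disk/by-id link for each serial. A minimal sketch (the serial values and the exact by-id name are illustrative; they should be confirmed with ls -l /dev/disk/by-id inside the guest):
Bash:
# On the host, tag each drive with a serial when attaching it
qm set $VMID --scsi1 $VM_POOL:0,import-from=$PWD/$IMAGE_NAME,serial=sysdisk
# (the Longhorn disk added later by Terraform would get e.g. serial=longhorn)

# In the guest's /etc/fstab, mount by the stable link instead of /dev/sdX:
# /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_longhorn  /var/lib/longhorn  ext4  defaults  0 2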
Previously I was using Ubuntu 22.04 with ide2 for cloud-init and didn't have this sort of problem.
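For comparison, that older setup attached the cloud-init drive on the IDE bus, something like this (illustrative):
Bash:
# Attach the cloud-init drive on the legacy IDE bus (fine with SeaBIOS)
qm set $VMID --ide2 $VM_POOL:cloudinit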
Do you have any ideas or advice?
Thank you.