SCSI disks inverted on Ubuntu 24.04 cloud-init image

S3LL1G82

New Member
May 14, 2024
Hi,

I'm using an Ubuntu 24.04 cloud image from the official repo. I run virt-customize actions on it and then push the setup to Proxmox via the CLI to make a template.

I need one disk for the system and another disk for my Longhorn storage, as this VM will be a Kubernetes worker.

- cloud init disk is scsi0
- my system disk is scsi1
- and my longhorn disk is scsi2

Here is my script to prepare the template (not perfect, I still have a few things to correct, but it works):

I store my template on vm-share, which is NFS storage.

My sourced variables file (ubuntu-24.04-variables.sh):

Bash:
PROXMOX_HOST="pve2"
IMAGE_NAME='ubuntu-24.04-minimal-cloudimg-amd64.img'
TYPE='minimal'
RELEASE='noble'
VMID='9001'
VM_TEMPLATE_NAME="ubuntu-24.04-LTS-TPL"
VM_POOL='vm-share'
EFI_VM_POOL='local-lvm'
IMG_URL="https://cloud-images.ubuntu.com/$TYPE/releases/$RELEASE/release/$IMAGE_NAME"


Bash:
#!/bin/bash

variables()
{
source ubuntu-24.04-variables.sh
}

download_vm_image()
{
if [ ! -f $IMAGE_NAME ]
then
wget $IMG_URL
fi
}

prepare_image()
{
echo -e "customize image\n"
virt-customize -a $IMAGE_NAME --root-password password:xxxxxxx
virt-customize -a $IMAGE_NAME --install qemu-guest-agent --run-command 'systemctl enable qemu-guest-agent.service'
virt-customize -a $IMAGE_NAME --run-command 'useradd -s /bin/bash admvm'
# 'passwd -s /bin/bash admvm' is not a valid passwd invocation; to set the user's password use e.g.:
# virt-customize -a $IMAGE_NAME --password admvm:password:xxxxxxx
virt-customize -a $IMAGE_NAME --run-command 'mkdir -p /home/admvm/.ssh && chmod 700 /home/admvm/.ssh'
virt-customize -a $IMAGE_NAME --run-command 'chown -R admvm:admvm /home/admvm'
}

prepare_efi_vm()
{
echo -e "Prepare VM\n"
qm create $VMID --memory 2048 --balloon 1 --cores 2 --name $VM_TEMPLATE_NAME --net0 virtio,bridge=vmbr0
qm set $VMID --scsihw virtio-scsi-pci
qm set $VMID --bios ovmf --machine q35
#qm set $VMID -efidisk0 $VM_POOL:0,efitype=4m,pre-enrolled-keys=0,format=raw,size=4m
qm set $VMID -efidisk0 $EFI_VM_POOL:0,efitype=4m,pre-enrolled-keys=0,format=raw
qm set $VMID --scsi0 $VM_POOL:cloudinit
qm set $VMID --serial0 socket --vga serial0
qm set $VMID --scsi1 $VM_POOL:0,import-from=$PWD/$IMAGE_NAME
# 'order=' is the current syntax; the legacy '--boot c --bootdisk' form would be redundant
qm set $VMID --boot order='scsi1'
qm set $VMID --agent enabled=1
qm template $VMID
}



main()
{
variables
download_vm_image
prepare_image
prepare_efi_vm
}


main
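
Side note on the prepare_image function: each virt-customize call boots a separate libguestfs appliance, so chaining all the operations into a single invocation is faster. A minimal sketch, reusing the same user name and commands as the script above (the password is still a placeholder):

Bash:
# One virt-customize invocation instead of several separate appliance boots.
# User name and commands mirror the script above; the password is a placeholder.
virt-customize -a "$IMAGE_NAME" \
  --root-password password:xxxxxxx \
  --install qemu-guest-agent \
  --run-command 'systemctl enable qemu-guest-agent.service' \
  --run-command 'useradd -s /bin/bash admvm' \
  --run-command 'mkdir -p /home/admvm/.ssh && chmod 700 /home/admvm/.ssh' \
  --run-command 'chown -R admvm:admvm /home/admvm'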

As I'm using EFI with the OVMF BIOS, I need to use SCSI for the cloud-init drive; IDE no longer works, from what I have read and tested.


After preparing my template on my Proxmox node pve2, I want to create a clone on pve5.

I do this with Terraform and the bpg provider, and everything looks OK until I take a closer look inside the VM.

The final result looks OK: scsi1 and scsi2 are in the right order.



[Screenshot: Proxmox hardware view showing scsi1 and scsi2 in the expected order]



But, inside the running VM, the disks are inverted. :(

[Screenshot: disk layout inside the running VM, with the disks swapped]


If we check with dmesg, we can see that the scsi2 disk is detected first. :(

[Screenshot: dmesg output showing the scsi2 disk detected first]
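
For reference, the mapping can also be checked from inside the guest without dmesg. A small sketch, assuming a virtio-scsi controller as in the script above (nothing here is taken from the screenshots):

Bash:
# The by-path symlinks name each disk by its SCSI address (host:channel:target:lun)
# instead of sdX, so they stay stable even when /dev/sda and /dev/sdb swap places.
ls -l /dev/disk/by-path/
# HCTL prints the same SCSI address next to each block device, together with its size.
lsblk -o NAME,HCTL,SIZE,MOUNTPOINT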


I searched to see whether other people have hit the same problem, but it is not easy to find.

I can imagine a workaround: have Ansible create a udev rule to add labels for the scsi1 and scsi2 disks and adjust my fstab, but it's ugly (see the sketch below).
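
A simpler variant of that idea is to put a filesystem label on the data disk and mount it by label, which needs no udev rule at all. A minimal sketch, assuming the Longhorn disk currently shows up as /dev/sdb; the device, label and mount point are example values, not taken from my setup:

Bash:
# Label the Longhorn data disk and mount it by label, so the fstab entry
# no longer depends on the /dev/sdX probe order. Device, label and mount
# point below are example values.
mkfs.ext4 -L longhorn /dev/sdb
mkdir -p /var/lib/longhorn
echo 'LABEL=longhorn /var/lib/longhorn ext4 defaults 0 2' >> /etc/fstab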

Before, I was using Ubuntu 22.04 with ide2 for cloud-init and didn't have this sort of problem.

Do you have an idea or any advice?

thank you.
 
I deliberately tried putting cloud-init on ide2: the order is then correct, but the network does not come up. :(
I will try putting cloud-init on scsi10, my OS disk on scsi0 and the Longhorn one on scsi1.
 
With scsi10 for cloud-init, it works. So I would say scsi0 for the OS disk is necessary.
It's a workaround, and I'm still asking what the proper way is to get all the disks in the right order. A sketch of the adjusted qm commands is below.
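
For anyone landing here, the adjusted template creation would look roughly like this: a sketch based on the prepare_efi_vm function above, with the OS image imported as scsi0 and the cloud-init drive parked on scsi10 as in the workaround.

Bash:
# Same qm commands as in prepare_efi_vm, but the OS image takes the lowest SCSI ID
# and the cloud-init drive moves to a high slot, so the data disk added later by
# Terraform (the Longhorn one) can sit at scsi1.
qm set $VMID --scsi0 $VM_POOL:0,import-from=$PWD/$IMAGE_NAME
qm set $VMID --scsi10 $VM_POOL:cloudinit
qm set $VMID --boot order='scsi0'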
 
Hi @S3LL1G82, welcome to the forum.

Back in the day, when I had an actual SCSI card with SCSI devices and IDs set via jumpers, I would never have put a CD-ROM at ID 0.

I did not look at the actual code, but it's not unreasonable to expect the virtualization emulation to behave the same way the hardware did. Remember, much of the code in Linux is designed for real hardware; virtualization came afterwards and tries to mimic the hardware as closely as possible so that no changes are needed.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Hi @bbgeek17,

My apologies for my late answer. OK, understood: no more scsi0 for the CD-ROM ;).

Yeah, I know that bare metal is not a VM, but it generally works well.

Thank you for your confirmation about scsi0.
 
