[SOLVED] Add an additional SCSI controller to a VM's hardware like in VMware

markosoftfl

New Member
Nov 12, 2025
I'm new to Proxmox and I imported my Linux VM that has 17 disks and 3 SCSI controllers in VMware. It fails to boot no matter what I do, saying drives are missing, and I only see 1 SCSI controller on the VM in Proxmox; the others weren't imported. I don't see any way in Add Hardware to add a SCSI controller. How do I do this? Is it not possible to have multiple controllers in a VM?
 
Hi @markosoftfl , welcome to the forum.

I believe you need to switch to the VirtIO SCSI single controller type. In this case, one controller per disk will be created.
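In the VM's config file, that controller type corresponds to the `scsihw` setting; a sketch of what it looks like (storage name and VM ID are hypothetical):

```
# /etc/pve/qemu-server/100.conf -- one VirtIO SCSI controller per disk
scsihw: virtio-scsi-single
scsi0: local-lvm:vm-100-disk-0,size=40G
```

You can also change this in the GUI under the VM's Hardware, SCSI Controller entry.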


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
So that's the issue. I need all the disks on the same controller except 2 of them. This is a VCSA (vCenter), so it deployed like this: it created 3 controllers with 17 disks. Disk 10 is on controller 1, disk 16 is on controller 2, and all the other disks are on controller 0. So if every disk is put on a separate controller, it still will never boot. Is there really no way to add 3 controllers to a VM in Proxmox and set the controller per drive like in VMware? I can always keep this VCSA on VMware until I migrate off, but then I'll still have the same issue with my SQL Always On clusters, which have the OS drive on controller 1, logs and DB on controller 2, and tempdb on controller 3. Is there any way to specify multiple controllers and then specify which controller goes with each disk?
 
This is a VCSA (vCenter), so it deployed like this: it created 3 controllers with 17 disks. Disk 10 is on controller 1, disk 16 is on controller 2, and all the other disks are on controller 0. So if every disk is put on a separate controller, it still will never boot.
That is an ESXi-dependent deployment, and it may not be something you can reproduce in PVE/QEMU. From a boot perspective, you only need a single disk. Additional disks may be required by the application itself, but the first step is to make the system boot successfully and then address the remaining disks afterward.

Given the purpose of the VCSA, if you are migrating away from VMware, is there even value in moving this VM over? Migrate and validate business-critical VMs/Apps. Once that is complete, you can shut down the VCSA and forget about it.

An MS SQL deployment is fundamentally different from VCSA in both design and requirements. SQL Server works correctly with a virtio-scsi single controller when installed natively. I don’t have direct experience with converting an existing SQL VM, but there is no technical reason it should not work.


 
I understand that the main issue is VCSA. Every drive has a purpose, and if any are missing, booting is not possible; I can only get it to the recovery console. This VCSA is not important; it was just easier to test with than a production SQL Server with multiple SQL Always On nodes that all have multiple controllers and the same issue. The SQL cluster I can get to boot with no issue, and the log drives, DB drives, and the rest would attach and be visible again, but the UUID of each drive would change, rendering the SQL cluster useless. That is why I tried the VCSA first instead of a cluster node. To work around it I would have to build new cluster nodes in Proxmox, link them to the AO cluster, and fail over to them.

While this would work, I'm still just not happy that the controller option isn't available in Proxmox and how it affects migrating 8000 VMs across 82 ESX nodes if I were to choose to move forward with Proxmox. I'll do some more testing, as I need to test HA, clustering, and other items to make this decision. The one thing that works well so far is my Veeam server: if it's a simple VM with one controller, the restore works flawlessly.

Thanks for your help with this question. I'm just not sure, with the number of VMs we have and the complexity of migrating to Proxmox, whether it would make sense for us.
 
You cannot add multiple SCSI controllers through the PVE UI. To do so, you will need to experiment with direct QEMU options via --args.

I am not sure which UUID your SQL installation is bound to, as there are several possibilities. Most of these identifiers will change if you modify almost any aspect of the disk configuration: controller type, driver, disk model, disk serial number, etc.
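If the cluster turns out to be bound to disk serial numbers, Proxmox lets you pin a fixed serial on each disk line, which may keep that particular guest-visible identifier stable across the migration. A sketch (VM ID, storage, and serial value are hypothetical):

```
# VMID.conf -- pin a fixed serial so the guest always sees the same
# serial number for this disk, regardless of the underlying storage
scsi1: local-lvm:vm-100-disk-1,serial=SQLLOG01
```

Identifiers derived from the filesystem (e.g. the UUIDs shown by blkid inside the guest) live on the disk itself and should survive the move unchanged.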

Given your scale, you may want to engage a PVE partner or, at a minimum, purchase a support subscription so you can obtain guidance directly from the developers. I'd imagine there are a significant number of support subscriptions on the line with 8000 VMs.


 
Hello, you are correct. I finally figured this out with the help of DeepSeek; I was able to get the commands into the config file. Here's the example in case anyone else is looking.
# Add this to your VMID.conf file. Note: only one "args:" line is allowed
# per config, so the extra controllers and all of their disks must go on
# that single line (no backslash continuations in the conf file).

# Keep your OS disk managed by Proxmox normally:
scsi0: local-lvm:vm-100-disk-0,size=40G

# Define the second and third controllers (scsihw1, scsihw2) and manually
# attach the two extra disks to scsihw1 (the second controller):
args: -device virtio-scsi-pci,id=scsihw1,bus=pci.0 -device virtio-scsi-pci,id=scsihw2,bus=pci.0 -drive file=/dev/pve/vm-100-disk-1,if=none,id=drive-scsi15,format=raw -device scsi-hd,bus=scsihw1.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi15,id=scsi15 -drive file=/dev/pve/vm-100-disk-2,if=none,id=drive-scsi16,format=raw -device scsi-hd,bus=scsihw1.0,channel=0,scsi-id=1,lun=0,drive=drive-scsi16,id=scsi16
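With more than a couple of extra disks, hand-writing those per-disk options gets error-prone. A small loop can generate the args string instead; the disk paths, IDs, and numbering below are hypothetical and simply mirror the two-disk example above:

```shell
#!/bin/sh
# Build a single "args:" line that defines one extra virtio-scsi
# controller (scsihw1) and attaches a list of raw LVM disks to it.
ARGS="-device virtio-scsi-pci,id=scsihw1,bus=pci.0"
scsi_id=0
for n in 15 16; do
  # Hypothetical mapping: guest slot scsiN <- /dev/pve/vm-100-disk-(N-14)
  ARGS="$ARGS -drive file=/dev/pve/vm-100-disk-$((n - 14)),if=none,id=drive-scsi$n,format=raw"
  ARGS="$ARGS -device scsi-hd,bus=scsihw1.0,channel=0,scsi-id=$scsi_id,lun=0,drive=drive-scsi$n,id=scsi$n"
  scsi_id=$((scsi_id + 1))
done
echo "args: $ARGS"
```

Paste the printed line into VMID.conf, adjusting the disk paths and the `for n in ...` list to match your layout.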
 
FYI, I was able to get up to (22) vdisks on a single Proxmox VM back in 2024 by using virtio, SATA, and IDE (still have 3 empty slots there; only the CD-ROM slot is used). This was for testing ZFS dRAID in a VM.

But I had to change the VM BIOS to UEFI and the machine type to q35 to get it to boot and see everything properly, and per-disk performance will be slower with virtual SATA (0-5) and virtual IDE (0-4).

Changing the VM Options in the GUI is very slow with this number of disks, and IIRC I had to manually edit the config to get the boot order:
"first disk, any CD-ROM, any net"
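The firmware, machine type, and boot order settings mentioned above all live in the VM config file; a sketch of the relevant lines (disk and device names are hypothetical):

```
# VMID.conf -- UEFI firmware, q35 machine type, and an explicit boot
# order of first disk, then CD-ROM, then network
bios: ovmf
machine: q35
efidisk0: local-lvm:vm-100-disk-57,size=4M
boot: order=ide0;ide2;net0
```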

The max number of disks I can get to for a single VM using a script appears to be (56), booting from the IDE0 HD. Attempting to add another disk via the GUI fails after this. Disk 57 is the EFI disk, which is invisible to the VM OS.

https://github.com/kneutron/ansitest/blob/master/proxmox/proxmox-zfs-test-draid-vm-create-disks.sh

You start running into quirks with this many disks: /dev/disk/by-id doesn't have everything in it by far, and you have to go with by-path instead.

You also get device assignments like sdaa -> sdan and vda -> vdp.

I may try your method of adding more controllers and push the theoretical max...
 