Add an additional SCSI controller to VM hardware like in VMware

markosoftfl

New Member
Nov 12, 2025
I'm new to Proxmox and I imported my Linux VM that has 17 disks and 3 SCSI controllers in VMware. It fails to boot no matter what I do, saying drives are missing, and I only see 1 SCSI controller on the VM in Proxmox; the others weren't imported. I don't see any way in Add Hardware to add a SCSI controller. How do I do this? Is it not possible to have multiple controllers in a VM?
 
Hi @markosoftfl , welcome to the forum.

I believe you need to switch to using the VirtIO SCSI single controller type. In that case, a separate controller is created per disk.
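For example, you can change the controller type from the CLI as well (a minimal sketch; the VM ID 100 is a placeholder):

qm set 100 --scsihw virtio-scsi-single   # one virtio-scsi controller per attached disk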


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
So that's the issue. I need all the disks on the same controller except 2 of them. This is a VCSA vCenter, so it deployed like this: it created 3 controllers with 17 disks. Disk 10 is on controller 1, disk 16 is on controller 2, and all the other disks are on controller 0. So if every disk is added to a separate controller, it still will never boot. Is there really no way to add 3 controllers to a VM in Proxmox and set the controller per drive like in VMware? I can always keep this VCSA on VMware until I migrate off, but then I'll still have the same issue with my SQL Always On clusters that have the OS drive on controller 1, logs and DB on controller 2, and then tempdb on controller 3. Is there any way to specify multiple controllers and then specify which controller goes to each disk?
 
This is a VCSA vCenter, so it deployed like this: it created 3 controllers with 17 disks. Disk 10 is on controller 1, disk 16 is on controller 2, and all the other disks are on controller 0. So if every disk is added to a separate controller, it still will never boot.
That is an ESXi-dependent deployment, and it may not be something you can reproduce in PVE/QEMU. From a boot perspective, you only need a single disk. Additional disks may be required by the application itself, but the first step is to make the system boot successfully and then address the remaining disks afterward.
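If the imported-but-unattached volumes show up as unusedN entries in the VM config, they can be reattached once the guest boots, along these lines (VM ID, storage, and volume names are placeholders):

qm config 100 | grep unused                      # list volumes that were imported but not attached
qm set 100 --scsi10 local-lvm:vm-100-disk-10     # attach a volume to a free SCSI slot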

Given the purpose of the VCSA, if you are migrating away from VMware, is there even value in moving this VM over? Migrate and validate business-critical VMs/Apps. Once that is complete, you can shut down the VCSA and forget about it.

An MS SQL deployment is fundamentally different from VCSA in both design and requirements. SQL Server works correctly with a virtio-scsi single controller when installed natively. I don’t have direct experience with converting an existing SQL VM, but there is no technical reason it should not work.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I understand that the main issue is VCSA. Every drive has a purpose, and if any are missing, booting is not possible. I can get it to the recovery console. This VCSA is not important; it was just easier to test with than a production SQL Server with multiple SQL Always On nodes that all have multiple controllers and the same issue. The SQL cluster I can get to boot with no issue, and the log drives, DB drives, and the rest would attach and be visible again, but the UUID of each drive would change, rendering the SQL cluster useless. That is why I tried the VCSA first instead of a cluster node. To work around it I would have to build new cluster nodes in Proxmox, link them to the AO cluster, and fail over to them. While that would work, I'm still just not happy that the controller option isn't available in Proxmox and how it affects the way I would have to migrate 8000 VMs across 82 ESX nodes if I were to choose to move forward with Proxmox. I'll do some more testing, as I need to test HA, clustering, and other items to make this decision. The one thing that works well so far is my Veeam server: if it's a simple VM with one controller, the restore works flawlessly.

Thanks for your help with this question. With the number of VMs we have and the complexity of migrating to Proxmox, I'm just not sure it would make sense for us.
 
You cannot add multiple SCSI controllers through the PVE UI. To do so, you will need to experiment with direct QEMU options via --args.
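A rough sketch of what that could look like (device IDs, the zvol path, and the VM ID are placeholders):

qm set 100 --args "-device virtio-scsi-pci,id=extra1 \
  -drive file=/dev/zvol/rpool/data/vm-100-disk-10,if=none,id=drive-extra10,format=raw \
  -device scsi-hd,bus=extra1.0,drive=drive-extra10"

Keep in mind that anything attached via --args sits outside of PVE's disk management, so backups, migration, and the web UI won't know about those disks; treat it as a proof of concept rather than a long-term setup.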

I am not sure which UUID your SQL installation is bound to, as there are several possibilities. Most of these identifiers will change once you modify almost everything about the disk configuration: controller type, driver, disk model, disk serial number, etc.
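If the application turns out to be bound to the serial the guest sees, one thing that might help (a hedged sketch, not something I have verified for SQL) is pinning a fixed serial on the PVE disk entry; storage and volume names are placeholders:

qm set 100 --scsi1 local-lvm:vm-100-disk-1,serial=OLDSERIAL123   # expose a fixed serial to the guest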

Given your scale, you may want to engage a PVE partner or, at a minimum, purchase a support subscription so you can obtain guidance directly from the developers. I'd imagine there is a significant number of support subscriptions on the line with 8000 VMs.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox