Proxmox 6 configures each disk on its own bus when using the Virtio SCSI single controller. However, while the scsi0 disk is on SCSI ID 0 LUN 0 as one would expect, the scsi1 disk ends up on SCSI ID 0 LUN 1, the scsi2 disk on SCSI ID 0 LUN 2, and so forth: the LUN number matches the disk number, even though each disk sits alone on its own bus.
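To illustrate (reconstructed by hand rather than copied from a real VM, so treat the exact arguments as approximate): with three scsiN disks and scsihw set to virtio-scsi-single, the generated -device arguments look roughly like this:

    -device scsi-hd,bus=virtioscsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0
    -device scsi-hd,bus=virtioscsi1.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1
    -device scsi-hd,bus=virtioscsi2.0,channel=0,scsi-id=0,lun=2,drive=drive-scsi2,id=scsi2

Each disk gets its own virtioscsiN bus, yet the lun= value keeps following the global disk index, so the buses of scsi1 and scsi2 have nothing on LUN 0.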
When Qemu is configured with a disk on a LUN higher than 0 and nothing on LUN 0 of the same target, it reports LUN 0 with the flags NOT PRESENT and NO DEVICE, and as a result the disk is not attached on NetBSD and OpenBSD. On NetBSD this is because the SCSI subsystem stops scanning for further LUNs on a target as soon as a LUN reports back with either of those flags; the reason on OpenBSD is likely the same. (FreeBSD appears to always scan up to 8 LUNs.)
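A toy Perl model of that scan behavior (purely illustrative -- the qualifier table and the loop are my own sketch, not NetBSD's actual scsipi code):

    use strict;
    use warnings;

    # Toy model: the target's sole disk sits on LUN 1, while LUN 0
    # answers with SCSI peripheral qualifier 3 ("no device at this LUN").
    my %qualifier_at_lun = (0 => 3, 1 => 0);   # hypothetical probe results

    for my $lun (0 .. 7) {
        my $q = $qualifier_at_lun{$lun} // 3;
        # The scan stops at the first LUN reported absent, so the
        # disk on LUN 1 is never even probed.
        last if $q == 3;
        print "attaching disk at LUN $lun\n";
    }
    print "scan finished -- no disk attached\n";

Running it prints only "scan finished -- no disk attached", which is exactly the failure mode seen in the guest.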
SCSI hardware is required to implement LUN 0 on every target, if only to report the number of LUNs the target provides, and gaps in the numbering of available (and present) LUNs are unusual. There may well be other SCSI subsystems besides these two that do not readily deal with targets as Proxmox configures them.
It would be better if each SCSI target were always configured starting at LUN 0; the targets in Qemu would then look like what one expects from real SCSI hardware.
The relevant part of the Qemu configuration comes from the print_drivedevice_full subroutine in /usr/share/perl5/PVE/QemuServer.pm. It already calculates $unit as a value that resets to zero for each bus, and since the code only ever creates SCSI ID 0 on these controllers, we could use $unit instead of $drive->{index} as the LUN number. (The existing code uses $unit as the SCSI ID on lsi controllers, where targets don't get a LUN number assigned at all.)
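From memory, the logic in that branch amounts to something like the following self-contained sketch (the $maxdev values and the device_string helper are my reconstruction, not verbatim PVE code):

    use strict;
    use warnings;

    # Condensed paraphrase of the scsi branch of print_drivedevice_full.
    sub device_string {
        my ($scsihw, $index) = @_;
        my $maxdev = $scsihw eq 'virtio-scsi-single' ? 1
                   : $scsihw =~ m/^lsi/              ? 7
                   :                                   256;
        my $controller = int($index / $maxdev);
        my $unit       = $index % $maxdev;   # resets to zero for each bus
        if ($scsihw =~ m/^lsi/) {
            # lsi: $unit becomes the SCSI ID, and no lun= is assigned
            return "scsi-hd,bus=scsihw$controller.0,scsi-id=$unit"
                 . ",drive=drive-scsi$index,id=scsi$index";
        }
        # virtio-scsi: SCSI ID fixed at 0, LUN taken from the disk index
        return "scsi-hd,bus=virtioscsi$controller.0,channel=0,scsi-id=0"
             . ",lun=$index,drive=drive-scsi$index,id=scsi$index";
    }

    print device_string('virtio-scsi-single', $_), "\n" for 0 .. 2;

Running it prints lun=0, lun=1, lun=2 for scsi0 through scsi2, matching the behavior described above.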
So I replaced lun=$drive->{index} with lun=$unit on line 1426 of /usr/share/perl5/PVE/QemuServer.pm, and now all targets are created on SCSI ID 0 LUN 0, as preferred, when using the Virtio SCSI single controller. (When using the plain Virtio SCSI controller, the LUN number still increments with the disk index on the first controller, exactly as before; but should one have enough disks for a second controller to be created, that controller would then start over from LUN 0 as well.)
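With that one-line change, the example arguments from earlier come out like this (again reconstructed, not copied):

    -device scsi-hd,bus=virtioscsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0
    -device scsi-hd,bus=virtioscsi1.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi1,id=scsi1
    -device scsi-hd,bus=virtioscsi2.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi2,id=scsi2

i.e. every bus now has its disk on LUN 0, where guest SCSI subsystems expect to find one.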
I'm hoping that this change could be included in a future release of Proxmox.