Can I use any adapter with an ASM1166 chip? And can I use an M.2-to-SATA adapter in an M.2 slot connected to the CPU, and pass it through to a TrueNAS VM?
I read that cards with a port multiplier are not good...
> Can I use any adapter with an ASM1166 chip? And can I use an M.2-to-SATA adapter in an M.2 slot connected to the CPU, and pass it through to a TrueNAS VM?
Is this separate from the following issue?
libata.force=nolpm
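(For context: `libata.force=nolpm` is a kernel boot parameter that disables SATA link power management, a commonly cited workaround for ASM1166 firmware quirks.) A minimal sketch of adding such a parameter on a GRUB-booted Proxmox/Debian host; `append_kernel_param` is a made-up helper name for illustration:

```shell
# Sketch: append a kernel parameter (e.g. libata.force=nolpm) to the
# GRUB_CMDLINE_LINUX_DEFAULT line of a GRUB defaults file.
# append_kernel_param is a hypothetical helper, not a Proxmox tool.
append_kernel_param() {
    file="$1"; param="$2"
    # insert the parameter just before the closing quote of the line
    sed -i "s/^\\(GRUB_CMDLINE_LINUX_DEFAULT=\"[^\"]*\\)\"/\\1 $param\"/" "$file"
}

# Usage on a real host (then reboot):
#   append_kernel_param /etc/default/grub libata.force=nolpm
#   update-grub
```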
> Are other chips better than the ASM1166?

The ASM1166 has some quirks in its firmware, and configuring the ROMBAR option on PVE 9 is quite inconvenient.
Additionally, I believe the stability of NGFF M.2 SATA adapters is low.
> In other forums I see that on newer motherboards there is an option "ACS enable" in BIOS.

Yes, but unfortunately, many devices of the big chipset group are accessible from, or have access to, the Proxmox host (like the network, drive, and USB controllers, and more). Your VM can, in principle, read all of the host memory (and therefore that of all the other VMs) and steal passwords and other data without you knowing it.
> In other forums I see that on newer motherboards there is an option "ACS enable" in BIOS.
> Is enabling ACS in the BIOS as insecure as patching the kernel with the ACS patch?

ACS needs to be enabled to give you IOMMU groups. You don't need to patch the Proxmox kernel. Using the pcie_acs_override is what makes it unsafe.
> This is what you do to get IOMMU groups.

ACS needs to be enabled to give you IOMMU groups. You don't need to patch the Proxmox kernel.
> And in most cases you get a big IOMMU chipset group with B550 boards.

You always get this, because devices connected to/via the B550 are not properly isolated.
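To see how your own board actually groups devices, the groups can be read straight from sysfs. A generic sketch (works on any Linux host; prints a note if the IOMMU is off):

```shell
# List each IOMMU group and the PCI devices it contains.
# Devices that share a group can only be passed through together.
list_iommu_groups() {
    found=0
    for devdir in /sys/kernel/iommu_groups/*/devices; do
        [ -d "$devdir" ] || continue
        found=1
        printf 'IOMMU group %s:' "$(basename "$(dirname "$devdir")")"
        for dev in "$devdir"/*; do
            printf ' %s' "$(basename "$dev")"
        done
        printf '\n'
    done
    [ "$found" -eq 1 ] || echo "No IOMMU groups found (IOMMU disabled or not supported)."
}
list_iommu_groups
```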
> Then you can break this isolation and get virtual, separate IOMMU groups for the chipset group by patching the kernel with:
> "quiet iommu=pt pcie_acs_override=downstream,multifunction"
> Is this correct?

You are not patching the kernel. You are enabling the "break the groups" option that is already in the Proxmox kernel.
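For completeness: where those flags actually go depends on the bootloader. A sketch following the usual Proxmox conventions (verify against your own setup before rebooting):

```shell
# GRUB-booted hosts: edit /etc/default/grub so the line reads e.g.
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt pcie_acs_override=downstream,multifunction"
# then regenerate the config:
#   update-grub
#
# systemd-boot hosts (e.g. ZFS root on UEFI): append the same flags to the
# single line in /etc/kernel/cmdline, then:
#   proxmox-boot-tool refresh
#
# Reboot afterwards and re-check your IOMMU groups.
```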
> You are not patching the kernel. You are enabling the "break the groups" option that is already in the Proxmox kernel.

Thank you, I did not know that.
> This is unsafe because it makes it look like devices are isolated, but they are not really.

Thank you, that is clear to me. Even though I find it very difficult to accurately assess the risk for my specific use case.
> Even though I find it very difficult to accurately assess the risk for my specific use case.

In principle, software inside the VM can use the PCI(e) devices (passed through to the VM) to potentially read all the memory of the Proxmox host (via the devices still connected to the host).
In my opinion it is very good that in this forum the members tell us that it is risky.
In other forums the ACS patch is presented like a tutorial!
> In principle, software inside the VM can use the PCI(e) devices (passed through to the VM) to potentially read all the memory of the Proxmox host (via the devices still connected to the host).
> Whether this actually works to read the host memory from inside the VM via DMA might depend on a lot of things.

One last question.
> This only applies if the ACS patch is active, right?

Yes.
> If you pass through a GPU to a VM using the standard IOMMU groups (without patching), then there's no risk. Right?

I cannot guarantee that there is no risk, as passing real hardware to a VM can interfere with the host. But without the pcie_acs_override, the IOMMU should protect you from devices secretly communicating (and reading memory).
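One practical way to sanity-check that point: before passing a device through, look at which other devices share its IOMMU group. A sketch (the PCI address `0000:01:00.0` is only an example placeholder):

```shell
# Show the IOMMU group of one PCI device and everything sharing it;
# the whole group effectively follows the device into the VM.
show_group() {
    link="/sys/bus/pci/devices/$1/iommu_group"
    if [ -e "$link" ]; then
        g="$(readlink -f "$link")"
        echo "Device $1 is in IOMMU group $(basename "$g"), together with:"
        ls "$g/devices"
    else
        echo "Device $1 not found, or no IOMMU group (IOMMU disabled?)"
    fi
}
show_group 0000:01:00.0   # replace with your device's address (see lspci)
```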
> I cannot guarantee that there is no risk, as passing real hardware to a VM can interfere with the host.

Sure, every system could have security issues.