I am trying to pass through the two onboard SATA controllers to VMs. Proxmox itself runs on an NVMe SSD and does not use those controllers.
Both controllers have their own IOMMU group
Code:
IOMMU Group 28 83:00.0 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] [1022:7901] (rev 51)
IOMMU Group 29 84:00.0 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] [1022:7901] (rev 51)
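A listing like the one above can be produced by walking sysfs with a small loop (a common diagnostic snippet; it assumes the standard layout under /sys/kernel/iommu_groups):

```shell
#!/bin/sh
# List every PCI device together with its IOMMU group
# (assumes the standard sysfs layout under /sys/kernel/iommu_groups)
for path in /sys/kernel/iommu_groups/*/devices/*; do
    group=${path%/devices/*}   # strip "/devices/<addr>"
    group=${group##*/}         # keep only the group number
    dev=${path##*/}            # PCI address, e.g. 0000:83:00.0
    printf 'IOMMU Group %s %s\n' "$group" "$(lspci -nns "$dev")"
done
```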
What I did so far
Code:
root@server:~# more /etc/modules
amd_iommu_v
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
Code:
root@server:~# more /etc/modprobe.d/vfio.conf
options vfio-pci ids=1022:7901
softdep ahci pre: vfio-pci
Code:
root@server:~# more /etc/modprobe.d/blacklist.conf
blacklist ahci
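After editing the modprobe files, the initramfs has to be rebuilt so the blacklist and the softdep take effect at boot (standard Debian/Proxmox steps):

```shell
# Rebuild the initramfs for all installed kernels, then reboot,
# so vfio-pci claims the controllers before ahci can load
update-initramfs -u -k all
reboot
```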
Which results in
Code:
root@server:~# lspci -k -s 83:00
83:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
Subsystem: Gigabyte Technology Co., Ltd FCH SATA Controller [AHCI mode]
Kernel driver in use: vfio-pci
Kernel modules: ahci
root@server:~# lspci -k -s 84:00
84:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
Subsystem: Gigabyte Technology Co., Ltd FCH SATA Controller [AHCI mode]
Kernel driver in use: vfio-pci
Kernel modules: ahci
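The binding can also be double-checked directly in sysfs: the `driver` symlink of each PCI device points at the module that owns it (paths per the standard sysfs layout):

```shell
#!/bin/sh
# Print the bound driver for each SATA controller
for dev in 0000:83:00.0 0000:84:00.0; do
    drv=$(basename "$(readlink "/sys/bus/pci/devices/$dev/driver")")
    printf '%s -> %s\n' "$dev" "$drv"
done
```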
I have one SATA disk currently attached, and Proxmox no longer lists it under Disks.
The VM:
Code:
root@server:~# more /etc/pve/qemu-server/100.conf
bios: ovmf
boot: order=scsi0
cores: 8
cpu: EPYC-Rome,flags=+aes
efidisk0: local-lvm:vm-100-disk-0,efitype=4m,size=4M
hostpci0: 0000:83:00.0,rombar=0
hostpci1: 0000:84:00.0,rombar=0
machine: q35
memory: 12288
meta: creation-qemu=6.2.0,ctime=1662755808
name: truenas
net0: virtio=5A:A2:90:3C:C7:0B,bridge=vmbr0
net1: virtio=7A:B4:AD:7C:C8:F7,bridge=vmbr1,firewall=1
numa: 0
ostype: l26
scsi0: local-lvm:vm-100-disk-1,size=32G
scsi1: local-lvm:vm-100-disk-2,size=4G
scsi2: local-lvm:vm-100-disk-3,size=12G
scsi21: local-lvm:vm-100-disk-4,size=6G
scsi22: local-lvm:vm-100-disk-5,size=6G
scsi23: local-lvm:vm-100-disk-6,size=6G
scsi24: local-lvm:vm-100-disk-9,size=6G
scsi25: local-lvm:vm-100-disk-7,size=4G
scsihw: virtio-scsi-pci
smbios1: uuid=4585d1cc-0603-496b-9e7e-803418c40743
sockets: 1
Starting the VM hangs, and I can see these logs:
Code:
[ 651.761767] vfio-pci 0000:83:00.0: not ready 1023ms after FLR; waiting
[ 653.809887] vfio-pci 0000:83:00.0: not ready 2047ms after FLR; waiting
[ 657.105863] vfio-pci 0000:83:00.0: not ready 4095ms after FLR; waiting
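These messages mean the AHCI function never comes back after the function-level reset (FLR) that vfio-pci issues on startup. Whether the device even advertises FLR shows up in its PCIe device capabilities (the `FLReset` flag in `lspci -vv` output), and on kernels that expose it, the `reset_method` sysfs file shows which reset the kernel will use:

```shell
# Does the controller claim FLR support? (DevCap line, FLReset+/-)
lspci -vv -s 83:00.0 | grep -i 'FLReset'
# On kernels that expose it, show the reset method the kernel will use
cat /sys/bus/pci/devices/0000:83:00.0/reset_method 2>/dev/null
```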
Soon after, Proxmox itself becomes unresponsive and I have to hard-reset the server...
Code:
Message from syslogd@server at Sep 22 14:52:18 ...
kernel:[ 752.435321] watchdog: BUG: soft lockup - CPU#3 stuck for 26s! [task UPID:serve:3913]