VMs in Proxmox VE with PCI passthrough of an HBA/Fibre Channel Host Adapter

hymwrk
Member
Aug 12, 2021

Hi All,

I plan to connect some VMs on our Proxmox VE 6.4-8 host to our SAN storage by passing through a PCI HBA/Fibre Channel Host Adapter.

Our Proxmox:

root@proxmox:~# lspci -nn | grep -i fibre
03:00.0 Fibre Channel [0c04]: Emulex Corporation Lancer Gen6: LPe32000 Fibre Channel Host Adapter [10df:e300] (rev 01)
03:00.1 Fibre Channel [0c04]: Emulex Corporation Lancer Gen6: LPe32000 Fibre Channel Host Adapter [10df:e300] (rev 01)
root@proxmox:~#

In the GUI I attach the PCI device to the VM as shown in the screenshot below:
Screenshot 2021-08-12 172325.png
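
For reference, the same attachment can also be done from the host shell instead of the GUI; a minimal sketch, assuming VM ID 117 (the VM shown in the logs below) and the PCI addresses from the lspci output above:

# attach the two functions of the HBA to VM 117 (equivalent to the GUI step)
qm set 117 --hostpci0 0000:03:00.0,pcie=1
qm set 117 --hostpci1 0000:03:00.1,pcie=1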

I have also already followed this reference: https://pve.proxmox.com/wiki/Pci_passthrough#Enable_the_IOMMU

But when I hit 'Start', the VM fails to start.

My expectation is that these VMs can run with our PCI device (Emulex Corporation Lancer Gen6: LPe32000 Fibre Channel Host Adapter [10df:e300]) passed through.

Is it possible?

Thank you
 
Hi,

Here is the log from syslog:

Aug 12 14:54:52 proxmox pvedaemon[3016]: start failed: command '/usr/bin/kvm -id 117 -name director-rhosp-jk2 -no-shutdown -chardev 'socket,id=qmp,path=/var/run/qemu-server/117.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/117.pid -daemonize -smbios 'type=1,uuid=e1903d49-14ec-4463-9c0b-67b3a4d1ad22' -smp '16,sockets=4,cores=4,maxcpus=16' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vga none -nographic -cpu 'host,kvm=off,+kvm_pv_eoi,+kvm_pv_unhalt' -m 32768 -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'vmgenid,guid=acffd0ef-dc96-4e7f-97d8-5799d4c6fa26' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'vfio-pci,host=0000:03:00.0,id=hostpci0.0,bus=pci.0,addr=0x10.0,x-vga=on,multifunction=on' -device 'vfio-pci,host=0000:03:00.1,id=hostpci0.1,bus=pci.0,addr=0x10.1' -device 'vfio-pci,host=0000:03:00.0,id=hostpci1.0,bus=pci.0,addr=0x11.0,x-vga=on,multifunction=on' -device 'vfio-pci,host=0000:03:00.1,id=hostpci1.1,bus=pci.0,addr=0x11.1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:58713612e2c3' -drive 'file=/var/lib/vz/template/iso/rhel-server-7.7-x86_64-dvd.iso,if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=101' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=/dev/pve/vm-117-disk-0,if=none,id=drive-scsi0,format=raw,cache=none,aio=native,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap117i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=9E:4D:9C:AC:05:A4,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=102' -netdev 'type=tap,id=net1,ifname=tap117i1,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=F2:BC:B8:A0:B1:68,netdev=net1,bus=pci.0,addr=0x13,id=net1' -netdev 'type=tap,id=net2,ifname=tap117i2,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=66:B4:83:31:30:93,netdev=net2,bus=pci.0,addr=0x14,id=net2' -machine 'type=pc+pve0'' failed: got timeout
Aug 12 14:54:52 proxmox pvedaemon[1575]: <root@pam> end task UPID:sysadmin-1-jk2:00000BC8:0000B4E6:6114D3AB:qmstart:117:root@pam: start failed: command '/usr/bin/kvm -id 117 -name director-rhosp-jk2 -no-shutdown -chardev 'socket,id=qmp,path=/var/run/qemu-server/117.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/117.pid -daemonize -smbios 'type=1,uuid=e1903d49-14ec-4463-9c0b-67b3a4d1ad22' -smp '16,sockets=4,cores=4,maxcpus=16' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vga none -nographic -cpu 'host,kvm=off,+kvm_pv_eoi,+kvm_pv_unhalt' -m 32768 -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'vmgenid,guid=acffd0ef-dc96-4e7f-97d8-5799d4c6fa26' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'vfio-pci,host=0000:03:00.0,id=hostpci0.0,bus=pci.0,addr=0x10.0,x-vga=on,multifunction=on' -device 'vfio-pci,host=0000:03:00.1,id=hostpci0.1,bus=pci.0,addr=0x10.1' -device 'vfio-pci,host=0000:03:00.0,id=hostpci1.0,bus=pci.0,addr=0x11.0,x-vga=on,multifunction=on' -device 'vfio-pci,host=0000:03:00.1,id=hostpci1.1,bus=pci.0,addr=0x11.1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:58713612e2c3' -drive 'file=/var/lib/vz/template/iso/rhel-server-7.7-x86_64-dvd.iso,if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=101' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=/dev/pve/vm-117-disk-0,if=none,id=drive-scsi0,format=raw,cache=none,aio=native,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap117i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=9E:4D:9C:AC:05:A4,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=102' -netdev 'type=tap,id=net1,ifname=tap117i1,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=F2:BC:B8:A0:B1:68,netdev=net1,bus=pci.0,addr=0x13,id=net1' -netdev 'type=tap,id=net2,ifname=tap117i2,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=66:B4:83:31:30:93,netdev=net2,bus=pci.0,addr=0x14,id=net2' -machine 'type=pc+pve0'' failed: got timeout
 
Hi Mira,

Here is what you requested:

root@sysadmin-1-jk2:~# pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.114-1-pve)
pve-manager: 6.4-8 (running version: 6.4-8/185e14db)
pve-kernel-5.4: 6.4-2
pve-kernel-helper: 6.4-2
pve-kernel-5.4.114-1-pve: 5.4.114-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.4-1
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-3
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.8-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.5-6
pve-cluster: 6.4-1
pve-container: 3.3-5
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.2-4
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1
root@sysadmin-1-jk2:~#

root@sysadmin-1-jk2:~# qm config 117
boot: order=scsi0;ide2;net0
cores: 4
cpu: host
hostpci0: 0000:03:00.0,pcie=1
hostpci1: 0000:03:00.1,pcie=1
ide2: local:iso/rhel-server-7.7-x86_64-dvd.iso,media=cdrom
machine: q35
memory: 32768
name: director-rhosp-jk2
net0: virtio=9E:4D:9C:AC:05:A4,bridge=vmbr473
net1: virtio=F2:BC:B8:A0:B1:68,bridge=vmbr755
net2: virtio=66:B4:83:31:30:93,bridge=vmbr752
numa: 0
ostype: l26
parent: Refresh
scsi0: local-lvm:vm-117-disk-0,size=200G
scsihw: virtio-scsi-pci
smbios1: uuid=e1903d49-14ec-4463-9c0b-67b3a4d1ad22
sockets: 4
vmgenid: acffd0ef-dc96-4e7f-97d8-5799d4c6fa26
root@sysadmin-1-jk2:~#
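
For reference, since both functions 03:00.0 and 03:00.1 end up in the same IOMMU group (group 46, see the listing further down), the whole adapter could also be passed through with a single hostpci entry; a sketch, not what is configured above:

# omitting the function number passes all functions of the device in one entry
qm set 117 --hostpci0 0000:03:00,pcie=1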

root@sysadmin-1-jk2:~# free -h
              total        used        free      shared  buff/cache   available
Mem:          188Gi       2.2Gi       186Gi        60Mi       475Mi       185Gi
Swap:         8.0Gi          0B       8.0Gi
root@sysadmin-1-jk2:~#


Following this reference https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_pci_passthrough, I have already done the following:

root@sysadmin-1-jk2:~# cat /etc/default/grub
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
# info -f grub -n 'Simple configuration'

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="Proxmox Virtual Environment"
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
GRUB_CMDLINE_LINUX=""

. . .
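
As the header comment in that file says, GRUB's configuration has to be regenerated after any change so that intel_iommu=on actually ends up on the kernel command line; a minimal sketch of that step:

# regenerate /boot/grub/grub.cfg with the new kernel command line
update-grub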


root@sysadmin-1-jk2:~# cat /etc/modules
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
root@sysadmin-1-jk2:~#

root@sysadmin-1-jk2:~# update-initramfs -u -k all

And then rebooted.
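
After the reboot, whether the vfio modules from /etc/modules were actually loaded can be checked with something like:

# verify that the vfio modules are loaded
lsmod | grep vfio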

root@sysadmin-1-jk2:~# dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
[ 0.010615] ACPI: DMAR 0x0000000079865FA0 0000E8 (v01 ALASKA A M I 00000001 INTL 20091013)
[ 0.943327] DMAR: IOMMU enabled
[ 1.807789] DMAR: Host address width 46
[ 1.807791] DMAR: DRHD base: 0x000000fbffc000 flags: 0x0
[ 1.807796] DMAR: dmar0: reg_base_addr fbffc000 ver 1:0 cap d2078c106f0466 ecap f020df
[ 1.807797] DMAR: DRHD base: 0x000000c7ffc000 flags: 0x1
[ 1.807800] DMAR: dmar1: reg_base_addr c7ffc000 ver 1:0 cap d2078c106f0466 ecap f020df
[ 1.807801] DMAR: RMRR base: 0x0000007bb31000 end: 0x0000007bb40fff
[ 1.807802] DMAR: ATSR flags: 0x0
[ 1.807803] DMAR: RHSA base: 0x000000c7ffc000 proximity domain: 0x0
[ 1.807804] DMAR: RHSA base: 0x000000fbffc000 proximity domain: 0x1
[ 1.807807] DMAR-IR: IOAPIC id 3 under DRHD base 0xfbffc000 IOMMU 0
[ 1.807807] DMAR-IR: IOAPIC id 1 under DRHD base 0xc7ffc000 IOMMU 1
[ 1.807808] DMAR-IR: IOAPIC id 2 under DRHD base 0xc7ffc000 IOMMU 1
[ 1.807809] DMAR-IR: HPET id 0 under DRHD base 0xc7ffc000
[ 1.807810] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[ 1.808568] DMAR-IR: Enabled IRQ remapping in x2apic mode
[ 3.083772] DMAR: dmar1: Using Queued invalidation
[ 3.108312] DMAR: Intel(R) Virtualization Technology for Directed I/O
root@sysadmin-1-jk2:~#

root@sysadmin-1-jk2:~# find /sys/kernel/iommu_groups/ -type l

. . .
/sys/kernel/iommu_groups/46/devices/0000:03:00.0
/sys/kernel/iommu_groups/46/devices/0000:03:00.1
. . .


root@sysadmin-1-jk2:~# lspci -nnk | grep -i fibre
03:00.0 Fibre Channel [0c04]: Emulex Corporation Lancer Gen6: LPe32000 Fibre Channel Host Adapter [10df:e300] (rev 01)
Subsystem: Emulex Corporation Lancer Gen6: LPe32000 Fibre Channel Host Adapter [10df:e332]
03:00.1 Fibre Channel [0c04]: Emulex Corporation Lancer Gen6: LPe32000 Fibre Channel Host Adapter [10df:e300] (rev 01)
Subsystem: Emulex Corporation Lancer Gen6: LPe32000 Fibre Channel Host Adapter [10df:e332]
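
The grep above hides the 'Kernel driver in use' line, so to see whether the host's Emulex driver (lpfc) or vfio-pci currently owns the adapter, something like this can be used (a sketch):

# show the kernel driver bound to each function of the adapter
lspci -nnks 0000:03:00.0
lspci -nnks 0000:03:00.1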




The PCI device is already attached to the VM, and I believe there is still more than enough free memory. But I don't know why the VM won't start.

Is something missing in my configuration?

Thank you
 
With PCI passthrough the memory has to be allocated on start, which might take longer than the timeout.
Does it work if you run it with the command produced by qm showcmd --pretty 117?
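
For reference, one way to try that is to dump the generated command into a script and start the VM by hand, outside the API's start timeout; a minimal sketch (the /tmp path is just an example):

# print the generated kvm command for VM 117
qm showcmd 117 --pretty
# or write it to a script and start the VM manually
qm showcmd 117 > /tmp/start-117.sh
bash /tmp/start-117.sh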
 
Hi Mira,

It works with new VMs; with the existing VM it does not work.

Thank you for helping. I really appreciate it.

Thank you
 
