RAID card passthrough blacklisted - still seeing drive access in dmesg

Phuket My Mac

Hi,

I have a FreeNAS VM running on my Proxmox 6.2 server, a Dell R320.
FreeNAS itself is running well; however, I'm seeing error messages in the host console that look like the host is trying to access the drives on the RAID card.
The card is configured in HBA mode so that FreeNAS can work properly.

I have added a file called raid.conf in /etc/modprobe.d containing the following lines:

Code:
options vfio_iommu_type1 allow_unsafe_interrupts=1
options vfio-pci ids=1000:0072
blacklist mpt3sas
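
For what it's worth, these options can be sanity-checked on the host with something like the following (a quick sketch; the device ID matches the lspci output below):

Code:
# show the effective modprobe configuration for the relevant modules
modprobe -c | grep -E 'vfio|mpt3sas'
# check whether the host currently has the mpt3sas module loaded
lsmod | grep mpt3sas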

Running lspci -nn, this is what I get for the RAID controller:

01:00.0 Serial Attached SCSI controller [0107]: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] [1000:0072] (rev 03)

Running lspci -k, I get this for the same device:

01:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
Subsystem: Dell SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon]
Kernel driver in use: vfio-pci
Kernel modules: mpt3sas
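
Since the card shows vfio-pci as the driver in use, another thing worth checking is whether the host still exposes block devices from that controller (a sketch; both commands are available on a stock Proxmox install):

Code:
# disks behind a fully passed-through controller should not appear here
lsblk -o NAME,SIZE,MODEL
# per-SCSI-host driver name; mpt3sas should not be listed if the blacklist works
cat /sys/class/scsi_host/host*/proc_name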

Here are the lines I am getting on the host console:

Code:
[  484.830848] sd 0:0:1:0: [sdb] tag#2069 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
[  484.830855] sd 0:0:1:0: [sdb] tag#2069 CDB: Read(16) 88 00 00 00 00 00 00 00 00 80 00 00 01 00 00 00
[  484.830859] blk_update_request: I/O error, dev sdb, sector 128 op 0x0:(READ) flags 0x0 phys_seg 5 prio class 0
[  484.831536] sd 0:0:1:0: [sdb] tag#824 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
[  484.832354] sd 0:0:1:0: [sdb] tag#824 CDB: Read(16) 88 00 00 00 00 00 00 40 00 80 00 00 01 00 00 00
[  484.832359] blk_update_request: I/O error, dev sdb, sector 4194432 op 0x0:(READ) flags 0x0 phys_seg 2 prio class 0
[  484.832491] sd 0:0:2:0: [sdc] tag#2070 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
[  484.833800] sd 0:0:2:0: [sdc] tag#2070 CDB: Read(16) 88 00 00 00 00 00 00 00 00 00 00 00 01 00 00 00
[  484.833803] blk_update_request: I/O error, dev sdc, sector 0 op 0x0:(READ) flags 0x0 phys_seg 3 prio class 0
[  484.834931] sd 0:0:2:0: [sdc] tag#2571 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
[  484.835254] sd 0:0:2:0: [sdc] tag#2571 CDB: Read(16) 88 00 00 00 00 00 00 00 00 80 00 00 01 00 00 00
[  484.835259] blk_update_request: I/O error, dev sdc, sector 128 op 0x0:(READ) flags 0x0 phys_seg 32 prio class 0
[  484.835695] sd 0:0:2:0: [sdc] tag#396 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
[  484.836727] sd 0:0:2:0: [sdc] tag#396 CDB: Read(16) 88 00 00 00 00 00 00 40 00 80 00 00 01 00 00 00
[  484.836734] blk_update_request: I/O error, dev sdc, sector 4194432 op 0x0:(READ) flags 0x0 phys_seg 32 prio class 0
[  484.836885] sd 0:0:3:0: [sdd] tag#2572 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
[  484.838298] sd 0:0:3:0: [sdd] tag#825 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
[  484.838305] sd 0:0:3:0: [sdd] tag#825 CDB: Read(16) 88 00 00 00 00 00 00 00 00 80 00 00 01 00 00 00
[  484.838311] blk_update_request: I/O error, dev sdd, sector 128 op 0x0:(READ) flags 0x0 phys_seg 31 prio class 0
[  484.838502] sd 0:0:3:0: [sdd] tag#826 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
[  484.838508] sd 0:0:3:0: [sdd] tag#826 CDB: Read(16) 88 00 00 00 00 00 00 40 00 80 00 00 01 00 00 00
[  484.838513] blk_update_request: I/O error, dev sdd, sector 4194432 op 0x0:(READ) flags 0x0 phys_seg 32 prio class 0
[  484.838735] sd 0:0:4:0: [sde] tag#827 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
[  484.838741] sd 0:0:4:0: [sde] tag#827 CDB: Read(16) 88 00 00 00 00 00 00 00 00 00 00 00 01 00 00 00
[  484.838746] blk_update_request: I/O error, dev sde, sector 0 op 0x0:(READ) flags 0x0 phys_seg 32 prio class 0
[  484.838930] blk_update_request: I/O error, dev sde, sector 128 op 0x0:(READ) flags 0x0 phys_seg 31 prio class 0
[  484.845361] sd 0:0:3:0: [sdd] tag#2572 CDB: Read(16) 88 00 00 00 00 00 00 00 00 00 00 00 01 00 00 00
[  485.148866] INFO: task kworker/u96:0:8 blocked for more than 362 seconds.
[  485.150370]       Tainted: P           OE     5.4.41-1-pve #1
[  485.151863] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  485.153415] kworker/u96:0   D    0     8      2 0x80004000
[  485.153426] Workqueue: events_unbound async_run_entry_fn
[  485.153429] Call Trace:
[  485.153441]  __schedule+0x2e6/0x6f0
[  485.153450]  schedule+0x33/0xa0
[  485.153458]  schedule_preempt_disabled+0xe/0x10
[  485.153466]  __mutex_lock.isra.10+0x2c9/0x4c0
[  485.153477]  ? kobject_uevent_env+0x13c/0x7b0
[  485.153484]  __mutex_lock_slowpath+0x13/0x20
[  485.153487]  mutex_lock+0x2c/0x30
[  485.153492]  device_add+0x455/0x670
[  485.153498]  scsi_sysfs_add_sdev+0x1be/0x280
[  485.153502]  do_scan_async+0x94/0x140
[  485.153506]  async_run_entry_fn+0x3c/0x150
[  485.153510]  process_one_work+0x20f/0x3d0
[  485.153518]  worker_thread+0x34/0x400
[  485.153523]  kthread+0x120/0x140
[  485.153526]  ? process_one_work+0x3d0/0x3d0
[  485.153529]  ? kthread_park+0x90/0x90
[  485.153533]  ret_from_fork+0x35/0x40
[  485.153568] INFO: task systemd-udevd:610 blocked for more than 362 seconds.
[  485.155105]       Tainted: P           OE     5.4.41-1-pve #1
[  485.156644] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  485.158231] systemd-udevd   D    0   610    572 0x80004324
[  485.158243] Call Trace:
[  485.158249]  __schedule+0x2e6/0x6f0
[  485.158256]  schedule+0x33/0xa0
[  485.158260]  async_synchronize_cookie_domain+0xb3/0x140
[  485.158266]  ? wait_woken+0x80/0x80
[  485.158270]  async_synchronize_full+0x17/0x20
[  485.158275]  do_init_module+0x1b5/0x230
[  485.158277]  load_module+0x22ec/0x2570
[  485.158284]  __do_sys_finit_module+0xbd/0x120
[  485.158295]  ? __do_sys_finit_module+0xbd/0x120
[  485.158303]  __x64_sys_finit_module+0x1a/0x20
[  485.158312]  do_syscall_64+0x57/0x190
[  485.158318]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[  485.158322] RIP: 0033:0x7f0d6d0def59
[  485.158329] Code: Bad RIP value.
[  485.158331] RSP: 002b:00007fff60f5d2a8 EFLAGS: 00000246 ORIG_RAX: 0000000000000139
[  485.158334] RAX: ffffffffffffffda RBX: 00005650fa9057b0 RCX: 00007f0d6d0def59
[  485.158338] RDX: 0000000000000000 RSI: 00007f0d6cfe3cad RDI: 000000000000000f
[  485.158344] RBP: 00007f0d6cfe3cad R08: 0000000000000000 R09: 0000000000000000
[  485.158348] R10: 000000000000000f R11: 0000000000000246 R12: 0000000000000000
[  485.158351] R13: 00005650fa901ba0 R14: 0000000000020000 R15: 00005650fa9057b0
[  485.158374] INFO: task task UPID:pve2::1334 blocked for more than 241 seconds.
[  485.159959]       Tainted: P           OE     5.4.41-1-pve #1
[  485.161572] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  485.163193] task UPID:pve2: D    0  1334   1333 0x00004000
[  485.163196] Call Trace:
[  485.163202]  __schedule+0x2e6/0x6f0
[  485.163206]  ? kmem_cache_free+0x293/0x2b0
[  485.163210]  schedule+0x33/0xa0
[  485.163214]  schedule_preempt_disabled+0xe/0x10
[  485.163218]  __mutex_lock.isra.10+0x2c9/0x4c0
[  485.163224]  ? kernfs_find_ns+0x5e/0xd0
[  485.163228]  __mutex_lock_slowpath+0x13/0x20
[  485.163232]  mutex_lock+0x2c/0x30
[  485.163235]  device_del+0xd0/0x370
[  485.163240]  ? kobject_put+0x9e/0x1a0
[  485.163244]  device_unregister+0x1a/0x60
[  485.163248]  __scsi_remove_device+0x10d/0x150
[  485.163252]  scsi_remove_device+0x26/0x40
[  485.163255]  scsi_remove_target+0x17b/0x1d0
[  485.163265]  sas_rphy_remove+0x59/0x60 [scsi_transport_sas]
[  485.163270]  sas_port_delete+0x2d/0x150 [scsi_transport_sas]
[  485.163275]  ? sas_port_delete+0x150/0x150 [scsi_transport_sas]
[  485.163280]  do_sas_phy_delete+0x3c/0x40 [scsi_transport_sas]
[  485.163283]  device_for_each_child+0x59/0x90
[  485.163288]  sas_remove_children+0x1b/0x40 [scsi_transport_sas]
[  485.163293]  sas_remove_host+0x19/0x30 [scsi_transport_sas]
[  485.163308]  scsih_remove+0xc4/0x2c0 [mpt3sas]
[  485.163313]  pci_device_remove+0x3e/0xc0
[  485.163317]  device_release_driver_internal+0xec/0x1c0
[  485.163320]  device_driver_detach+0x14/0x20
[  485.163323]  unbind_store+0xf9/0x130
[  485.163327]  drv_attr_store+0x27/0x40
[  485.163332]  sysfs_kf_write+0x3b/0x40
[  485.163335]  kernfs_fop_write+0xda/0x1c0
[  485.163339]  __vfs_write+0x1b/0x40
[  485.163342]  vfs_write+0xab/0x1b0
[  485.163347]  ksys_write+0x61/0xe0
[  485.163350]  __x64_sys_write+0x1a/0x20
[  485.163354]  do_syscall_64+0x57/0x190
[  485.163357]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[  485.163359] RIP: 0033:0x7fb6f321f471
[  485.163364] Code: Bad RIP value.
[  485.163366] RSP: 002b:00007ffd3d7af438 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
[  485.163368] RAX: ffffffffffffffda RBX: 000055b048c58260 RCX: 00007fb6f321f471
[  485.163369] RDX: 000000000000000c RSI: 000055b04eff48c0 RDI: 000000000000000e
[  485.163371] RBP: 000055b04eff48c0 R08: 0000000000000000 R09: aaaaaaaaaaaaaaab
[  485.163372] R10: 000055b04efe0ce0 R11: 0000000000000246 R12: 000000000000000c
[  485.163373] R13: 000055b048c58260 R14: 000000000000000e R15: 000055b04eff4630
[  495.208681] scsi_io_completion_action: 32 callbacks suppressed
[  495.208690] sd 0:0:1:0: [sdb] tag#798 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
[  495.208696] sd 0:0:1:0: [sdb] tag#798 CDB: Read(16) 88 00 00 00 00 00 00 00 00 00 00 00 01 00 00 00
[  495.208699] print_req_error: 32 callbacks suppressed
[  495.208702] blk_update_request: I/O error, dev sdb, sector 0 op 0x0:(READ) flags 0x0 phys_seg 4 prio class 0
[  495.210757] sd 0:0:1:0: [sdb] tag#1881 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
[  495.210765] sd 0:0:1:0: [sdb] tag#1881 CDB: Read(16) 88 00 00 00 00 00 00 00 00 80 00 00 01 00 00 00

Is it normal to see these lines, or am I missing something?
 
If you correctly blacklisted the card, the host should not see any disks from that controller at all.
Did you maybe forget to run 'update-initramfs -u -k all' after editing the files in modprobe.d? Those options are read from the initramfs, so they only take effect after rebuilding it and rebooting ('update-grub' only matters for kernel command-line changes).

You can check whether that's the case by rebooting your host and running 'lspci -k' before you start the VM.
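
For reference, a minimal sketch of what the modprobe.d file plus the apply step could look like (the 'softdep' line is an alternative to a hard blacklist: it just makes sure vfio-pci claims the card before mpt3sas can; the file name raid.conf is simply the one used above):

Code:
# /etc/modprobe.d/raid.conf
options vfio-pci ids=1000:0072
softdep mpt3sas pre: vfio-pci

# modprobe.d options are read from the initramfs at boot,
# so rebuild it and reboot for the change to take effect
update-initramfs -u -k all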
 
