HBA - PCIe pass-through on HP DL380 Gen9 w/ P840 card for Xpenology "works" but pegs the host's processor.

vetvetter

New Member
Apr 23, 2024
Hello everyone,

Over the last while I have been setting up my Proxmox 8.1.4 cluster. I will be running two nodes plus an eventual quorum witness, likely on a Raspberry Pi. I've been working through PCIe pass-through for NVMe drives as well as, in this case, an HBA for Xpenology.
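For context, my plan for the witness is a corosync QDevice on the Pi, which as I understand it would be set up roughly like this (the Pi's IP below is a placeholder):

# On the Raspberry Pi:
# apt install corosync-qnetd

# On each cluster node, then run the setup from one of them:
# apt install corosync-qdevice
# pvecm qdevice setup 192.168.1.50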

I have been able to work through all of the issues and get this working. NVMe pass-through works great on both current nodes without issue. The boxes work great and speeds are around what I would expect.

The problem comes in when I try to pass through my P840 card in HBA mode. I have the card's ID blocked as well as the hpsa driver blacklisted via config files in /etc/modprobe.d/.
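For reference, the config files look roughly like this (the vendor:device ID is from memory and may not be exact for the P840 - check # lspci -nn on your own box):

/etc/modprobe.d/vfio.conf:
options vfio-pci ids=103c:193f

/etc/modprobe.d/blacklist-hpsa.conf:
blacklist hpsa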

I also ran # update-initramfs -u -k all and rebooted the node. At that point Proxmox no longer saw the drives, which is the desired result. I spun up a new Xpenology VM with the correct params plus the P840 HBA pass-through, and on boot it was going REEEEEEALLLY slow. Then it would just hang on boot. The host was noticeably sluggish, and CPU + Mem on the VM seemed pretty pegged.
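In case it helps anyone reproduce, this is roughly how I checked that the card was detached from hpsa after the reboot (08:00.0 is a placeholder for my P840's PCI address):

# lspci -nnk -s 08:00.0

After the blacklist took effect it no longer showed "Kernel driver in use: hpsa".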

I got everything shut down, tried a few things, and found some articles about turning off memory ballooning for the VM as well as setting rombar=0 by unchecking the ROM-Bar box in the PCIe pass-through settings in the VM GUI (the resulting config lines are sketched below). All of these settings were on a freshly created VM just to be on the safe side.
At this point the VM would at least boot up and I could go through the Arc Loader Xpenology setup. The problem is it did seem sluggish again, but I was able to see my drives passed through OK inside of DSM. But the box was REEEEALLLY slow again.
Shutting down the VM doesn't seem to work, gracefully or otherwise. I could not stop it from flogging the CPU until I rebooted the node. Then it was fine until I tried the HBA pass-through again.
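For anyone curious about the exact settings, the relevant lines in /etc/pve/qemu-server/<vmid>.conf ended up roughly like this (the PCI address is a placeholder for my card's slot):

balloon: 0
hostpci0: 0000:08:00.0,rombar=0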

I think at this point I might just pass the drives through individually rather than the whole HBA. That does seem to work fine. I just wanted to get the best performance possible, thus trying the HBA pass-through. In this case it seemed to be quite the opposite!
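Passing the drives individually would be the usual per-disk pass-through, something like this per drive (VM ID and disk serial are placeholders for mine):

# qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD80EFAX-68KNBN0_XXXXXXXX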

Wanted to see if anyone has seen anything like this before. I am going to post my top output in case it helps. This was with the pass-through VM off after it had been on. Clearly something was still flogging the CPU:

top - 22:49:19 up 2:06, 1 user, load average: 1.07, 2.15, 2.25
Tasks: 885 total, 1 running, 884 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.9 us, 1.2 sy, 0.0 ni, 97.9 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 257781.5 total, 252149.7 free, 5959.6 used, 1246.1 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 251821.8 avail Mem

   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
  1923 root      20   0  171748 102688   5376 S  45.3  0.0  17:45.56 pvestatd
 25927 root      20   0   11744   4928   2688 R  16.4  0.0   0:25.27 top
  1892 root      rt   0  557328 163984  52432 S  14.3  0.1  13:56.49 corosync
 25868 www-data  20   0  255428 144404   7168 S   3.0  0.1   0:03.47 pveproxy worker
  7290 root      20   0       0      0      0 I   2.7  0.0   0:05.73 kworker/22:2-events
    17 root      20   0       0      0      0 I   1.6  0.0   1:12.53 rcu_preempt
  1814 root      20   0  631516  58448  52588 S   0.8  0.0   1:49.80 pmxcfs

Also, these are pretty beefy nodes: dual E5-2660 v4 CPUs / 256 GB RAM in each, booting from USB-attached NVMe drives, with separate NVMe drives for pass-through and some large spinners.
