SATA disk passthrough with SMART functionality

diversity

Feb 19, 2020
LS,

I have 3 WD Red Sata drives in pve 6.1 that I would like to pass through to a VM. Please see images below.

In the UEFI settings of the motherboard (ASRock X570 Creator) I have set the SATA mode to AHCI, not RAID.

The onboard SATA controller is an ASMedia ASM1061 and the disks are connected to it, but its device ID is not listed when I try to pass through the SATA controller.

No problem, I thought, I'll pass through the disks instead using
qm set 101 -scsiX /dev/sdaX
scsihw = VirtIO SCSI

But it looks as if the disks are not really being passed through, because SMART functionality is not available on the disks inside the VM, and that is a requirement for what I am trying to accomplish.

Not sure if this has anything to do with it, but the disks are connected from the mobo via SATA cables to a hot-swap bay with a BPN-SAS-733TQ backplane.

How can I pass through the disks directly keeping SMART functionality available?

Kind regards,


1582208546398.png

1582208997691.png
 
You are doing a LUN passthrough, not a physical device passthrough.
This is the reason why SMART is not available.
You could use a dedicated HDD adapter (HBA) and pass that through with PCIe passthrough. That would expose SMART as well.

If I may: why do you need SMART at all?
It is active on the host and can alert you from there...

/e: you should avoid using /dev/sdX when doing that type of configuration.
These names can change. Better to use the links in /dev/disk/by-id.
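If it helps, a minimal sketch of what that looks like (the VM ID 101 and the disk serial below are made-up examples, substitute your own):

```shell
# List the persistent names; each link points at the matching /dev/sdX
ls -la /dev/disk/by-id/

# Attach the disk to VM 101 via its by-id link; unlike /dev/sdb,
# this name survives reboots and cabling changes
qm set 101 -scsi1 /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-EXAMPLE123
```

Note this is still a LUN passthrough, so it won't give you SMART in the guest; it just makes the configuration robust against device renaming.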
 
Thank you @tburger for your response.

Does your suggestion regarding a dedicated HBA imply there is no such thing as onboard controller passthrough? And if so, does that also hold true for other (type 1) hypervisors? Also, is there no such thing as direct disk passthrough, regardless of hypervisor?

Among other requirements, like locking (reserving) ECC memory, which I will probably ask about in a future post should I get past this hurdle, I need to give FreeNAS full access to the disks so it can do ZFS monitoring, housekeeping and alerting with minimal deep-dive scripting, which is intimidating for newcomers like myself. Why FreeNAS? Because I am at ease with that GUI, am confident about the system, and it has proven itself over the years I have been using it.

As I am new to pve and have not yet heard much about it, good or bad, I am a bit hesitant.

Please forgive my skepticism and ignorance but as far as I understand it achieving the same basic level of configurable ZFS management and monitoring/alerting on pve is a really involved process with a steep learning curve. I hope to be mistaken but in case I am not then perhaps this is something the devs could think about for their roadmap.

I am trying out pve because I ran into a problem:
I was trying to combine too many goals with one machine and ended buying the following:
  • Casing: Supermicro SuperChassis 733TQ-668B
  • Casing PSU: 668 W, included in casing
  • Casing SAS/SATA backplane: CSE-SAS733TQ (BPN-SAS-733TQ)
  • Mobo: ASRock X570 Creator
  • CPU: AMD Ryzen 9 3950X (16 cores / 32 threads)
  • SSD: PNY XLR8 CS3030 1TB
  • 3 x Storage: WD Red (64MB cache) 3TB
  • Monitor: HKC 24S1 black (looking to get 2 more and have the onboard video also do some work)
  • 4 x ECC memory: Crucial CT16G4RFD8266 (64GB total)
  • GPU: AMD Radeon HD 7470 (2 outputs + 1 onboard on the mobo should theoretically be able to feed 3 monitors)

I'd like my ZFS storage to be my workstation at the same time ;) And there lies the challenge:
how to have the confidence in my storage that FreeNAS gives, while having a desktop environment on the same machine where I can work in different operating systems. Hence I am giving pve a try.

I am intentionally not using device IDs, so that I can hot-swap a drive if one fails with minimal reconfiguration. At least that is what I have grown accustomed to with FreeNAS, and I am hoping that having pve as a hypervisor will not change that.
Or am I missing something obvious that only a newcomer like myself could overlook?

Kind regards
 
Wow. That's a detailed answer ;)

I don't think there is an option to pass through individual SATA ports, and you are not missing anything / doing anything wrong. Maybe a "wrong" expectation...
Typically the onboard controller is part of the southbridge. That means if you passed that part through, you wouldn't have any controller left for "the other stuff".

You can get PCIe HBAs for little money (compared to what you have already invested), roughly 25-30€ on eBay. Try to find an LSI 9211-4i or -8i, or the "brothers and sisters" of that card. There is a huge family out there (including Dell, Fujitsu and IBM OEM variants). I personally have great experiences with the LSI 9211-8i and the Dell H310 (cross-flashed).

I wouldn't be surprised if you run into some challenges with your planned GPU setup as well. I'd expect the PCIe card to work in passthrough (that's the same approach as with the HBA), but the onboard graphics might be a challenge...

Hypervisors in between the hardware and the OS bring their "difficulties", but also great value. And to me Proxmox is a great option alongside the other proprietary ones.
 
Thx again @tburger

I've got good news.

I am now able to pass through the on board controller (that was previously missing as an option to add as PCI device) to the VM.

In my case it was a motherboard setting that needed altering: PCIe ARI support was disabled by default and I set it to enabled.

I removed the bogus direct disk passthrough to the VM and added the pci device that now finally showed up in the list.

To my joy, I now seem to have direct access to the disks in the VM, because SMART functionality is available.

And now comes the sad part :( FreeNAS was quick to tell me that there are issues with the disks. So, to make sure it is not a virtualization issue, I am now doing bare-metal HDD testing. I had not done so before installing them, which is a mistake I recommend newcomers avoid.

I might have overlooked setting ACS in the mobo settings. I'll try that later if need be.
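In case it is useful to others: a quick way to verify the controller now sits in its own IOMMU group (these are standard sysfs paths; the addresses lspci prints will of course differ per board):

```shell
#!/bin/sh
# Print every IOMMU group and the PCI devices it contains.
# A device can only be passed through cleanly when its group
# contains nothing the host still needs.
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for dev in "$group"/devices/*; do
        lspci -nns "${dev##*/}"
    done
done
```

If the SATA controller shares a group with a PCIe bridge or other devices, passthrough will drag those along (or fail) unless the groups can be separated.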
 
Just a hint before you chase a ghost:
this could also be an interop issue with the relatively new hardware you are using.
I do recall that FreeNAS also recommends using HBAs with "modern" LSI chipsets.
Good luck!
 
Outstanding. I had given up on proxmox earlier only to find myself back in the game.

After some tinkering with the mobo settings related to ACS I am seeing error free disks with SMART.

This is soooo encouraging.

Next step is memory for freenas VM and later audio and video for windows 10 VM.

I am sticking a while longer with Proxmox ;) My apologies if I sounded negative; it is ignorance more than anything else.
 
@diversity , could you share how you mapped your drives to the actual PCI devices, or did you just pass through all PCI SATA controllers?

I have a similar use case on the same chipset, so your feedback would be very valuable :)
 
I passed through all SATA controllers, yes. I did not tinker with mappings. Not sure we can even do that, actually.
 
@diversity : Ok, good to know. I see that mine are in IOMMU groups together with some PCIe bridges, so I'll probably need to patch the kernel to separate them. I'll try this once I have more time on my hands.

I also have a PCIe SATA controller card, like @tburger suggested , but when I plug it in, Proxmox won't boot anymore. It just halts at the following line:

vfio-pci 0000:0b:00.0: vgaarb: changed VGA decodes sata controller

, but no clue why... I've used the PCIe SATA controller card before in combination with GPU passthrough (on x79).

So currently I'm stuck with LUN passthrough. Hotswapping isn't a requirement in my case (literally - the case just doesn't allow it in any safe way). Any other drawbacks you see in this setup?
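For reference, the "patch" I mentioned is the ACS override. From what I've read, the Proxmox kernel already ships with it, so splitting the groups may only need a boot parameter rather than a custom kernel build (an assumption on my part, and note that the override deliberately weakens isolation between the devices in a group, so use at your own risk):

```shell
# /etc/default/grub (GRUB-based installs): add the ACS override flag,
# then run `update-grub` and reboot. This splits devices behind PCIe
# bridges into their own IOMMU groups, at the cost of isolation.
GRUB_CMDLINE_LINUX_DEFAULT="quiet pcie_acs_override=downstream,multifunction"
```

Afterwards the IOMMU groups can be re-checked to confirm the controllers ended up separated.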
 
@diversity : thank you :D The controllers are now in separate groups!

The ZFS array doesn't have any data on it yet, so I'll retry the intended setup and share the results.

Perhaps I should also share the hardware, for anyone else following:
  • Casing: NZXT H710i
  • PSU: Corsair M850X (850 W)
  • [ Storage Controller: Delock 89395 - 4x 6 Gb/s SATA ] <= currently not connected because of the issue mentioned earlier
  • Mobo: Asus ROG Crosshair VIII x570-hero (with bt/wifi)
  • CPU: AMD Ryzen 9 3950x (16 core : 32 thread)
  • SSD: Crucial P1 1TB (NVMe)
  • 4 x HDD: Seagate Barracuda 4TB (8TB usable in ZFS striped mirrors, similar to RAID 10)
  • 4 x Mem: Corsair Vengeance Pro RGB 16GB (64GB total)
  • GPU 1: Nvidia RTX 2070 Super (Asus)
  • GPU 2: AMD Radeon RX 580 OC (MSI)
 
Success! I've managed to pass through the controller.

Note/warning/headache prevention:

When adding all 4 of the listed SATA controllers to the VM, the VM took a long time to start and eventually the host locked up and rebooted. During the reboot I remembered that the VM was already configured to autostart and that both of my GPUs are excluded at the initramfs level, so... :rolleyes:

After a game of refreshing the Proxmox web UI during the loop, quickly entering the shell, tweaking the guest's conf file and rebooting, I was back in business.

This makes me wonder: is there some option I could enable on the GRUB boot line that prevents VMs from autostarting, for scenarios like this?
Other notes:
  • Setting the VM 'start at boot' option in the UI didn't work because of a file lock.
  • If you still have a GPU available, you're probably far better off working directly on the TTY, but the time frame to do it is still very limited.


Continuing on the actual passthrough:

Using the following command on the host, I managed to find the required controller (by its PCIe address) for my HDDs:
ls -la /dev/disk/by-path/
When adding the single controller, the disks didn't show up in FreeNAS, so I remembered that (as with GPU passthrough use cases) the guest requires UEFI instead of SeaBIOS. Reinstalling FreeNAS is only a matter of 3 minutes, so I added the EFI disk, reinstalled the OS, and finally the disks appeared.
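To spell it out for anyone repeating this (the address and VM ID below are examples, yours will differ): the by-path names embed the controller's PCI address, which is what the hostpci option needs.

```shell
# by-path names look like pci-0000:0b:00.0-ata-1 -> ../../sdb,
# so the controller for that disk sits at PCI address 0b:00.0
ls -la /dev/disk/by-path/

# Pass that whole controller (not the individual disk) to VM 101
qm set 101 -hostpci0 0b:00.0
```

With the whole controller in the guest, the disks appear as native SATA devices there, SMART included.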

@diversity : thanks again for the push in the right direction :)
 
That is great news! I am sorry I could not really analyze your scenario as I am kind of a newcomer to this field. So I am happy that I at least could give you some directions.
 
I'm interested in this (I currently monitor smart on the host).

My question is how are you managing to keep the host disk in the host if you're passing through the whole controller?

Edit: I see the host uses an NVMe drive.
 
@sshaikh : In our cases we indeed use an NVMe drive for the host. However, you could also check whether or not all your SATA ports are on the same controller in the same IOMMU group. If not, then you can probably pass through some ports instead of all of them.

Other options which I can think of:
- Add a PCIe SATA controller, so the motherboard SATA ports can be reserved for the host;
- Use a minimal USB boot drive (+ containing also the image for the NAS), set the NAS to boot on startup, mount an iSCSI drive from the NAS in Proxmox.
 