HDD not appearing in TrueNAS VM

1nsamity (New Member) · Jan 4, 2023
Hi,

I recently finished building my home server and was trying to set up a TrueNAS VM. The only problem I'm having is that the HDDs are not appearing within the TrueNAS installer. When I go back to Proxmox I can see all three disks. I was wondering if there was a configuration step or issue I might've missed. Here are some things I did before setting up the VM:
1) Wiped all three disks and initialized them with GPT
2) When creating the TrueNAS VM, I switched the machine under the "System" tab to "q35"
3) All three disks aren't connected to an HBA but directly to the SATA ports on my motherboard
4) System Specs: Ryzen 5600G, Asrock B550M-ITX/ac, 16GB DDR4, 256GB TeamGroup NVME SSD (Boot Drive), 3x WD Red Pro NAS HDDs FFBX.
I'm open to switching to a different NAS option if that resolves my issue. Sorry if this question has already been asked; I couldn't find it.
 
Hi leesteken, I'm not too sure where exactly to find the VM configuration file. Are you asking about IOMMU? I get this message when I SSH into Proxmox and check dmesg:
[screenshot: dmesg output reporting that IOMMUv2 is not available]
Or is it the Options menu for the VM?
[screenshot: the VM's Options panel]

Please let me know if this helps or if I need more information!
 
Please show the output of this console/shell command: cat /etc/pve/qemu-server/100.conf. It's not clear to me how the VM is supposed to see your physical disks connected to your motherboard's SATA ports.
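For reference, a freshly created VM's config looks something like this (a hypothetical sketch; VM ID 100 and all values are assumptions):

cat /etc/pve/qemu-server/100.conf
# Hypothetical output - note that nothing here points at the physical
# SATA disks, only at a virtual boot disk on "local-lvm" storage:
#   cores: 4
#   machine: q35
#   memory: 16384
#   scsi0: local-lvm:vm-100-disk-0,size=32G
#   scsihw: virtio-scsi-pci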
 
I'm not even sure if he understands that VMs can't make use of the host's physical hardware without PCI passthrough. It sounded more like he thinks "when PVE can see the disks, the TrueNAS VM should see them too".
 
You have not "passed through" or "shared" the physical disks with the VM; that's the reason they do not show up inside the VM.
The idea of virtualizing systems is to decouple them from your actual hardware (so they can run anywhere and even move between Proxmox hosts in a cluster).
Please investigate the options for sharing actual hardware with VMs (as linked in my previous reply) and decide if that's what you want, as using physical hardware in VMs works against many of the advantages of virtualization.
 
That is not how virtualization works. You've got two options (see the sketch after this list):
1.) Use disk passthrough. With that you can pass individual disks from the host through to the VM, but the VM won't see your real disks. All the VM will see are virtual disks that map to your physical disks, so things like SMART won't be available in the VM, you get additional overhead, and the VM won't start in case a disk fails.
2.) Buy an HBA card, put it in your first PCIe x16 slot, and then use PCI passthrough to pass the whole HBA card into that VM with all disks attached to it. That is the only way a VM can directly access the real physical disks. But you need a mainboard/BIOS/CPU/HBA that supports this.
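A minimal sketch of both options on the CLI (VM ID 100, the disk ID, and the PCI address are made-up placeholders; look up yours with ls /dev/disk/by-id/ and lspci):

# Option 1: map one physical disk into VM 100 as a virtual SCSI disk.
# Always use the stable /dev/disk/by-id path, never /dev/sdX:
qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD4003FFBX-68MU3N0_WD-XXXXXXXX

# Option 2: pass a whole HBA (here at a made-up PCI address) through to VM 100:
qm set 100 -hostpci0 0000:01:00.0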
 
Thank you for your input leesteken and Dunuin. It seems that using an HBA would be the best option. Just two more questions. My previous post had a screenshot of the output saying IOMMUv2 is not available, but the Proxmox documentation says AMD CPUs have it on by default. Does this mean that IOMMU is enabled or not? Also, for disk passthrough, if a disk fails, is there no way to rebuild the array since the VM won't start?
 
Also for the disk passthrough, if a disk fails, is there no way to rebuild the array since the VM won't start?
Yup. Let's say you've got a raidz1 with 3 disks, you pass through all 3, and one of them fails. Usually that wouldn't be a problem and TrueNAS would continue running normally, as losing a single disk is perfectly fine. But when a passed-through device is missing, PVE won't let you start the VM. You would first need to remove that failed passed-through disk from the VM config so that PVE can start the VM.
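Removing it is a one-liner (hypothetical example, assuming the failed disk was attached to VM 100 as scsi1):

# Drop the missing disk from the VM config so PVE will start the VM again;
# TrueNAS then boots with a degraded pool and you can replace the disk later:
qm set 100 --delete scsi1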
My previous post had a screenshot of the output for IOMMUv2 not being available but the proxmox documentation listed as AMD CPUs having it on by default. Does this mean that IOMMU is enabled or not?
You need to enable some features in the BIOS/UEFI first, and then you need to do most of the steps described here to actually enable IOMMU:
https://pve.proxmox.com/wiki/Pci_passthrough
Only after that can you assign a PCI device to a VM using the webUI.
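Roughly, the wiki's steps boil down to something like this (a sketch for a GRUB-based install; on recent kernels the AMD IOMMU driver is on by default, so the kernel parameters may not even be needed on AMD):

# 1) Enable SVM and IOMMU support in the BIOS/UEFI.
# 2) Add the kernel parameters in /etc/default/grub, e.g.:
#      GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
#    then rebuild the bootloader config and reboot:
update-grub
# 3) After the reboot, check that IOMMU is active:
dmesg | grep -e DMAR -e IOMMU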

Also keep in mind that you can't PCI passthrough individual disks (except for NVMe drives), and usually you can't pass through the chipset's disk controller, because it typically shares an IOMMU group with other mainboard components. And even if you could pass through the chipset's disk controller, it would pass through the whole controller with all disks attached to it. And when passing through hardware, your host can't use that hardware any longer, so you can't use the same controller for your PVE system disks/VM/LXC storage and your TrueNAS disks.
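You can check how your IOMMU groups look with a small loop over sysfs (standard paths; this only shows anything once IOMMU is actually enabled):

# List every IOMMU group and the PCI devices it contains:
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    lspci -nns "${d##*/}"
  done
done

If your SATA controller shares a group with other chipset devices, you can't cleanly pass it through.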

And 16 GB RAM is really low... Did you read the hardware recommendations for TrueNAS?

Memory Sizing

TrueNAS has higher memory requirements than many Network Attached Storage solutions for good reason: it shares dynamic random-access memory (DRAM or simply RAM) between sharing services, add-on plugins, jails, and virtual machines, and sophisticated read caching. RAM rarely goes unused on a TrueNAS system and enough RAM is key to maintaining peak performance. You should have at least 8 GB of RAM for basic TrueNAS operations with up to eight drives. Other use cases each have distinct RAM requirements:

  • Add 1 GB for each drive added after eight to benefit most use cases.
  • Add extra RAM (in general) if more clients will connect to the TrueNAS system. A 20 TB pool backing lots of high-performance VMs over iSCSI might need more RAM than a 200 TB pool storing archival data. If using iSCSI to back VMs, plan to use at least 16 GB of RAM for reasonable performance and 32 GB or more for optimal performance.
  • Add 2 GB of RAM for directory services for the winbind internal cache.
  • Add more RAM as required for plugins and jails as each has specific application RAM requirements.
  • Add more RAM for virtual machines with a guest operating system and application RAM requirements.
  • Add the suggested 5 GB per TB of storage for deduplication that depends on an in-RAM deduplication table.
  • Add approximately 1 GB of RAM (conservative estimate) for every 50 GB of L2ARC in your pool. Attaching an L2ARC drive to a pool uses some RAM, too. ZFS needs metadata in ARC to know what data is in L2ARC.
Don't know how big your three WD Reds are, but 16GB RAM is at the lower end of what you would use just for TrueNAS alone.
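As a rough, hypothetical example of that math: a box with twelve drives and a 100 GB L2ARC would already want about 8 GB (base) + 4 x 1 GB (drives nine through twelve) + 2 x 1 GB (L2ARC) = 14 GB for TrueNAS alone, before any jails, VMs, or deduplication.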
 
Hold up, could I just use Proxmox as a NAS in this case by setting up ZFS on the host itself instead of in a VM? If possible, could I be directed to a forum thread or post that dives deeper into this subject? Thank you again for your reply, I appreciate you dealing with my stupidity and ignorance.
 
Sure, but PVE doesn't offer any NAS features out of the box. It is based on Debian, though, so you can do anything you could do with a Debian server, including running an SMB/NFS server. But you would need to set everything up yourself using the CLI and by editing config files. There is no GUI/webUI for that.
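A minimal sketch of what that could look like, assuming a ZFS pool named "tank" already exists on the host (the pool, dataset, and share names are made up):

# Install Samba on the PVE host and create a dataset to share:
apt update && apt install -y samba
zfs create tank/share

# Append a simple share definition to /etc/samba/smb.conf:
cat >> /etc/samba/smb.conf <<'EOF'
[share]
   path = /tank/share
   browseable = yes
   read only = no
EOF

systemctl restart smbd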
 
