help with first setup of new hdds

escaparrac

Hi guys.

I'm building my first server and I started by just trying some VMs, learning Docker, learning how to pass through the iGPU and so on.

Main specs:
i5 10400 (6 cores / 12 threads)
16 GB DDR4 RAM (I plan to add 16 GB more)

Now I bought two 4 TB NAS disks and I want to start pouring data onto them. Here is my plan for you to judge:

My Proxmox server is installed on a raidz1 of two 124 GB SSDs.
My main VMs and containers currently live on a 500 GB M.2 NVMe drive.

I want to add storage for my containers (Plex, Jellyfin, Nextcloud), for my photo/movie backups, and for backups of my VMs.

I don't know what the best way to do this would be.

TrueNAS in raidz1 again, sharing via SMB?
Mounting them directly in Proxmox?

I want to control which machines can access which datasets/folders on those 4 TB drives.
Example: Plex should have access to "movies" and "tv shows". Nextcloud could have access to everything except the backups folder. From my Windows PC on the network I want to access everything, but from my dad's PC I want him to be able to read only the Pictures folder, for example.

What do you think would be the best way to configure those drives?

Have a nice day!
 
Hi,

if you want to share data between VMs, you need some kind of network share that all of them can access. You could run a container with a Samba share, for example.
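For example, a rough sketch of such a Samba container setup (a Debian-based LXC is assumed here; the share names, paths and users are just examples picked to match the folders from the first post):

apt update && apt install -y samba

# One share per folder, so access can be restricted per user/machine:
cat >> /etc/samba/smb.conf <<'EOF'
[movies]
   path = /mnt/media/movies
   valid users = plex, escaparrac
   read only = no

[pictures]
   path = /mnt/media/pictures
   valid users = escaparrac, dad
   read only = yes
   write list = escaparrac
EOF

# Samba keeps its own password database; the Unix users must exist first
# (e.g. adduser --no-create-home dad), then:
smbpasswd -a dad
systemctl restart smbd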
 
Would TrueNAS be fine? I've read I can create SMB shares for every dataset.
https://www.truenas.com/docs/core/coretutorials/sharing/smb/smbshare/
If you want a GUI and want to stick with ZFS, yes. Just keep in mind that your TrueNAS won't be able to see or access the physical HDDs unless you PCI passthrough the complete SATA controller (which will pass through all SATA ports) into that VM. You could still pass through single disks using "qm set" disk passthrough, but then your TrueNAS would just be working with virtualized disks.
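A minimal sketch of that "qm set" disk passthrough, assuming VM ID 100 and a free scsi1 slot (the disk ID below is a placeholder, list yours with "ls -l /dev/disk/by-id/"):

qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD40EFZX-EXAMPLE_SERIAL

# check that the disk got attached to the VM config:
qm config 100 | grep scsi1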

Also check that your HDDs aren't using SMR if you use ZFS. Modern "WD Red" drives, for example, use SMR and shouldn't be used with ZFS. You would need to buy the "WD Red Plus" or "WD Red Pro" to get a CMR disk.
It's similar with ZFS and SSDs. It is highly recommended to only use enterprise/datacenter grade SSDs with ZFS. Otherwise your performance might be very low for specific workloads (like the sync writes databases do), your SSDs might be written to death within months because of the much higher wear, and a power outage might kill your mirrored pool, as both SSDs can then fail at once, losing the cached data that was waiting to be written to NAND.
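Regarding the SMR check: smartctl at least shows you the exact model number, which you can then look up in the manufacturer's CMR/SMR listings (the device name is a placeholder; smartctl itself won't reliably tell you the recording technology):

apt install -y smartmontools
smartctl -i /dev/sda | grep -i -e model -e rotation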
As far as I know I could mount those shares in Proxmox too, right?
Jup
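For reference, a sketch of how such a mount could look from the PVE side (IP, share name and credentials are placeholders):

# add the SMB share as a Proxmox storage, e.g. for VM backups:
pvesm add cifs truenas-backups --server 192.168.1.50 --share backups \
    --username backupuser --password 'secret' --content backup

# or a plain mount on the host (needs cifs-utils installed):
mkdir -p /mnt/truenas-backups
mount -t cifs //192.168.1.50/backups /mnt/truenas-backups \
    -o username=backupuser,password=secret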
 
The passthrough thing worries me. I just received my drives (WD Red Plus and IronWolf).

Would buying a standard SATA HBA card do the trick? Or... what are the disadvantages of using the "qm set" passthrough thing (realistically, for a home lab)? This doesn't sound frightening to me: "As the disk is attached to the physical and virtual host, this will also prevent Virtual Machine live migration. A second side effect is host system IO wait, when running ddrescue, other VM's running on the host can stutter."
 
With TrueNAS Core you are very limited in the HBAs you can use. LSI HBAs have good support; for everything else you should check the FreeBSD hardware compatibility list first: https://www.freebsd.org/releases/12.0R/hardware/#support
And PCI passthrough is a bit tricky, and how well it works really depends on your hardware. Mainboard, CPU, PCI card, BIOS and drivers all need to support it. With consumer mainboards, for example, most of the time only the one or two PCIe x16 slots meant for the GPU will work.
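A quick way to check whether your hardware even exposes usable IOMMU groups (no output or one huge group usually means passthrough won't work well; IOMMU also has to be enabled in the BIOS and on the kernel cmdline, e.g. intel_iommu=on):

dmesg | grep -e DMAR -e IOMMU

# list IOMMU groups; the SATA controller / HBA should sit in its own group:
for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=${d#*/iommu_groups/}; n=${n%%/*}
    printf 'IOMMU group %s: ' "$n"
    lspci -nns "${d##*/}"
done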
Or... what are the disadvantages of using the "qm set" passthrough thing (realistically, for a home lab)?
As the guest OS will only see virtual disks, TrueNAS can't use SMART to monitor disk health (but you could monitor the disks in PVE).
The virtual disks TrueNAS sees will use 512B/512B logical/physical sectors even if your physical HDDs use something like 512B/4K sectors.
You add another abstraction layer when there is additional virtualization between the guest OS and the physical disk, so the same points raised against HW raid cards should apply here too: https://openzfs.github.io/openzfs-docs/Performance and Tuning/Hardware.html#hardware-raid-controllers
I personally got some strange behavior when using "qm set" disk passthrough, like partitions being shown by fdisk from inside the guest but not from the host. I doubt you could move the disks to another host and import the ZFS pool there, because the partitions storing your ZFS data are only visible within that VM. Not all passed-through disks behaved like this, but the fact that half of them did makes me not really trust it.
And the last point that comes to my mind: additional virtualization also means more overhead. But it looks like that overhead isn't that big.
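Related to the sector size point above, you can compare what the host reports for the physical disk with what the TrueNAS guest reports for its virtual disk (device names are examples):

# on the PVE host:
lsblk -o NAME,MODEL,LOG-SEC,PHY-SEC /dev/sdb
# inside the TrueNAS VM the virtual disk will typically report 512/512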
 
Thanks for the info. I'll investigate.

I was planning to use TrueNAS Scale though; that's a Linux-based OS, right?
 
Jup, then you get way better driver support because it's Linux. But with Scale, keep in mind that it just came out of its testing phase and isn't finished yet with all planned features. So it's not that great if you rely on the data you want to store. I will stick with Core for at least 2 more years and wait for Scale to get more mature and optimized.
 
Thanks, then I will try to look for a compatible HBA card, or explore how to mount them directly on Proxmox and make SMB shares from there.
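In case you go the "directly on Proxmox" route, a rough sketch of the pool layout (pool name, disk IDs and dataset names are just examples based on the folders from the first post):

zpool create -o ashift=12 tank mirror \
    /dev/disk/by-id/ata-WDC_WD40EFZX-EXAMPLE1 \
    /dev/disk/by-id/ata-ST4000VN006-EXAMPLE2

# one dataset per share keeps permissions, quotas and snapshots separate:
zfs create tank/movies
zfs create tank/tvshows
zfs create tank/pictures
zfs create tank/backups

The datasets could then be exported per folder with a Samba setup like the one sketched earlier in the thread, or added to PVE as a directory/ZFS storage for VM backups.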
 
