Proxmox Hosting TrueNAS VM - Storage Questions

TEB0605

New Member
Jul 27, 2023
At the beginning of the year I converted my server from running TrueNAS on bare metal to Proxmox with a TrueNAS VM. At that time I was able to get all the physical disks passed through to the VM and reconfigure the RAID within the TrueNAS VM; this will become relevant below. I passed the disks directly to the VM following the recommended practice in Passthrough Physical Disk to Virtual Machine (VM), changing the command to match my disk IDs and repeating it for each disk:
Code:
qm set 592 -scsi2 /dev/disk/by-id/ata-ST3000DM001-1CH166_Z1F41BLC
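In case it helps anyone following the same guide, the stable by-id paths and the resulting mappings can be double-checked from the Proxmox shell (VM ID 592 is just the one from the example above):
Code:
ls -l /dev/disk/by-id/ | grep -v part    # list stable disk paths (ignore partition entries)
qm config 592 | grep /dev/disk/by-id     # confirm which physical disks are mapped into the VM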

While I have had decent performance with the SMB share from TrueNAS, I have not been able to get "normal" scrub times on the RAIDZ2 within TrueNAS. I know that slow or unsuccessful scrubs can mean slow, or possibly unsuccessful, resilvers should a disk go down. I should mention that all of my disks are less than a year old, so they shouldn't fail anytime soon, but you never know.
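For reference, scrub progress and throughput can be watched from a shell inside the TrueNAS VM while the scrub runs; "tank" below is a placeholder for the actual pool name:
Code:
zpool status -v tank      # shows scan rate, scanned/issued totals and estimated completion time
zpool iostat -v tank 5    # per-vdev read/write throughput, refreshed every 5 seconds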

I have been trying to track down whether there are any settings I should have added or changed within Proxmox to get the best performance, since they could be affecting the scrubs. Here are a few things I am wondering about that may be the cause of the slow scrubs (4-5 days):

The disks that were already in use before I converted the TrueNAS server to a Proxmox VM show up differently in the VM's Hardware list than the disks I added directly to Proxmox later. In the Proxmox host disk list, the drives from the original TrueNAS configuration are listed as
Code:
/dev/sdb
   |_ /dev/sdb1   Linux RAID Member   2.15GB   GPT=yes   Mounted=No
   |_ /dev/sdb2   ZFS                 8TB      GPT=yes   Mounted=No
Whereas the disks added once TrueNAS had been moved to Proxmox as a VM are listed as follows
Code:
/dev/sdf     zfs_member     8TB       GPT=no     Mounted=No
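To compare how these two kinds of disks are laid out, the partition tables can be listed side by side from the host (device names taken from the listings above):
Code:
lsblk -o NAME,FSTYPE,SIZE,TYPE /dev/sdb /dev/sdf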

I know that TrueNAS is often picky about serial numbers being set on the disks, and I have done so for all of them; see the disk configuration from the VM config file:
Code:
sata0: /dev/disk/by-id/ata-ST8000NM0055-1RM112_ZA1AEE6R,backup=0,serial=ZA1AEE6R,size=7814026584K
sata2: /dev/disk/by-id/ata-ST8000NM0055-1RM112_ZA1AEE3R,backup=0,serial=ZA1AEE3R,size=7814026584K
sata3: /dev/disk/by-id/ata-WDC_WD82PURZ-85TEUY0_VDKDGSSK,backup=0,serial=VDKDGSSK,size=7814026584K
sata4: /dev/disk/by-id/ata-ST8000NM0055-1RM112_ZA1AEFAB,backup=0,serial=ZA1AEFAB,size=7814026584K
sata5: /dev/disk/by-id/ata-WDC_WD82PURZ-85TEUY0_VDKNZHJK,backup=0,serial=VDKNZHJK,size=7814026584K
scsi0: local-lvm:vm-100-disk-0,iothread=1,size=128G,ssd=1,discard=on
scsi10: /dev/disk/by-id/ata-ST8000NM0045-1RL112_ZA1A1AFF,backup=0,serial=ZA1A1AFF,size=7814026584K
scsi11: /dev/disk/by-id/ata-ST8000NM0045-1RL112_ZA1F90DV,backup=0,serial=ZA1F90DV,size=7814026584K
scsi12: /dev/disk/by-id/ata-ST8000NM0045-1RL112_ZA1FD9W3,backup=0,serial=ZA1FD9W3,size=7814026584K
scsi13: /dev/disk/by-id/ata-ST8000NM0045-1RL112_ZA1A2JFJ,backup=0,serial=ZA1A2JFJ,size=7814026584K
scsi2: /dev/disk/by-id/ata-WDC_WD82PURZ-85TEUY0_VDHVK1TK,backup=0,serial=VDHVK1TK,size=7814026584K
scsi3: /dev/disk/by-id/ata-ST8000NM0045-1RL112_ZA1HCM81,backup=0,serial=ZA1HCM81,size=7814026584K
scsi4: /dev/disk/by-id/ata-ST8000NM0045-1RL112_ZA1JPAM7,backup=0,serial=ZA1JPAM7,size=7814026584K
scsi5: /dev/disk/by-id/ata-ST8000NM0045-1RL112_ZA1FZQRS,backup=0,serial=ZA1FZQRS,size=7814026584K
scsi6: /dev/disk/by-id/ata-ST8000NM0045-1RL112_ZA1958YY,backup=0,serial=ZA1958YY,size=7814026584K
scsi8: /dev/disk/by-id/ata-OWC_Mercury_Electra_6G_SSD_OW1606171001F3204,backup=0,serial=OW1606171001F3204,size=234431064K,ssd=1
scsi9: /dev/disk/by-id/ata-ST8000NM0045-1RL112_ZA1FDK7T,backup=0,serial=ZA1FDK7T,size=7814026584K
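To double-check that these serials actually show up inside the guest, they can be listed from a TrueNAS shell:
Code:
lsblk -o NAME,SERIAL,SIZE,MODEL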

I am also unsure what I should be using for the cache and Async IO settings on these disks.
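For reference, the combination I have seen suggested most often for raw disks backing an in-guest ZFS pool is cache=none (ZFS in the VM does its own caching) together with aio=native or the io_uring default, but I would like confirmation. As a sketch, applied to one of the disks above (VM ID 100 taken from the scsi0 line):
Code:
qm set 100 -scsi2 /dev/disk/by-id/ata-WDC_WD82PURZ-85TEUY0_VDHVK1TK,cache=none,aio=native,serial=VDHVK1TK,backup=0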

I am totally open to any other configuration recommendations or changes, as I am new to the Proxmox community.

Server Info:
CPU(s) 12 x Intel(R) Xeon(R) CPU E5645 @ 2.40GHz (1 Socket)
Kernel Version Linux 5.15.102-1-pve #1 SMP PVE 5.15.102-1 (2023-03-14T13:48Z)
PVE Manager Version pve-manager/7.4-3/9002ab8a
48GB RAM
 
The RAID configuration in TrueNAS is as follows
3 vdevs of 5 disks each, each vdev in RAIDZ2, plus a 200GB SSD cache (L2ARC) disk

46GB of the RAM from the host is passed through to the TrueNAS VM

Disks are connected to the host using 2x 10 Port SATA PCIe x1 6Gbps SATA 3.0 Controller
Proxmox detects the SATA controller as an ASM1061 SATA IDE Controller

I was also wondering whether, instead of detecting the disks on the host and passing them through to the VM, I should just be doing PCI passthrough of the SATA controller. I know my motherboard has IOMMU capabilities, as I previously passed through a GPU that way. Would I lose the data if I tried that?
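For context, my understanding is that the mechanics would look roughly like this (the PCI address is a placeholder; I would confirm the IOMMU group first), and since ZFS identifies pool members by their on-disk labels the pool should import either way, though I would keep a backup before changing anything:
Code:
lspci -nn | grep -i sata            # find the controller's PCI address
qm set 100 -hostpci0 0000:03:00.0   # pass the whole controller to the VM (placeholder address)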
 
With that many disks I would have bought a PCIe HBA card for PCI passthrough. That is the only way your TrueNAS can directly access the real physical disks without additional overhead. Right now your TrueNAS is still working with virtual disks that are mapped to the physical disks.

Disks are connected to the host using 2x 10 Port SATA PCIe x1 6Gbps SATA 3.0 Controller
15 disks on 2 PCIe lanes looks like a bottleneck. The CPU only supports PCIe 2.0 so ~0.4GB/s per lane. So 2 * 0.4GB/s / 15 ports = 53MB/s bandwidth per disk?
 
With that many disks I would have bought a PCIe HBA card for PCI passthrough. That is the only way your TrueNAS can directly access the real physical disks without additional overhead. Right now your TrueNAS is still working with virtual disks that are mapped to the physical disks.


20 disks on 2 PCIe lanes sound like a bottleneck. The CPU only supports PCIe 2.0 so ~0.4GB/s per lane. So 2 * 0.4GB/s / 20 ports = 25MB/s bandwidth per disk?
So basically I need to get a new motherboard, a processor, and an HBA card to solve this problem.

I currently have another Proxmox server that has better specs. It sounds like I need to move the storage over to that server, change to an HBA card, and then pass that HBA through to the TrueNAS VM directly.
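If I do move it, my understanding is that TrueNAS has an export/disconnect and import workflow in the UI for this, and that underneath it comes down to the usual ZFS steps ("tank" is a placeholder pool name):
Code:
zpool export tank                  # inside the TrueNAS VM, before shutting down and pulling the disks
zpool import -d /dev/disk/by-id    # on the new system, to list and then import the pool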

Second server specs:

CPU(s) 16 x AMD Ryzen 7 5700X 8-Core Processor (1 Socket)
Kernel Version Linux 6.2.16-3-pve #1 SMP PREEMPT_DYNAMIC PVE 6.2.16-3 (2023-06-17T05:58Z)
PVE Manager Version pve-manager/8.0.3/bbf3993334bfa916
64GB RAM
 
Sorry, that should be "2 * 0.4GB/s / 15 ports = 53MB/s bandwidth per disk", but when hitting the pool with heavy load you would still be capped by the bandwidth of the PCIe lanes and not by the performance of the HDDs. Two cheap LSI SAS2008 cards would do the job: each one gets PCIe 2.0 x8, so roughly 3.2GB/s for its 8 SAS/SATA ports.

And yes, a faster CPU might help too; ZFS benefits from higher single-threaded performance. But I'm not sure how much that will help with slow spinning rust. I always add some mirrored SSDs as special devices so the HDDs aren't hit by all that metadata IO any longer. Also make sure not to fragment your pool too much by filling it up completely: the fuller it gets, the faster it fragments, and you can't defragment ZFS without moving all the data off the pool and back.
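As a rough sketch of what that looks like (pool name and SSD IDs are placeholders; keep in mind a special vdev becomes a permanent, pool-critical member, so it has to be mirrored):
Code:
zpool add tank special mirror /dev/disk/by-id/ata-SSD_SERIAL_A /dev/disk/by-id/ata-SSD_SERIAL_B
zpool list -o name,size,capacity,fragmentation tank    # keep an eye on fill level and fragmentation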
 
You are amazing thank you so much!
 
