NAS VM set up - To disk passthrough or not? Trim?

sab0

New Member
Mar 23, 2023
Hi all,

I have a build using standard PC parts, and one of my VMs runs the OpenMediaVault NAS OS.

Storage-wise, I currently have:
There are a couple of things:
  • I've recently read that using passthrough is not a good idea for NVMe drives and SSDs, as the TRIM function isn't automatically set up. What do I need to do to ensure TRIM is being used? Bear in mind I've not changed any settings other than enabling "SSD emulation".
  • Would I be better off not passing through the SSDs and making the disks Proxmox storage, effectively creating a virtual disk that uses up all the storage of the individual drives?
    • As the information on the SSDs isn't critical, they are not backed up - does this cause potential issues if the VM is restored?
    • As the information isn't being written directly to disk but to the virtual disk, I assume that if it's not backed up using Proxmox, I would lose the disk data if the VM is restored?
  • The cache default is "no cache" - as I'm passing through disks, should this be changed to "write through"? Does that apply only to SSDs, or to HDDs too?
  • An HBA might be better; unfortunately, HBAs that support TRIM are more expensive and quite energy-hungry. Living in the UK, I would like a minimal increase in watts, so I was thinking of a hybrid setup if possible, with the SSDs on motherboard SATA ports and an HBA for the HDDs. That might be the best of both worlds and let me use a cheap, power-sipping HBA!
Thanks for any help, it's much appreciated.
 
I've recently read that using passthrough is not a good idea for NVMe drives and SSDs, as the TRIM function isn't automatically set up. What do I need to do to ensure TRIM is being used? Bear in mind I've not changed any settings other than enabling "SSD emulation".
You need to enable "discard" for the virtual disk and make sure that OMV is also using discard/fstrim.
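
As a rough sketch from the PVE shell (the VM ID 100 and disk name below are placeholders, not your actual setup):

    # enable discard and SSD emulation on an existing virtual disk
    qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on,ssd=1
    # inside the guest, check that the disk now advertises TRIM support
    lsblk --discard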

Would I be better off not passing through the SSDs and making the disks Proxmox storage, effectively creating a virtual disk that uses up all the storage of the individual drives?
Disk passthrough uses virtual disks too; each physical disk gets a virtual disk that is mapped to it. If you don't want reduced reliability, I wouldn't JBOD/RAID0 those disks on the PVE side.
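
For reference, that mapping is created by pointing a virtual disk at the physical device (the VM ID and by-id path here are made-up examples):

    # map a whole physical disk into VM 100 as scsi1
    qm set 100 --scsi1 /dev/disk/by-id/ata-ExampleVendor_SSD_SERIAL123
    # the guest still sees a "QEMU HARDDISK", but all writes land on that physical drive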

As the information on the SSDs isn't critical, they are not backed up - does this cause potential issues if the VM is restored?
Not with disk passthrough. With virtual disks, PVE would wipe them on restore when they are excluded from the backup. See here: https://forum.proxmox.com/threads/feature-request-advanced-restore-options-in-gui.109707/
 

Hi Dunuin, thanks for your help.

Re: 1st point - I'll enable discard, but what do I change in the OMV VM? I'm not sure the OS sees them as SSDs; they just show up as "QEMU HARDDISK".

Re: 2nd & 3rd points - I see now that Proxmox is still creating a virtual disk, which is why it comes through to the OS as a QEMU HARDDISK. This explains why SMART values don't come through to the OMV (NAS) OS. So really, it seems a much better idea to do the passthrough, as I am doing at the moment. This comes back to my first question: what are the optimum settings for NVMe drives and SSDs?
  1. SSD emulation
  2. Discard
  3. cache?
  4. anything else?
 
Re: 1st point - I'll enable discard, but what do I change in the OMV VM? I'm not sure the OS sees them as SSDs; they just show up as "QEMU HARDDISK".
You either need to mount all your filesystems with the correct mount option (for example, the "discard" option for ext4) or set up a scheduled task like a daily "fstrim -a".
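
The continuous-discard variant in /etc/fstab inside OMV could look something like this (the UUID and mount point are placeholders):

    # /etc/fstab - mount an ext4 data disk with continuous discard
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /srv/data  ext4  defaults,discard  0  2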


I see now that Proxmox is still creating a virtual disk, which is why it comes through to the OS as a QEMU HARDDISK. This explains why SMART values don't come through to the OMV (NAS) OS.
Yes, the only option to work with the real physical disks is to PCI passthrough the whole HBA with all disks attached to it.
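
A minimal sketch of that, assuming IOMMU/VT-d is already enabled on the host (the PCI address below is a placeholder - look yours up first):

    # find the controller's PCI address
    lspci | grep -iE 'sata|sas'
    # pass the whole controller into VM 100
    qm set 100 --hostpci0 0000:01:00.0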

  1. SSD emulation
  2. Discard
  3. cache?
  4. anything else?
SSD emulation for SSDs. Discard for SSDs and SMR HDDs. Cache really depends. I usually keep it at "none" for best data integrity.
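
Put together, a passthrough SSD entry in the VM config (e.g. /etc/pve/qemu-server/100.conf - the ID and device path are placeholders) would end up looking something like:

    # cache=none is the default and is usually omitted from the config
    scsi1: /dev/disk/by-id/ata-ExampleVendor_SSD_SERIAL123,discard=on,ssd=1,cache=none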
 
You either need to mount all your filesystems with the correct mount option (for example, the "discard" option for ext4) or set up a scheduled task like a daily "fstrim -a".
They are all ext4 drives, so I will look into the discard option. In the meantime, I will set up a cron with "fstrim -a".

[Attachment: trim.jpg - fstrim output]

Quite a bit trimmed!
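
For reference, a daily cron entry in root's crontab could look like this (the time of day is arbitrary); on systemd-based installs, the bundled timer is an alternative:

    # crontab -e (as root): trim all mounted filesystems daily at 03:00
    0 3 * * * /sbin/fstrim -a
    # or enable the systemd timer shipped with util-linux instead
    systemctl enable --now fstrim.timer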

Yes, the only option to work with the real physical disks is to PCI passthrough the whole HBA with all disks attached to it.
Once I have some spare money, that will be the next step I think!
SSD emulation for SSDs. Discard for SSDs and SMR HDDs. Cache really depends. I usually keep it at "none" for best data integrity.

That's awesome information about SMR HDDs, as I have one. Thank you! Just to confirm something else: as the other VMs are running on local-lvm, they don't need discard because the Proxmox OS does that, correct?

Interesting that you go for no cache; is that in case of a power outage? From reading, disk cache seems pretty useful though... Cache passthrough would be best, I presume, if you wanted to run a cache?

At the end of it all, my SSD pool has sped up, but strangely I'm still getting slower SMB transfer speeds than from my HDD pool... not sure where to go next, tbh! Thanks for answering all my questions; it's really appreciated!
 
That's awesome information about SMR HDDs, as I have one. Thank you! Just to confirm something else: as the other VMs are running on local-lvm, they don't need discard because the Proxmox OS does that, correct?
Every VM will need to discard/trim too, and PVE can't do that for the VMs, as PVE doesn't know about the filesystems used inside them. So you need to set that up yourself for each of your VMs.
For LXCs on LVM-thin pools you need to regularly run pct fstrim YourVMID on the PVE host.
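
If you have several containers, a small loop on the PVE host can cover them all (a hypothetical helper; pct fstrim only works on running containers):

    # trim every container known to this PVE host
    for ctid in $(pct list | awk 'NR>1 {print $1}'); do
        pct fstrim "$ctid"
    done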

Don't overestimate the fast performance numbers manufacturers advertise SSDs with. Depending on the workload, SSDs can be slower than HDDs, especially when using consumer/prosumer SSDs for server workloads. Consumer SSDs are just made for short bursts of writes; as soon as you write a bit more at a time, or write to them continuously at low to medium bandwidth, they become really slow once the DRAM/SLC caches are completely filled up.
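
You can observe this falloff yourself with a sustained sequential write, for example with fio (the file path is a placeholder; pick a size well above the drive's cache):

    # sustained 1 MiB sequential writes, bypassing the page cache
    fio --name=sustained-write --filename=/mnt/ssd/testfile --size=20G \
        --rw=write --bs=1M --ioengine=libaio --iodepth=4 --direct=1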
 
Every VM will need to discard/trim too, and PVE can't do that for the VMs, as PVE doesn't know about the filesystems used inside them. So you need to set that up yourself for each of your VMs.
For LXCs on LVM-thin pools you need to regularly run pct fstrim YourVMID on the PVE host.
Ok, yep I understand. Trimmed everything (VMs and CTs), and am about to start adding cron jobs to all VMs now!
Don't overestimate the fast performance numbers manufacturers advertise SSDs with. Depending on the workload, SSDs can be slower than HDDs, especially when using consumer/prosumer SSDs for server workloads. Consumer SSDs are just made for short bursts of writes; as soon as you write a bit more at a time, or write to them continuously at low to medium bandwidth, they become really slow once the DRAM/SLC caches are completely filled up.

Wow, I had no idea - that's exactly what I see! They are consumer SSDs that I've taken out of old family laptops. I see fast write speeds at first, then it drops to 60 MB/s, whereas my HDD speeds are consistently 100+ MB/s. Every day is a school day!

To cache or not to cache is my last decision, so I'll do some reading up on that. Thanks, Dunuin :)
 
Wow, I had no idea - that's exactly what I see! They are consumer SSDs that I've taken out of old family laptops. I see fast write speeds at first, then it drops to 60 MB/s, whereas my HDD speeds are consistently 100+ MB/s. Every day is a school day!
Yes, that is one reason (besides better monitoring, better data integrity, better sync-write performance, and better durability) why enterprise-grade SSDs are recommended for servers. Their write performance will also drop a bit, but not as hard as consumer SSDs' does, because they often use higher-quality NAND chips and have bigger caches and a bigger spare area that can be used for wear leveling and maintenance.
To cache or not to cache is my last decision, so I'll do some reading up on that. Thanks, Dunuin
https://pve.proxmox.com/wiki/Performance_Tweaks#Disk_Cache
 
