Passthrough drive & Performance

waps

New Member
Dec 22, 2021
Hi,


I am trying to set up PVE to host a couple of VMs, including OpenMediaVault, which is used as a NAS and to set up some Samba shares (which can also be shared with other VMs).


My motherboard has 4 SATA ports and an NVMe slot. The plan was to pass through the entire SATA controller to the VM, using a basic SATA SSD as a download drive and SATA HDDs as storage. The NVMe drive would be used for Proxmox and the VMs.


But then I got thinking: why not buy a bigger NVMe drive, partition it into two, use one partition for Proxmox & the VMs and pass the other through to the VM as the download drive, and get rid of the slower SSD? Performance and reliability should be better?



I think I have read conflicting info. This post suggests that when passing through a partition, the performance overhead is minimal:

https://forum.proxmox.com/threads/passthrough-a-hdd-without-iommu.99661/#post-430223


However, this one suggests that VirtIO SCSI is much slower:

https://www.reddit.com/r/Proxmox/comments/ld3m1e/drawbacks_of_the_io_thread_option_for_virtio_scsi/

"First, you should know that virtio scsi carries significantly more overhead than virtio blk, so it's only really there for compatibility."



Have I got it right, or have I confused the two?

Thanks
 
The Reddit post mentions things that simply are not true. The answer in the forum thread is right.
See also https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM)
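
For reference, the passthrough described there is just a VM disk entry pointing at the block device by its stable ID. A minimal sketch (VMID 100 and the disk serial are placeholders):

```
qm set 100 --scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL
```

A partition can be passed the same way by using its corresponding `...-part1` link under `/dev/disk/by-id/`.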

We usually recommend VirtIO SCSI, or `scsi` in the config, as it has the best support and best performance.
The VirtIO SCSI Single controller will basically create one controller per disk. This is the only way to use the `I/O Thread` option. When the I/O Thread option is selected, every disk's I/O will be handled in a separate thread instead of sharing one. As a result the performance might improve, if you have enough (CPU) resources available.
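
As a sketch, with a hypothetical VMID 100 and an already existing volume on `local-lvm`, that would look like:

```
# use one controller per disk so I/O Thread can be used
qm set 100 --scsihw virtio-scsi-single
# attach the disk with its own I/O thread
qm set 100 --scsi0 local-lvm:vm-100-disk-0,iothread=1
```

The same options can also be set in the GUI under the VM's Hardware tab.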

Regarding passthrough of the onboard SATA controller, this might not even be possible. Most of the time onboard hardware is mixed together in the IOMMU groups, which means if you pass through one device, you have to pass through the others as well.
You can check this by running `find /sys/kernel/iommu_groups/ -type l` once IOMMU has been enabled and the node has been rebooted.
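
To see which devices end up in which group, a common shell loop over that directory gives a more readable overview (assumes `lspci` from pciutils is installed):

```
#!/bin/bash
# print every PCI device together with its IOMMU group number
for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=${d#*/iommu_groups/*}; n=${n%%/*}
    printf 'IOMMU group %s: ' "$n"
    lspci -nns "${d##*/}"
done
```

If the SATA controller shares a group with other devices you still need on the host, passing it through won't work cleanly.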
 
Does PVE support TRIM with VirtIO Block in the meantime? If I remember right, discard/TRIM support for VirtIO Block was only added this year, and last time I looked PVE wasn't able to utilize it.
If discard isn't supported with VirtIO Block, that would be a no-go when using SSDs, ZFS or LVM-Thin.
 
It seems we don't check for the bus type, but rather always set it either to `off` if detect_zeroes is disabled, to `unmap` if discard is enabled, or to `on` when nothing is set (the default).
Unless QEMU requires some special option just for VirtIO Block to use discard, it should be enabled if the version in question supports it.
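
As an example (hypothetical VMID and storage), enabling discard on a VirtIO Block disk is just the regular disk option, which, per the above, should end up as `discard=unmap` on the QEMU side:

```
qm set 100 --virtio0 local-lvm:vm-100-disk-0,discard=on
```

Whether the guest actually issues TRIM over VirtIO Block then depends on the QEMU and guest driver versions, as mentioned above.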
 
