Adding Virtual HDDs to VM: VirtIO SCSI vs. VirtIO Block on SSD-Backed ZFS Mirror?

Sep 1, 2022
Hello,

I'm still in the middle of LearnLinux TV's Proxmox class, and I've created a total of two (2) test VMs, so it's fair to say I'm pretty new at this. So, what follows is probably a dumb question, but hours of googling just confused me more.

For VMs stored on an SSD-backed ZFS mirror pool, should I be using the VirtIO SCSI driver, or the VirtIO Block driver? When is one preferable to the other?
The LLTV class isn't using ZFS, so it's not much help on this point. I've run into a conundrum:
  1. At first, all the tutorial articles were saying to use the SCSI driver, as it's newer and better overall.
  2. Then a helpful person suggested I should be using VirtIO's block device driver instead, because it has more direct access to the disk and isn't pretending to be an emulated HDD, which keeps things simpler. That makes sense intuitively, but I'm left wondering why all the tutorials are oriented around the SCSI driver.
I'd appreciate any advice on which factors to weigh when choosing one over the other, or even a flat "definitely don't use That One® when you're doing Task X."
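
For concreteness, here's roughly how I understand the two options end up looking in a VM's config file under /etc/pve/qemu-server/ (the VMID 100, the 32G size, and the storage name local-zfs are placeholders I made up):

    scsihw: virtio-scsi-single
    scsi0: local-zfs:vm-100-disk-0,discard=on,iothread=1,size=32G,ssd=1

versus

    virtio0: local-zfs:vm-100-disk-0,discard=on,iothread=1,size=32G

If I'm reading the docs right, the virtio (Block) line needs no controller setting and doesn't take the ssd=1 flag, since it never pretends to be a physical disk in the first place.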

Almost all my VMs are going to be some flavor of general use Linux or Windows, and I'm primarily concerned with (1) not introducing significant bottlenecks that wouldn't exist on a bare metal installation; and (2) not murdering my SSDs prematurely.

Even if you just have a link to share to a good article, that would be super helpful. I have a software engineering background, but I never touched ZFS until I decided to install Proxmox, so I'm pretty lost. I just want to start having fun with VMs with some confidence I'm not going to have Regrets® later.

Thanks!
 
For VMs stored on an SSD-backed ZFS mirror pool, should I be using the VirtIO SCSI driver, or the VirtIO Block driver?
In general, just go with the preset defaults for each guest OS type. The storage bus you pick determines which driver is used inside the guest OS, and each driver has a different feature set. SCSI is the easiest, because you can rely on the guest OS's SCSI implementation and don't need to reinvent the wheel. Nowadays the SCSI backend is the preferred one with the best feature set, although Block is also getting some love on its way to feature completeness.
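
If you prefer the shell over the GUI, a minimal sketch of the SCSI setup looks like this (VMID 100 and the storage name local-zfs are placeholders):

    # single-controller variant, so each disk can get its own IO thread
    qm set 100 --scsihw virtio-scsi-single
    # allocate a new 32G disk; discard passes TRIM through to the SSDs,
    # ssd=1 presents it to the guest as non-rotational
    qm set 100 --scsi0 local-zfs:32,discard=on,ssd=1,iothread=1

The discard=on flag also helps with your SSD-wear concern: freed blocks get released back to the ZFS pool, as long as the guest actually issues TRIM (e.g. via fstrim).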

There are a lot of benchmarks out there that compare the two models, also with respect to their features, like this one. In simple VMs with just one or two disks, you will not run into any technical trouble. The performance impact of picking SCSI over Block (or the other way around) is not that huge, but it makes a big difference once you split a VM across multiple disks and use IO threads for heavy workloads, e.g. a database where you separate the data files, the online logs, and the backups from the OS, yielding four disks in total.
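
As a rough sketch of that kind of layout (same placeholder VMID and storage as above, arbitrary sizes):

    qm set 100 --scsi1 local-zfs:64,discard=on,iothread=1   # data files
    qm set 100 --scsi2 local-zfs:16,discard=on,iothread=1   # online logs
    qm set 100 --scsi3 local-zfs:64,discard=on,iothread=1   # backups

With scsihw set to virtio-scsi-single, every disk gets its own controller and IO thread, so heavy writes to the log disk don't stall the data files.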
 
