[SOLVED] Single NVMe on two hosts - recommended config?

Nov 13, 2020
I seek your wisdom, Proxmox community!
We have an existing cluster with two nodes (exactly the same hardware configuration) with existing storage and running VMs.
In addition to the existing RAID storage, the server boards have a single M.2 NVMe port each.
We have installed an NVMe drive (Seagate FireCuda 520) in each node, and it is visible as /dev/nvme0n1.
We intend to use the whole NVMe disk for one single VM. So, to my question.
Would it be better to:
- assign the NVMe as a PCI Express device to the VM, and use backup software to recover the VM on the second node when needed (see the sketch below),
or
- configure it as storage on the hosts (a single-disk ZFS pool, for example) and replicate the content between the nodes?
I am thankful for your advice, ideally based on experience :)
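For reference, this is roughly how we understand the first option would be configured; the VM ID (100) and the PCI address below are placeholders, not our actual values:

```
# Find the PCI address of the NVMe controller (example output: 01:00.0)
lspci -nn | grep -i nvme

# Attach the controller to VM 100 as a PCIe device
# (needs IOMMU enabled in BIOS and kernel; pcie=1 requires the q35 machine type)
qm set 100 -hostpci0 0000:01:00.0,pcie=1
```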
 
Passing through the disk directly can make live migration a bit tricky, and if it fails, you will need to restore from backup.

If you want to be able to move / recover the VM to the other node, and that is why you have that NVMe there as well, you could set up a single-disk ZFS pool and use the PVE replication feature to always have a somewhat up-to-date copy. It will cost a little performance, of course.
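A rough sketch of that setup, assuming a pool named "nvmepool", VM ID 100, and a second node called "pve2" (all placeholder names):

```
# On each node: create a single-disk ZFS pool on the NVMe
zpool create -o ashift=12 nvmepool /dev/nvme0n1

# Register it as a PVE storage (the pool must exist under the same name on both nodes)
pvesm add zfspool nvmepool --pool nvmepool --content images,rootdir

# Replicate VM 100 to the other node every 15 minutes
pvesr create-local-job 100-0 pve2 --schedule "*/15"
```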
 
Hello Aaron, sorry for the late reply.
I agree with you that passing the disk through directly basically leaves me with restoring from backup if it fails.
Since we have two identical nodes, we decided to test both options. I created two VMs with the same config: 2 vCPUs, 8 GB memory, EFI BIOS.
The first VM had the NVMe directly assigned as a PCIe device. Disk speed test result:

[Screenshot PCIE_NVME.png: disk speed test of the passed-through NVMe]

The second VM's disk was created as VirtIO SCSI with writeback cache and TRIM enabled, on the single-disk ZFS storage.
The disk speed test showed extreme sequential read/write values:
[Screenshot zfs.png: disk speed test of the VM disk on ZFS]
How could I check the real speed of the disk on the ZFS storage?
 
It seems your second benchmark is not benchmarking the disk; instead, you are benchmarking your cache.

And yes, doing benchmarks the right way can be tricky and requires a deep understanding. A good tool is fio.
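For example, something along these lines run inside the VM gives a more realistic picture (the file path and sizes are just examples; direct=1 bypasses the guest page cache, but the writeback cache on the host side can still inflate results, so consider setting the disk cache mode to "none" for the test):

```
# 4k random writes, O_DIRECT, 60 seconds at queue depth 32
fio --name=randwrite --filename=/root/fio-testfile --size=4G \
    --rw=randwrite --bs=4k --ioengine=libaio --direct=1 \
    --iodepth=32 --runtime=60 --time_based --group_reporting
```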
 
Thank you for the fio link.
I have also found DISKSPD and started to read up on cache warmup and other techniques for getting accurate results.
Happy end of a thread :)
 
