NVMe over TCP support?

jsterr

Hello Community,

Do Proxmox VE and the Proxmox subscription support NVMe over TCP storage? Are there any prerequisites to meet?
I don't have any experience with NVMe over TCP yet. As far as I know, NVMe over TCP requires "nvme-cli" and some manual configuration, right?

Thanks Jonas
 
There is no PVE-integrated tooling, but aside from that it works pretty much the same as any shared LUN (e.g., iSCSI).
So you also can't snapshot, just like with iSCSI?
 
All the same limitations apply.

WRT snapshots, there is a workaround, but it's a bit of a pain: instead of mapping LUNs to PVE and having PVE manage the store, you map the LUN DIRECTLY to a guest and use the storage's snapshot facility. You'd need to build the quiescence orchestration in-guest, but it can be done.
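For illustration, a rough sketch of that direct mapping (VM ID 100 and the by-id device path are placeholders, not values from this thread):

Code:
# connect the namespace on the PVE host, then hand the stable by-id path to the guest
qm set 100 --scsi1 /dev/disk/by-id/nvme-eui.0123456789abcdef
# snapshots are then taken on the storage array, with quiescing handled inside the guest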
 
Do Proxmox VE and the Proxmox subscription support NVMe over TCP storage?
Yes, PVE works excellently with NVMe/TCP. It ships with a recent enough kernel that has stable NVMe/TCP support.
We test and support iSCSI and NVMe/TCP equally for our PVE customers.
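If you want to double-check that on a node, verifying the NVMe/TCP initiator module shipped with the PVE kernel might look like this (a quick sketch, no extra repos needed):

Code:
# load and confirm the NVMe/TCP transport module from the stock PVE kernel
modprobe nvme-tcp
lsmod | grep nvme_tcp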



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
There are tools missing from the base Debian repo:
Code:
https://github.com/linux-nvme
- nvme-cli / nvmetcli    # "nvmetcli" is missing from the Debian build
- nvme-stas              # no such package
- nvme-dem               # no such package
- nvme-trace             # no such package

Do you build those from source?
 
It was as easy as running:

Code:
nvme connect \
-t tcp \
-a 172.16.50.150 \
-s 4420 \
-n nqn.2011-06.com.truenas:uuid:d3b4ac0f-ecee-48fa-8714-1bfc1e7becd3:nas-nvme-of

on the hosts.
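For completeness, a hedged sketch of the steps around that one-liner on a stock PVE/Debian host (the address and port are taken from the command above; the discovery.conf persistence part is an assumption, adapt as needed):

Code:
apt install nvme-cli                              # initiator tooling; the only package needed here
modprobe nvme-tcp                                 # TCP transport module
nvme discover -t tcp -a 172.16.50.150 -s 4420     # list subsystems exported by the target
# to reconnect automatically at boot (before LVM/PVE come up), record the target
# in /etc/nvme/discovery.conf and let "nvme connect-all" pick it up:
echo "--transport=tcp --traddr=172.16.50.150 --trsvcid=4420" >> /etc/nvme/discovery.conf
nvme connect-all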
 
Hi @PmUserZFS, apologies for missing your question earlier.

Seems like you found a way on your own! Our connection management is wrapped in the native storage plugin, but at a high level it's similar.

What's the recommendation for live migration support and best performance? LVM thin, or using NVMe-oF directly?
Live migration implies a cluster, a cluster implies shared storage, and shared storage is NOT compatible with LVM-thin.
That said, in a single-host setup LVM-thin is not mutually exclusive with NVMe: NVMe provides block devices, and LVM uses block devices.
You may want to look into using LVM thick, but you need to make sure that your block devices are connected before the LVM/PVE stack starts.
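A minimal sketch of that LVM-thick route, assuming the connected namespace shows up as /dev/nvme1n1 (device name and storage ID are made up for the example):

Code:
pvcreate /dev/nvme1n1
vgcreate vg_nvmeof /dev/nvme1n1
# register the volume group as a shared (thick) LVM storage in PVE
pvesm add lvm nvmeof-lvm --vgname vg_nvmeof --shared 1 --content images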

Cheers


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Seems that LVM is the way to go, unless you nvme-connect the storage inside the VM itself.

Did a quick comparison with NFS storage; NVMe-oF gave in the ballpark of 90% more IOPS.
Gonna tweak some more and look at latency. This is the 4th-tier storage on the NAS; primary is SSD/SAS Ceph, so performance is just nice to have.
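If anyone wants to reproduce a comparison like that, a rough fio sketch against a throw-away test volume on each storage (the device path, block size and job counts here are arbitrary examples):

Code:
fio --name=randread --filename=/dev/vg_nvmeof/fio-test \
    --ioengine=libaio --direct=1 --rw=randread --bs=4k \
    --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting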
 
NVMe over TCP brings the remote devices to your Proxmox server, so they look like local disks.
From that standpoint, you can use them for a shared LVM, or, as I do for many customers, with OCFS2, which gives you thin provisioning and snapshots.
In fact, you can use them as GlusterFS devices too.
Just create a PVE directory storage pointed at the OCFS2 or GlusterFS directory in Linux and set it to shared.
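As a hedged sketch, registering such a mounted filesystem as a shared directory storage could look like this (mount point and storage ID are examples; the OCFS2/GlusterFS cluster setup itself is not shown):

Code:
# assumes the OCFS2 (or GlusterFS) filesystem is already mounted on every node
pvesm add dir nvme-ocfs2 --path /mnt/ocfs2 --content images,rootdir --shared 1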
 
What would give the least overhead and reduce latency?
 
I think OCFS2 is a good choice.
But it's not supported and probably never will be. This makes it a no-go for enterprise environments. IMHO, LVM thick, despite its limitations, is still the best option for classical SANs that don't support NFS.
 
I benched LVM over NVMe-oF and NFS from the same zpool on TrueNAS; LVM over NVMe has much better IOPS.
 
This is somewhat expected, since with NFS you have additional filesystem overhead compared to LVM thick. I would expect that OCFS2 will also have lower IOPS than raw LVM thick. My main point is that OCFS2 is not an officially supported storage backend for PVE, which makes it (IMHO) a big no for an enterprise environment.