NVMe enumeration question

jnewman33

New Member
Dec 1, 2022
I am provisioning several Minisforum MS-01 boxes for a new cluster. Each has two 2TB NVMe drives. I am attempting to install Proxmox on ZFS on the first drive and will later use the second for Ceph.

The Proxmox install went fine on all three. Two of the three show nvme0 as the Proxmox drive, but the third machine shows nvme1 as the Proxmox drive and nvme0 as the unused drive. I would love to have all three with the same enumeration. Hopefully I am missing something simple here!

Hope I have given enough information!

James
 
Hi,
the Linux kernel enumerates the drives in the order it finds them during boot, so the nvme0/nvme1 numbering is not guaranteed to be stable or identical across machines. You can always identify a drive by label or by UUID instead, e.g. ls -la /dev/disk/by-label /dev/disk/by-uuid
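For example, a few commands that show the stable identifiers and which kernel name they currently point to (the by-id names are built from model + serial, so the output below will of course look different on your drives):

ls -la /dev/disk/by-id/ | grep nvme             # stable per-drive names, each a symlink to nvme0n1/nvme1n1
ls -la /dev/disk/by-label /dev/disk/by-uuid     # filesystem labels and UUIDs
lsblk -o NAME,MODEL,SERIAL,SIZE                 # quick way to map nvme0n1/nvme1n1 to the physical drives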

jnewman33 said:
use the second for Ceph.
On a side note: please note that Ceph requires a high-bandwidth network, and we recommend at least 3 OSDs per node for it to work reliably. Otherwise you will soon run into issues.
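If you do set up Ceph later, a quick sanity check of how the OSDs ended up distributed across the nodes is the standard Ceph CLI (output omitted here):

ceph osd tree        # shows the host/OSD hierarchy
ceph osd df tree     # same hierarchy, plus per-OSD usage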
 
Chris-
Thank you for your informative reply. Just to clarify: the nvme0n1 and nvme1n1 names that I see under Disks in the GUI are the ones assigned by the Linux kernel during enumeration, just as found? It's just a little OCD that is killing me, as two machines have the unused drive as nvme1n1 and the other one has it as nvme0n1.

Also, thanks for the info regarding Ceph. I am excited to get these boxes provisioned, as they each have 2x 2.5G and 2x 10G ports.
 
10 Gbit doesn't count as a fast NIC these days, when 40 Gbit or 100 Gbit NICs, or even up to 400 Gbit, are not that uncommon in an enterprise environment. The manual recommends 25 Gbit and up + 10 Gbit and up + 1 Gbit for Ceph:
https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster said:
For estimating your bandwidth needs, you need to take the performance of your disks into account. While a single HDD might not saturate a 1 Gb link, multiple HDD OSDs per node can already saturate 10 Gbps too. If modern NVMe-attached SSDs are used, a single one can already saturate 10 Gbps of bandwidth, or more. For such high-performance setups we recommend at least 25 Gbps, while even 40 Gbps or 100+ Gbps might be required to utilize the full performance potential of the underlying disks.
If unsure, we recommend using three (physical) separate networks for high-performance setups:
* one very high bandwidth (25+ Gbps) network for Ceph (internal) cluster traffic.
* one high bandwidth (10+ Gbps) network for Ceph (public) traffic between the Ceph server and Ceph client storage traffic. Depending on your needs this can also be used to host the virtual guest traffic and the VM live-migration traffic.
* one medium bandwidth (1 Gbps) exclusive for the latency sensitive corosync cluster communication.

And the 3 OSDs would mean 3 disks per node for Ceph, plus 1 (or better 2) disks for PVE. So 4 or 5 disks per node.

It will probably not run that badly with 10 Gbit NICs, but that's more a setup for testing and less for actual productive use.
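As a rough illustration of the three-network split quoted above, the Ceph side of it comes down to two settings in /etc/pve/ceph.conf (the subnets here are placeholders, not a recommendation):

[global]
    cluster_network = 10.10.10.0/24   # internal OSD replication/heartbeat traffic (the 25+ Gbps network)
    public_network  = 10.10.20.0/24   # Ceph client/monitor traffic (the 10+ Gbps network)

Corosync then stays on its own third network, which is configured separately when you create the PVE cluster.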
 
Dunuin-

Thanks for that info. This is strictly a homelab/hobby installation. I am hoping to utilize all 4 NICs as well as the 2x 20Gb USB4 ports to maximize my bandwidth.