Fix NVMe drives changing order on reboot

shurtugalPXMX
New Member
Feb 2, 2025
I'm new to Proxmox and Linux in general, so forgive me if some of this isn't using the correct terminology.

I have two NVMe drives. One is 1 TB, split into two ~500 GB partitions. One of those 500 GB partitions was assigned to one of my VMs as "/dev/nvme1n1p1". I had to power off the server to physically move it, and when I powered it back on the two NVMe drives were detected in reverse order, so now "/dev/nvme1n1p1" is a 128 GB partition on the other NVMe drive and is, obviously, no longer usable in that VM. I don't know how I configured the drive like this initially, because if I try to add a hard drive to the VM now I don't even see an option to select the other partitions.

I haven't changed anything, since I assume that if I can get the drives detected in the right order, everything will just work as it has for the last several weeks. That said, is there a way to force the order in which the NVMe drives are detected at boot, to avoid this happening in the future? I was thinking UUIDs in fstab might be the fix, but I'm not quite sure what would come after "UUID=# /dev/nvme1n1", and I don't want to risk losing the data (it's not critical data, but I'd like to avoid hours of transferring it back).
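For reference, a minimal sketch of how to look up the stable identifiers involved (the device name and mount point below are just illustrative placeholders, not taken from this system; note that fstab only covers filesystems mounted on the host, not partitions passed through to a VM):

```shell
# List the stable /dev/disk/by-id/ symlinks; these names do not change
# when the kernel enumerates NVMe controllers in a different order.
ls -l /dev/disk/by-id/

# Show the filesystem UUID of a specific partition (example device name).
blkid /dev/nvme1n1p1

# An fstab entry keyed on that UUID mounts the same filesystem no matter
# which /dev/nvmeXn1pY name it gets at boot. The six fields are:
# <device> <mountpoint> <fstype> <options> <dump> <pass>
# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/vmdisk  ext4  defaults  0  2
```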
 
I think the order is unpredictable, just like /dev/sda, /dev/sdb, etc. Why not use /dev/disk/by-id/nvme-... instead? Those names are stable (for ATA drives too).
EDIT: This is generic Linux behavior, not Proxmox-specific. There is lots of information and plenty of guides on the internet about Linux drives and device nodes.
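To attach the partition to the VM by its stable name, something like the following should work (a sketch only: the VM ID, bus slot, and by-id name here are hypothetical placeholders; substitute the real link shown by `ls -l /dev/disk/by-id/`):

```shell
# Hypothetical example: VM 100 and the by-id name are placeholders.
# The "-part1" suffix selects the first partition of that disk, so the
# mapping survives reboots even if /dev/nvme0n1 and /dev/nvme1n1 swap.
qm set 100 -scsi1 /dev/disk/by-id/nvme-EXAMPLE_MODEL_SERIAL123-part1
```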