Unable to see local NVMe

nb_yami

Member
Jun 5, 2020
Hi all,

Fairly new to Linux and very new to Proxmox, and in need of some help setting up a test machine.

I have a desktop (HP Prodesk 400 G5) with an NVMe drive (240 GB) and an added SSD (120 GB). When installing Proxmox VE I was able to select either drive for the install (chose the SSD & set it to 90 GB).

Now, in the GUI and on the server itself, I am unable to locate/see the NVMe drive. Not sure what I am missing.



thanks
 
After a reboot and trying some other things I ran the command below, but could still only see the first drive:

~# lsblk

After another reboot and running the command again, the disk was visible; it also showed up in the GUI.
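For reference, a few commands that can help confirm whether the kernel sees the NVMe at all (a minimal sketch; device names and output will differ, and the last one assumes the nvme-cli package is installed):
Code:
# block devices with model and size
lsblk -o NAME,MODEL,SIZE,TYPE

# kernel messages mentioning NVMe
dmesg | grep -i nvme

# NVMe controllers/namespaces (requires the nvme-cli package)
nvme list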

Issue resolved
 
What steps were performed to fix this? I'm experiencing the same issue with a normal SSD and an NVMe that is not visible, not even via lsblk.
 
The first thing to check would be that the drives are seated and firmly connected. If all connections seem OK and any external USB connection is involved, try changing the cable / USB port.

If this still fails, try live-booting the PC with any live USB (any distro) and then check via lsblk whether the drives are visible. If they are still not visible, you've got hardware issues. If they are visible, something's up with your Proxmox install.
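From the live environment, a rough sketch to separate a hardware problem from an OS problem - if the controller isn't even enumerated on the PCIe bus, it points at hardware or firmware settings rather than the Proxmox install:
Code:
# is an NVMe controller even enumerated on the PCIe bus?
lspci -nn | grep -i 'non-volatile'

# block devices the kernel actually exposes
lsblk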
 
Thanks for the quick response. The NVMe is seated firmly and is recognized when installing Windows Server 2022, but not when I'm attempting to install Proxmox, and it's also not visible from a Fedora live CD. When I install to an SSD that is connected via SATA next to the NVMe drive, the NVMe is still not recognized when checking with lsblk.

For some reason, I'm not able to find anything regarding this issue for the HP Prodesk 400 G7.
 
Looks like your NVMe drive (or motherboard combination) is not compatible with Linux in general. Maybe wait for Proxmox VE 9 and try again, or try another NVMe drive (or another motherboard/PC)?
 
Well, that's what I've tried: both NVMes are not visible during the installation of Proxmox on this machine, but both are visible during the installation on my Lenovo Thinkpad E490.

In that case I'll wait until the next release of Proxmox.
 
An interesting situation you have there. As leesteken points out, this would indicate a general Linux incompatibility. However, seeing that your HP Prodesk 400 G7 is an Intel 10th Gen system, which makes it almost 4 years old, it's surprising that searching for this issue does not turn up any relevant results. It is possible that you have a unique situation where the PCIe controller plus your specific NVMe have a Linux-only problem. That is still very interesting.

The only things I can suggest:

1. Have you entered the BIOS to check for settings that may be PCIe/NVMe related?
2. Is the BIOS updated? (A quick way to check from Linux is sketched after this list.)
3. Does the drive show up in the BIOS? (I imagine it does, if Windows Server managed to install on it.)
4. Have you tried an alternate NVMe (different make/model/size)?
5. If you have an NVMe-to-USB adapter around, does the drive show up in Linux through it?
6. There is also the possibility of a loose hardware connector/connection that is hit and miss, and only sometimes works.
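For point 2, a minimal sketch for reading the installed firmware version from a Linux shell (assuming the dmidecode package is available; run as root) so it can be compared against HP's support page:
Code:
# firmware vendor, version and release date
dmidecode -s bios-vendor
dmidecode -s bios-version
dmidecode -s bios-release-date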
 
1. Have you entered the BIOS to check for settings that may be PCIe/NVMe related?
Checked, and reset the BIOS just in case.

2. Is the BIOS updated?
This has been checked; according to HP it's up to date.

3. Does the drive show up in the BIOS? (I imagine it does, if Windows Server managed to install on it.)
Yes, the drive is visible when entering the BIOS and in the boot menu before the OS is loaded.

4. Have you tried an alternate NVMe (different make/model/size)?
Yes, both the Samsung NVMe drive and the one pre-installed in the Lenovo Thinkpad E490 are not visible from the OS (except for Windows).

5. If you have an NVMe-to-USB adapter around, does the drive show up in Linux through it?
This is not something I have on hand.

6. There is also the possibility of a loose hardware connector/connection that is hit and miss, and only sometimes works.
That would be a very rare case, I think; the problems are only present when installing or booting a Linux-based OS, while when installing and booting Windows it's immediately recognized (for example Windows 11 and Windows Server 2022).

But for some reason I've also seen it not be visible for Windows Server 2019, Windows 10, Proxmox and Fedora 39, so I think it's related to the drivers required during the boot process of the installer.
 
Since you see such a wide/varied difference between OSs, I think you need to experiment with the boot settings: UEFI, Secure Boot, CSM, etc. NVMe can sometimes be finicky with these.

If I were you (assuming you haven't tried it already), I'd start with legacy / non-Secure-Boot mode in the BIOS and see if you can install a regular distro. If it works, you've got your answer.
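As a rough check of which mode a live Linux session actually booted in (a sketch; mokutil is assumed to be available, as it is on most live images that support Secure Boot):
Code:
# UEFI or legacy BIOS boot?
[ -d /sys/firmware/efi ] && echo "booted via UEFI" || echo "booted via legacy BIOS"

# Secure Boot state (requires mokutil)
mokutil --sb-state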
 
Hi, I faced the same issue as you!

I tried to install Linux Mint to see if this was only Proxmox or Linux in general - and Mint told me that Intel Optane/RST was activated, which makes it impossible to install. Disabled it in the BIOS (Advanced > System Settings > Intel Optane) and it works now! Hope this works for you too!
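A rough way to tell from a live Linux session whether the controller is being presented in RST/Optane (RAID) mode instead of plain AHCI/NVMe (a sketch; exact device names differ per platform):
Code:
# In RST/Optane (RAID) mode the controller typically shows up as a
# "RAID bus controller"; in AHCI mode you get a "SATA controller ... AHCI"
# and the NVMe appears as a separate "Non-Volatile memory controller".
lspci | grep -iE 'raid|sata|non-volatile'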
 
This has resolved the issue! I'm now able to install Proxmox on the NVMe.
 
I ran into that problem too.
If you dig through the
Code:
dmesg
output (in debug mode), you'll find the cause.
In my case it was a BIOS setting - I had to switch the SATA config from RAID ON --> AHCI.
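Which keywords to look for differs between platforms and kernel versions, but a grep along these lines narrows the output down (on newer Intel platforms the RST/Optane setting can also route the NVMe behind the Volume Management Device, so "vmd" lines are worth checking too):
Code:
# storage-related kernel messages: NVMe probe, AHCI binding, RAID/VMD remapping
dmesg | grep -iE 'nvme|ahci|raid|vmd'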

A very good explanation from a different forum, about Dell desktop devices:
Short version: If you're about to clean install an OS, switch to AHCI. RAID mode offers no benefit on an XPS 13 that only supports a single SSD. If you're curious about why it's there, read on.

Long version:
RAID mode seems to be the default on most if not all Dell laptops and desktops that support it, except for the handful of systems that Dell offers with Linux pre-installed from the factory. I suspect Dell does it these days simply to standardize their builds a bit and also because it doesn't have any downsides for them, but it can have some downsides for users.

RAID mode activates the Intel Rapid Storage controller, which abstracts the storage from the OS and allows certain other features to be used. Back in the Windows 7 days, that abstraction meant that RAID mode could be used to allow Windows 7 to be installed onto NVMe SSDs. Windows 7 didn't have native support for NVMe, but with RAID mode, the OS just needs the Intel Rapid Storage driver and then it doesn't matter to the OS that the storage "behind" that controller is NVMe. By comparison, AHCI mode exposes the storage directly to the OS, which means the OS needs to have native support for the storage device's data interface, i.e. NVMe in this case.

But RAID mode is also required for using certain other features, such as Intel Rapid Start, Intel Smart Response, and more recently Intel Optane. But the first two are only used when you're pairing a spinning hard drive with a small SSD cache, and Optane is only used with actual Optane devices.

In terms of downsides:


  • Depending on the generation of Rapid Storage controller in your system and the version of Windows you're installing, RAID mode means that you might need to supply the Intel RST driver during Windows Setup to allow it to see your SSD. Not a big deal for Dell since it's just one more driver they'd have to inject during their factory setup process.
  • RAID mode prevents you from using Linux with the internal disk since Linux doesn't seem to have an Intel RST driver. Not an issue for Dell in most cases since they sell very few systems with Linux as a pre-installation option.
  • If you buy a retail SSD from a vendor that offers its own NVMe driver, such as Samsung's retail SSDs, then you can't use that driver if your system is in RAID mode. Not an issue for Dell since the SSDs they sell don't allow using those drivers. Even if you get a Samsung SSD shipped with your system, Samsung's NVMe driver won't work with it. You need to use a retail unit.

So again, for Dell I guess it makes sense to just use RAID mode everywhere for consistency, since they of course do sell some systems with Optane (and Smart Response and Rapid Start in the past), as well as other systems that actually do have multiple disks and therefore support actual RAID setups. And the downsides don't really matter to them.

But for individual users performing a clean install, switching to AHCI means you don't have to worry about providing an Intel RST driver, you can use Linux if desired, and you can use a vendor-provided NVMe driver if desired (and available).

However, this setting is only really meant to be changed before reinstalling an OS. If you want to switch WITHOUT doing that, you'll render your OS unbootable until you switch back. Apparently it's possible to work around this by booting into Safe Mode ONCE after making the switch, which will allow Windows to start and reconfigure itself. After that, you should be able to boot normally. But if you don't need any of the benefits of AHCI mode, you're not really losing anything by sticking with RAID.