Two identical systems: on one, duplicate IDs are found; on the other, not.

I'm having this issue with a

Western Digital 1TB WD Red SN700 NVMe - WDS100T1R0C

Will the proposed fix cover these NVMes?

When can we expect a new PVE 8 kernel with this fix?
 
Hi,

I have 2 x Lexar NM790 4TB and I'm running Proxmox with kernel 6.2.16-12-pve. Unfortunately, I have the same issue.
Sometimes after a reboot Proxmox doesn't see any disk (lspci does), sometimes one is visible and sometimes both.
I had these Lexar NVMes before the fix was integrated into the kernel. I've upgraded the kernel, but the issue still exists.
Should I reset the disk settings in Proxmox, or reinstall Proxmox, in order to get the issue fixed?

thanks
 
That does sound a bit different from the other issues reported here. Please check the system logs/journal. What are the exact error messages you get?
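One way to pull the relevant kernel messages is via the journal or the kernel ring buffer. A sketch: the real commands need root on the affected host, so they appear as comments below, and a captured sample line (the error later reported in this thread) is filtered instead to show what to look for.

```shell
# On the affected host (as root), search for NVMe probe errors, e.g.:
#   journalctl -k -b | grep -i nvme
#   dmesg | grep -i nvme
# Filtering a captured sample line shows the kind of message to look for:
sample='nvme nvme0: Device not ready; aborting initialisation, CSTS=0x0'
echo "$sample" | grep -E 'Device not ready|CSTS'
```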
 
Hi @fiona, I've checked the logs and see:

Bash:
nvme nvme0: Device not ready; aborting initialisation, CSTS=0x0
nvme nvme1: Device not ready; aborting initialisation, CSTS=0x0

I see that there is a bug ticket for this issue: https://bugzilla.kernel.org/show_bug.cgi?id=217863

Does that mean that as soon as the working patch is merged into a new kernel version, it will also be available in Proxmox soon?
 
I am having the same issue with an Intel DC P4608 Series 6.4TB HHHL PCIe; only one drive shows in Proxmox. Only one drive shows in the Cisco CIMC as well, so I am not confident it's a Proxmox problem at this time.
 
does that mean that as soon as the working patch is merged into a new kernel version, it will also be available in Proxmox soon?
The patch submitted for stable seems to be here: https://lore.kernel.org/stable/20230920112839.812803999@linuxfoundation.org/
It's unfortunately not trivial, so it might be better to wait until it comes in via the Ubuntu kernel tree (might take a few weeks).
 
Booting with kernel 6.1.10-1-pve seems to be a valid workaround...
The NVMe in a Minisforum UM773 is not found by Proxmox when booting the 6.2.16 kernel.

I pinned kernel 6.1.10-1 and am waiting for a fixed 6.2.16 kernel.
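For reference, the pinning can be done with proxmox-boot-tool (available on Proxmox VE 7.2 and later). A sketch: the commands need root, so they are shown as comments, with a small runnable check of the expected version string at the end.

```shell
# List kernels known to the bootloader helper:
#   proxmox-boot-tool kernel list
# Pin the known-good kernel so it stays the boot default across reboots:
#   proxmox-boot-tool kernel pin 6.1.10-1-pve
# Undo later, once a fixed kernel ships:
#   proxmox-boot-tool kernel unpin
# After a reboot, uname -r should report the pinned version:
running='6.1.10-1-pve'   # on a real host: running=$(uname -r)
[ "$running" = '6.1.10-1-pve' ] && echo "pinned kernel active"
```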
 
Hi,
how could I also do this?
Mine is detected, but starts to fail after about a day.
Linux pve 6.2.16-14-pve
 
A build of kernel 6.5, which will include the fix, is planned to be released for testing soon-ish (likely in the next few weeks): https://forum.proxmox.com/threads/proxmox-ve-8-0-released.129320/post-573961

EDIT: Unfortunately, as pointed out by @Instantus in the other thread, the mentioned fix is not included in the current build of 6.5 yet. The fix is in mainline 6.5.5, but the current build is based on 6.5.3. It seems there were multiple issues; some are already addressed, but not yet this one.
 
I see that there is a bug ticket for this issue: https://bugzilla.kernel.org/show_bug.cgi?id=217863
FYI, the current build of the 6.5 kernel, i.e. proxmox-kernel-6.5.11-1-pve, finally includes a backport to fix that bug.
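To check whether a host is already running a 6.5-series kernel with the backport, comparing `uname -r` against the 6.5 series is enough. A sketch below; the install command for the opt-in kernel is an assumption based on the package name mentioned above, and the admin commands are shown as comments since they need root.

```shell
# Install the opt-in 6.5 kernel on PVE 8 (as root), then reboot:
#   apt update && apt install proxmox-kernel-6.5
# After the reboot, check the running kernel series:
running='6.5.11-1-pve'   # on a real host: running=$(uname -r)
case "$running" in
  6.5.*) echo "6.5 series kernel active: $running" ;;
  *)     echo "older kernel still running: $running" ;;
esac
```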
 
