I found a solution that worked for me:
- Install nvme-cli: apt-get install nvme-cli
- Check "nvme get-feature /dev/nvme0 -f 0xc -H" // scroll up in the output; the first line should read "Autonomous Power State Transition Enable (APSTE): Enabled"
- nano /etc/kernel/cmdline
- Append "nvme_core.default_ps_max_latency_us=0 pcie_aspm=off" to the existing line (boot=zfs should already be there), so it reads:
- "root=ZFS=rpool/ROOT/pve-1 boot=zfs nvme_core.default_ps_max_latency_us=0 pcie_aspm=off"
- proxmox-boot-tool refresh
- Reboot
- Check "nvme get-feature /dev/nvme0 -f 0xc -H" again // Autonomous Power State Transition Enable (APSTE): Disabled
No problems so far, and it has been weeks now.
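If you want to verify this on every controller in one go, here's a minimal sketch (the glob is my assumption; it expects the controllers to show up as /dev/nvme0 through /dev/nvme9, so adjust it to your setup):
Code:
# loop over the NVMe controller character devices
# (matches /dev/nvme0../dev/nvme9, not namespaces like /dev/nvme0n1 - adjust if needed)
for ctrl in /dev/nvme[0-9]; do
    echo "== $ctrl =="
    nvme get-feature "$ctrl" -f 0xc -H | grep -i APSTE
done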
Thanks! But... I already have those in my GRUB file, and I can confirm APSTE is Disabled for the two 990 Pro drives, but I get the following for my two U.2 drives:
Code:
# nvme get-feature /dev/nvme2 -f 0xc -H
NVMe status: Invalid Field in Command: A reserved coded value or an unsupported value in a defined field(0x2)
Maybe I can find out what command will return that info on them, if it's available at all.
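One thing that might be worth checking first (an assumption on my part, not something I've verified on U.2 drives): the Identify Controller data has an APSTA field that says whether the controller supports APST at all, and nvme-cli prints it. A zero there would explain the "Invalid Field in Command" error, since Get Features for 0xc would then be asking about an unsupported feature:
Code:
# check whether the controller advertises APST support at all;
# "apsta : 0" would mean APST isn't implemented on this controller
nvme id-ctrl /dev/nvme2 | grep -i apsta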
I've been hunting for the opposite solution too: I can't figure out yet how to get ASPM to show that it's ENABLED.
I have:
Enabled ASPM in BIOS
Removed 'pcie_aspm=off' from GRUB
Rebooted
cat /proc/cmdline to confirm it was as expected after change
'lspci -vvv' and observed all PCIe devices still show LnkCtl of 'ASPM Disabled'
Added 'pcie_aspm=on' to GRUB
Rebooted
cat /proc/cmdline to confirm it was as expected after change
'lspci -vvv' again, and all PCIe devices still show LnkCtl of 'ASPM Disabled'
I just want to prove that this ASPM setting is actually doing something and is actually needed, not just anecdotal.
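Two checks that might help narrow it down (a sketch; the sysfs path assumes your kernel was built with ASPM support):
Code:
# which ASPM policy the kernel is currently using (the active one is in brackets)
cat /sys/module/pcie_aspm/parameters/policy

# compare what each link supports (LnkCap) with what is actually enabled (LnkCtl)
lspci -vv | grep -E '^[0-9a-f]|LnkCap:|LnkCtl:'
Also note that pcie_aspm=on only lets the kernel use ASPM where the firmware grants it control; if the BIOS doesn't hand over ASPM control via ACPI _OSC, lspci will keep reporting 'ASPM Disabled' regardless. There is also pcie_aspm=force, which skips that handshake, but the kernel docs warn it can lock up systems whose hardware genuinely doesn't support ASPM, so use it carefully.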