Buggy network with new NVMe device

gvdb

Member
Dec 9, 2018
Hi to all,
I have an issue with my PVE 6.0-15. I changed my M.2 2280 SATA SSD to an M.2 2280 PCIe NVMe SSD.
The goal is to get more IOPS with the Pro version of the SSD (3000/3500 MB/s).

After switching the BIOS setting from SATA to PCIe, Proxmox starts correctly, but there is no network.
The IPs are correctly up, but there is no network traffic on one interface.

After comparing dmesg in the two situations, I see the PCI list is different, of course. But the IOMMU groups are also different. I don't know if this is my issue, but the PCI slot assigned to the malfunctioning network card is not in the same group.

The network device is at 0000:01:00.0 in group 12 (normal situation).
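For reference, this is the kind of listing that can be used to compare the groups between the two modes (a generic snippet; the addresses and group numbers will differ on your system):

for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#/sys/kernel/iommu_groups/}; g=${g%%/*}
    printf 'IOMMU group %s: ' "$g"
    lspci -nns "${d##*/}"
done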

Maybe I need to change the IOMMU group to match the normal situation (SATA mode)?

Can anyone confirm that?

I'm attaching logs to make sure...

I'd appreciate it :)
 


The PCIe device enumeration has changed with the addition of the NVMe. You should see a new interface name with ip link. Adapt /etc/network/interfaces to the new name.
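A minimal sketch of that adjustment, assuming a typical vmbr0 bridge setup; the interface names and addresses below are examples, not taken from this system:

# find the new name of the NIC
ip link

# /etc/network/interfaces: point the bridge at the new name
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge_ports enp2s0
    bridge_stp off
    bridge_fd 0

Then reboot, or run ifreload -a if ifupdown2 is installed.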
 
Hi Alwin,
Stupid me, I forgot about the new kernel naming process :)
Sure, you're right, any shift in PCI slots creates new network device names for the devices plugged in.
Thanks for refreshing my mind :p ;)
 
Well, another issue. The network works fine now.
But there are some fault errors on the new drive,
like this:

[ 1602.519124] nvme nvme0: ctrl returned bogus length: 16 for NVME_NIDT_EUI64
[ 1602.554864] nvme nvme0: ctrl returned bogus length: 16 for NVME_NIDT_EUI64
[ 1811.054686] dmar_fault: 50 callbacks suppressed
[ 1811.054687] DMAR: DRHD: handling fault status reg 3
[ 1811.054705] DMAR: [DMA Read] Request device [01:00.0] fault addr 0 [fault reason 06] PTE Read access is not set
[ 1811.054861] DMAR: DRHD: handling fault status reg 3
[ 1811.055682] DMAR: [DMA Read] Request device [01:00.0] fault addr 0 [fault reason 06] PTE Read access is not set
[ 1811.056536] DMAR: DRHD: handling fault status reg 3
[ 1811.057326] print_req_error: I/O error, dev nvme0n1, sector 257951744 flags 4003

Each time a VM is removed, this kind of message appears.
Some buggy IOMMU option?

I read something about a graphics DMA conflict with Intel cards... intel_iommu=igfx_off
I'm going to test it.
 
It doesn't help.
Well, another test with intel_iommu=pt; it seems to be stable now.
Many SMART error logs are present, though.
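For anyone landing on this thread, this is roughly how such a kernel option gets set on a GRUB-booted Proxmox host (a sketch only; on systemd-boot/ZFS installs the option goes in /etc/kernel/cmdline instead):

# /etc/default/grub: add the option being tested to the existing line
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=pt"

# apply the change and reboot
update-grub
reboot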
 
Try updating the firmware of the NVMe and the BIOS of the motherboard.
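A quick way to check the current firmware revision and the drive's own error/SMART logs, assuming nvme-cli and smartmontools are installed (apt install nvme-cli smartmontools):

# firmware revision currently running on the controller ("fr" field)
nvme id-ctrl /dev/nvme0 | grep -w fr

# SMART data and the drive's error log
nvme smart-log /dev/nvme0
nvme error-log /dev/nvme0
smartctl -a /dev/nvme0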
 
