Intel IOMMU problems in 3.0RC2

kobuki

On a test server, I'm seeing some strange log entries, pasted below. The host and the guest are working flawlessly so far, but I have some concerns about this, and I haven't been able to find a definitive answer on the net. Should I be worried, or is this just a warning? I'm passing through an IBM ServeRAID M1015 (LSI SAS2008-based) adapter card, and the mpt2sas driver is loaded in the guest. The log reads as though the SAS card conflicts with a USB controller, but regardless of this, it works.

Code:
# lspci
...
00:10.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
...
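For reference, the card is handed to pci-stub before the VM starts, roughly like this (a sketch of the usual sysfs method; the vendor/device ID 1000:0072 is my assumption for the SAS2008/M1015, verify it with `lspci -n -s 01:00.0` first):

```shell
# Tell pci-stub to claim devices with this vendor/device ID
# (1000 0072 is assumed for the SAS2008-based M1015 -- check lspci -n)
echo "1000 0072" > /sys/bus/pci/drivers/pci-stub/new_id
# Detach the card from its current driver and bind it to pci-stub
echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
echo 0000:01:00.0 > /sys/bus/pci/drivers/pci-stub/bind
```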

Code:
May 20 13:51:09 test1 qm[704676]: <root@pam> starting task UPID:test1:000AC0A6:016313C0:519A0E2D:qmstart:100:root@pam:
May 20 13:51:09 test1 qm[704678]: start VM 100: UPID:test1:000AC0A6:016313C0:519A0E2D:qmstart:100:root@pam:
May 20 13:51:10 test1 kernel: pci-stub 0000:01:00.0: restoring config space at offset 0xf (was 0x100, writing 0x10b)
May 20 13:51:10 test1 kernel: pci-stub 0000:01:00.0: restoring config space at offset 0xc (was 0x0, writing 0xfe600001)
May 20 13:51:10 test1 kernel: pci-stub 0000:01:00.0: restoring config space at offset 0x7 (was 0x4, writing 0xfe680004)
May 20 13:51:10 test1 kernel: pci-stub 0000:01:00.0: restoring config space at offset 0x5 (was 0x4, writing 0xfe6c0004)
May 20 13:51:10 test1 kernel: pci-stub 0000:01:00.0: restoring config space at offset 0x4 (was 0x1, writing 0xe001)
May 20 13:51:10 test1 kernel: pci-stub 0000:01:00.0: restoring config space at offset 0x3 (was 0x0, writing 0x10)
May 20 13:51:10 test1 kernel: pci-stub 0000:01:00.0: restoring config space at offset 0x1 (was 0x100000, writing 0x100003)
May 20 13:51:10 test1 kernel: device tap100i0 entered promiscuous mode
May 20 13:51:10 test1 kernel: vmbr0: port 2(tap100i0) entering forwarding state
May 20 13:51:10 test1 kernel: pci-stub 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
May 20 13:51:10 test1 kernel: pci-stub 0000:01:00.0: restoring config space at offset 0xf (was 0x100, writing 0x10b)
May 20 13:51:10 test1 kernel: pci-stub 0000:01:00.0: restoring config space at offset 0xc (was 0x0, writing 0xfe600001)
May 20 13:51:10 test1 kernel: pci-stub 0000:01:00.0: restoring config space at offset 0x7 (was 0x4, writing 0xfe680004)
May 20 13:51:10 test1 kernel: pci-stub 0000:01:00.0: restoring config space at offset 0x5 (was 0x4, writing 0xfe6c0004)
May 20 13:51:10 test1 kernel: pci-stub 0000:01:00.0: restoring config space at offset 0x4 (was 0x1, writing 0xe001)
May 20 13:51:10 test1 kernel: pci-stub 0000:01:00.0: restoring config space at offset 0x3 (was 0x0, writing 0x10)
May 20 13:51:10 test1 kernel: pci-stub 0000:01:00.0: restoring config space at offset 0x1 (was 0x100000, writing 0x100003)
May 20 13:51:11 test1 kernel: assign device: host bdf = 1:0:0
May 20 13:51:11 test1 kernel: IRQ handler type mismatch for IRQ 16
May 20 13:51:11 test1 kernel: current handler: ehci_hcd:usb1
May 20 13:51:11 test1 kernel: Pid: 704685, comm: kvm veid: 0 Not tainted 2.6.32-20-pve #1
May 20 13:51:11 test1 kernel: Call Trace:
May 20 13:51:11 test1 kernel: [<ffffffff810e95c7>] ? __setup_irq+0x3e7/0x440
May 20 13:51:11 test1 kernel: [<ffffffffa05cbc90>] ? kvm_assigned_dev_intr+0x0/0xf0 [kvm]
May 20 13:51:11 test1 kernel: [<ffffffff810e9704>] ? request_threaded_irq+0xe4/0x1e0
May 20 13:51:11 test1 kernel: [<ffffffffa05d162d>] ? kvm_vm_ioctl+0x100d/0x10f0 [kvm]
May 20 13:51:11 test1 kernel: [<ffffffff8142fd41>] ? pci_conf1_read+0xc1/0x120
May 20 13:51:11 test1 kernel: [<ffffffff81431953>] ? raw_pci_read+0x23/0x40
May 20 13:51:11 test1 kernel: [<ffffffff8128e7ba>] ? pci_read_config+0x25a/0x280
May 20 13:51:11 test1 kernel: [<ffffffffa05cfaba>] ? kvm_dev_ioctl+0xaa/0x4c0 [kvm]
May 20 13:51:11 test1 kernel: [<ffffffff811a6afa>] ? vfs_ioctl+0x2a/0xa0
May 20 13:51:11 test1 kernel: [<ffffffff8120fee6>] ? read+0x166/0x210
May 20 13:51:11 test1 kernel: [<ffffffff811a712e>] ? do_vfs_ioctl+0x7e/0x570
May 20 13:51:11 test1 kernel: [<ffffffff811933b6>] ? vfs_read+0x116/0x190
May 20 13:51:11 test1 kernel: [<ffffffff811a766f>] ? sys_ioctl+0x4f/0x80
May 20 13:51:11 test1 kernel: [<ffffffff8100b182>] ? system_call_fastpath+0x16/0x1b
May 20 13:51:11 test1 kernel: pci-stub 0000:01:00.0: irq 42 for MSI/MSI-X
May 20 13:51:11 test1 kernel: pci-stub 0000:01:00.0: Invalid ROM contents
May 20 13:51:12 test1 kernel: pci-stub 0000:01:00.0: restoring config space at offset 0xf (was 0x100, writing 0x10b)
May 20 13:51:12 test1 kernel: pci-stub 0000:01:00.0: restoring config space at offset 0xc (was 0x0, writing 0xfe600001)
May 20 13:51:12 test1 kernel: pci-stub 0000:01:00.0: restoring config space at offset 0x7 (was 0x4, writing 0xfe680004)
May 20 13:51:12 test1 kernel: pci-stub 0000:01:00.0: restoring config space at offset 0x5 (was 0x4, writing 0xfe6c0004)
May 20 13:51:12 test1 kernel: pci-stub 0000:01:00.0: restoring config space at offset 0x4 (was 0x1, writing 0xe001)
May 20 13:51:12 test1 kernel: pci-stub 0000:01:00.0: restoring config space at offset 0x3 (was 0x0, writing 0x10)
May 20 13:51:12 test1 kernel: pci-stub 0000:01:00.0: restoring config space at offset 0x1 (was 0x100000, writing 0x100403)
May 20 13:51:12 test1 qm[704676]: <root@pam> end task UPID:test1:000AC0A6:016313C0:519A0E2D:qmstart:100:root@pam: OK
May 20 13:51:12 test1 kernel: pci-stub 0000:01:00.0: irq 42 for MSI/MSI-X
May 20 13:51:13 test1 ntpd[2484]: Listen normally on 10 tap100i0 fe80::4cff:56ff:fe85:5a0d UDP 123
May 20 13:51:13 test1 ntpd[2484]: peers refreshed
May 20 13:51:20 test1 kernel: tap100i0: no IPv6 routers present
May 20 13:51:22 test1 kernel: __ratelimit: 55 callbacks suppressed
May 20 13:51:22 test1 kernel: kvm: 704685: cpu0 unhandled rdmsr: 0x345
May 20 13:51:22 test1 kernel: kvm: 704685: cpu0 unhandled wrmsr: 0x680 data 0
May 20 13:51:22 test1 kernel: kvm: 704685: cpu0 unhandled wrmsr: 0x6c0 data 0
May 20 13:51:22 test1 kernel: kvm: 704685: cpu0 unhandled wrmsr: 0x681 data 0
May 20 13:51:22 test1 kernel: kvm: 704685: cpu0 unhandled wrmsr: 0x6c1 data 0
May 20 13:51:22 test1 kernel: kvm: 704685: cpu0 unhandled wrmsr: 0x682 data 0
May 20 13:51:22 test1 kernel: kvm: 704685: cpu0 unhandled wrmsr: 0x6c2 data 0
May 20 13:51:22 test1 kernel: kvm: 704685: cpu0 unhandled wrmsr: 0x683 data 0
May 20 13:51:22 test1 kernel: kvm: 704685: cpu0 unhandled wrmsr: 0x6c3 data 0
May 20 13:51:22 test1 kernel: kvm: 704685: cpu0 unhandled wrmsr: 0x684 data 0
May 20 13:51:22 test1 kernel: pci-stub 0000:01:00.0: irq 42 for MSI/MSI-X
May 20 13:51:23 test1 kernel: pci-stub 0000:01:00.0: irq 42 for MSI/MSI-X
May 20 13:51:23 test1 kernel: pci-stub 0000:01:00.0: irq 42 for MSI/MSI-X

Code:
# pveversion -v
pve-manager: 3.0-17 (pve-manager/3.0/f8f25665)
running kernel: 2.6.32-20-pve
proxmox-ve-2.6.32: 3.0-100
pve-kernel-2.6.32-20-pve: 2.6.32-100
lvm2: 2.02.95-pve3
clvm: 2.02.95-pve3
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-2
qemu-server: 3.0-12
pve-firmware: 1.0-22
libpve-common-perl: 3.0-4
libpve-access-control: 3.0-4
libpve-storage-perl: 3.0-6
vncterm: 1.1-3
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-12
ksm-control-daemon: 1.1-1

EDIT: some additions. When I start the VM from the command line, I see the following output - it might have something to do with the log entries above.

Code:
# qm start 100
kvm: -device pci-assign,host=01:00.0,id=hostpci0,bus=pci.0,addr=0x10: Host-side INTx sharing not supported, using MSI instead
Some devices do not work properly in this mode.
kvm: -device pci-assign,host=01:00.0,id=hostpci0,bus=pci.0,addr=0x10: pci-assign: Cannot read from host /sys/bus/pci/devices/0000:01:00.0/rom
Device option ROM contents are probably invalid (check dmesg).
Skip option ROM probe with rombar=0, or load from file with romfile=
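The "Invalid ROM contents" part can apparently be silenced by skipping the option ROM probe, as the message itself suggests. I believe (not verified on 3.0) the hostpci line in the VM config accepts a rombar option for this:

```
# /etc/pve/qemu-server/100.conf -- assumed syntax, check your qemu-server
# version; rombar=0 skips mapping the card's option ROM into the guest
hostpci0: 01:00.0,rombar=0
```

The HBA's option ROM is only its boot BIOS, so the guest should not need it unless you boot from the attached disks.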
 
I've got the same thing. My host had been rock solid for about a week, until last night it seems to have got into a mess during backups. I could still RDP to my DC, for instance, though slowly, but DHCP wasn't issuing leases off it, my router VM was unresponsive, etc. I suspect that's not related to this, however.

I'm passing through an Intel NIC directly to a pfSense KVM guest. Anyway, here are my logs related to this...

Code:
May 20 16:01:09 proxmox kernel: assign device: host bdf = a:0:0
May 20 16:01:09 proxmox kernel: IRQ handler type mismatch for IRQ 16
May 20 16:01:09 proxmox kernel: current handler: uhci_hcd:usb3
May 20 16:01:09 proxmox kernel: Pid: 184645, comm: kvm veid: 0 Not tainted 2.6.32-20-pve #1
May 20 16:01:09 proxmox kernel: Call Trace:
May 20 16:01:09 proxmox kernel: [<ffffffff810e95c7>] ? __setup_irq+0x3e7/0x440
May 20 16:01:09 proxmox kernel: [<ffffffffa0588c90>] ? kvm_assigned_dev_intr+0x0/0xf0 [kvm]
May 20 16:01:09 proxmox kernel: [<ffffffff810e9704>] ? request_threaded_irq+0xe4/0x1e0
May 20 16:01:09 proxmox kernel: [<ffffffffa058e62d>] ? kvm_vm_ioctl+0x100d/0x10f0 [kvm]
May 20 16:01:09 proxmox kernel: [<ffffffff8142fd41>] ? pci_conf1_read+0xc1/0x120
May 20 16:01:09 proxmox kernel: [<ffffffff81431953>] ? raw_pci_read+0x23/0x40
May 20 16:01:09 proxmox kernel: [<ffffffff8128e7ba>] ? pci_read_config+0x25a/0x280
May 20 16:01:09 proxmox kernel: [<ffffffffa058caba>] ? kvm_dev_ioctl+0xaa/0x4c0 [kvm]
May 20 16:01:09 proxmox kernel: [<ffffffff811a6afa>] ? vfs_ioctl+0x2a/0xa0
May 20 16:01:09 proxmox kernel: [<ffffffff8120fee6>] ? read+0x166/0x210
May 20 16:01:09 proxmox kernel: [<ffffffff811a712e>] ? do_vfs_ioctl+0x7e/0x570
May 20 16:01:09 proxmox kernel: [<ffffffff811933b6>] ? vfs_read+0x116/0x190
May 20 16:01:09 proxmox kernel: [<ffffffff811a766f>] ? sys_ioctl+0x4f/0x80
May 20 16:01:09 proxmox kernel: [<ffffffff8100b182>] ? system_call_fastpath+0x16/0x1b
May 20 16:01:09 proxmox kernel: pci-stub 0000:0a:00.0: irq 28 for MSI/MSI-X
May 20 16:01:09 proxmox kernel: pci-stub 0000:0a:00.0: Invalid ROM contents
May 20 16:01:09 proxmox kernel: pci-stub 0000:0a:00.0: restoring config space at offset 0xf (was 0x100, writing 0x10b)
May 20 16:01:09 proxmox kernel: pci-stub 0000:0a:00.0: restoring config space at offset 0xc (was 0x0, writing 0xf8100001)
 
Well, from what I gathered, it's just a warning. Shared INTx interrupts are superseded by MSI on PCI 2.2+ and PCIe, so in theory this should not cause any problems. MSI does get used, and IRQ 42 is assigned to my VM, which seems to work properly even under stress. It would be nice to have confirmation from someone more versed in the subject; I'm not really comfortable with these warnings even if there are no apparent problems so far...
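One way to double-check on the host (these commands are host-specific, so treat them as a sketch): the assigned IRQ 42 should show up as PCI-MSI-edge in /proc/interrupts with a single handler, unlike the shared IO-APIC-level line for IRQ 16:

```shell
# The shared legacy line: IRQ 16 lists ehci_hcd (and would have listed kvm)
grep '^ *16:' /proc/interrupts
# The assigned device: IRQ 42 should appear as PCI-MSI-edge
grep '^ *42:' /proc/interrupts
# The MSI capability on the card should read "Enable+"
lspci -vs 01:00.0 | grep -i 'MSI:'
```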
 