Help with stack trace! iommu related

anaxagoras

Renowned Member
Aug 23, 2012
This happens any time I have intel_iommu=on set, even if no PCI devices are passed through and the respective VM is off; PCI passthrough itself does work.

It happens consistently about 15-30 minutes after boot. Watching top, I see some system processes on the host (events, watchdog, apache2) start to eat over 100% CPU, and the system becomes VERY slow to respond to input. I sometimes see soft lockup errors on the console.

If I turn off intel_iommu via GRUB, everything is fine.
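
For anyone following along, toggling the flag is just the usual GRUB edit; a minimal sketch for a stock Debian/PVE 2.x install (assuming the default GRUB 2 setup, your existing options may differ):
Code:
# /etc/default/grub -- add or remove intel_iommu=on on the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

# regenerate grub.cfg and reboot for the change to take effect
update-grub
reboot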

Aug 23 00:29:22 proxmox kernel: ------------[ cut here ]------------
Aug 23 00:29:22 proxmox kernel: WARNING: at drivers/pci/intel-iommu.c:2775 intel_unmap_page+0x15f/0x180() (Not tainted)
Aug 23 00:29:22 proxmox kernel: Hardware name: X9DRi-LN4+/X9DR3-LN4+
Aug 23 00:29:22 proxmox kernel: Driver unmaps unmatched page at PFN 0
Aug 23 00:29:22 proxmox kernel: Modules linked in: ext4 jbd2 snd_pcsp snd_pcm i2c_i801 snd_timer i2c_core snd ioatdma sb_edac soundcore tpm_tis edac_core tpm shpchp
tpm_bios snd_page_alloc serio_raw ext3 jbd mbcache ses enclosure isci libsas igb dca ahci scsi_transport_sas [last unloaded: scsi_wait_scan]
Aug 23 00:29:22 proxmox kernel: Pid: 0, comm: swapper veid: 0 Not tainted 2.6.32-14-pve #1
Aug 23 00:29:22 proxmox kernel: Call Trace:
Aug 23 00:29:22 proxmox kernel: <IRQ> [<ffffffff8106c608>] ? warn_slowpath_common+0x88/0xc0
Aug 23 00:29:22 proxmox kernel: [<ffffffff8106c6f6>] ? warn_slowpath_fmt+0x46/0x50
Aug 23 00:29:22 proxmox kernel: [<ffffffff812a6bdb>] ? find_iova+0x5b/0x90
Aug 23 00:29:22 proxmox kernel: [<ffffffff812aae8f>] ? intel_unmap_page+0x15f/0x180
Aug 23 00:29:22 proxmox kernel: [<ffffffffa0031e47>] ? igb_poll+0x137/0x11c0 [igb]
Aug 23 00:29:22 proxmox kernel: [<ffffffff81081374>] ? mod_timer+0x144/0x230
Aug 23 00:29:22 proxmox kernel: [<ffffffff814ff71b>] ? br_multicast_send_query+0xeb/0x100
Aug 23 00:29:22 proxmox kernel: [<ffffffff8111dd86>] ? ctx_sched_in+0x246/0x320
Aug 23 00:29:22 proxmox kernel: [<ffffffff8145f2d3>] ? net_rx_action+0x103/0x2e0
Aug 23 00:29:22 proxmox kernel: [<ffffffff81075363>] ? __do_softirq+0x103/0x260
Aug 23 00:29:22 proxmox kernel: [<ffffffff8100c30c>] ? call_softirq+0x1c/0x30
Aug 23 00:29:22 proxmox kernel: [<ffffffff8100df35>] ? do_softirq+0x65/0xa0
Aug 23 00:29:22 proxmox kernel: [<ffffffff8107518d>] ? irq_exit+0xcd/0xd0
Aug 23 00:29:22 proxmox kernel: [<ffffffff8152dad5>] ? do_IRQ+0x75/0xf0
Aug 23 00:29:22 proxmox kernel: [<ffffffff8100bb13>] ? ret_from_intr+0x0/0x11
Aug 23 00:29:22 proxmox kernel: <EOI> [<ffffffff812d147e>] ? intel_idle+0xde/0x170
Aug 23 00:29:22 proxmox kernel: [<ffffffff812d1461>] ? intel_idle+0xc1/0x170
Aug 23 00:29:22 proxmox kernel: [<ffffffff814270b7>] ? cpuidle_idle_call+0xa7/0x140
Aug 23 00:29:22 proxmox kernel: [<ffffffff81009e63>] ? cpu_idle+0xb3/0x110
Aug 23 00:29:22 proxmox kernel: [<ffffffff8150c915>] ? rest_init+0x85/0x90
Aug 23 00:29:22 proxmox kernel: [<ffffffff81c2df6e>] ? start_kernel+0x412/0x41e
Aug 23 00:29:22 proxmox kernel: [<ffffffff81c2d33a>] ? x86_64_start_reservations+0x125/0x129
Aug 23 00:29:22 proxmox kernel: [<ffffffff81c2d438>] ? x86_64_start_kernel+0xfa/0x109
Aug 23 00:29:22 proxmox kernel: ---[ end trace 5c747b80ffafdf1e ]---
 
I'm in the process of running memtest86 right now to check for bad memory; once it's done I'll run that command, hopefully tonight. Less than a week ago I did an "aptitude update" and an "aptitude full-upgrade"; I think it's on kernel 2.6.32-14.
 
root@proxmox:~# pveversion -v
pve-manager: 2.1-14 (pve-manager/2.1/f32f3f46)
running kernel: 2.6.32-14-pve
proxmox-ve-2.6.32: 2.1-74
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-14-pve: 2.6.32-74
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.3-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.92-3
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.8-1
pve-cluster: 1.0-27
qemu-server: 2.0-49
pve-firmware: 1.0-18
libpve-common-perl: 1.0-30
libpve-access-control: 1.0-24
libpve-storage-perl: 2.0-30
vncterm: 1.0-3
vzctl: 3.0.30-2pve5
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.1-8
ksm-control-daemon: 1.1-1

root@proxmox:~# uname -a
Linux proxmox 2.6.32-14-pve #1 SMP Tue Aug 21 08:24:37 CEST 2012 x86_64 GNU/Linux
 
The system is VERY sluggish again, but nothing is running: load averages are high, yet there is no iowait. apache2 was running and eating about 50% CPU, so I killed it. I don't understand how the system can be idle yet have high load averages.
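
One general note (not specific to this box): the Linux load average counts tasks in uninterruptible sleep (state D) as well as runnable ones, so a host can look idle in top and still show high load. A quick way to check for stuck tasks:
Code:
# list tasks in uninterruptible sleep; these count toward the load average
ps -eo state,pid,comm | awk '$1 ~ /^D/'

# dump the kernel stack of a stuck task (replace <pid>), if /proc/<pid>/stack exists
cat /proc/<pid>/stack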


ps sorted by CPU usage:

# ps -e -o pcpu,cpu,nice,state,cputime,args --sort pcpu | sed "/^ 0.0 /d"
%CPU CPU NI S TIME COMMAND
0.1 - 0 S 00:00:16 sshd: root@pts/2
0.1 - 0 S 00:00:19 pvestatd
0.1 - 0 S 00:00:01 pvedaemon worker
0.1 - 0 S 00:00:47 [kblockd/0]
0.1 - 0 S 00:00:03 pvedaemon worker
0.1 - 0 S 00:00:02 pvedaemon worker
0.1 - 0 S 00:00:56 [ksoftirqd/0]
0.4 - 0 R 00:02:19 [events/0]
0.6 - 0 S 00:00:00 /usr/bin/mandb --quiet
0.7 - - S 00:03:38 [migration/0]
1.3 - - S 00:06:15 [watchdog/0]


# w
06:48:40 up 7:57, 3 users, load average: 10.09, 14.87, 8.82
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root tty1 22:58 6:29m 0.22s 0.21s -bash
root pts/0 192.168.102.174 22:59 0.00s 0.90s 0.00s w
root pts/2 nikos-x220 02:21 23.00s 0.32s 0.32s -bash


# iostat
Linux 2.6.32-14-pve (proxmox) 08/26/2012 _x86_64_ (12 CPU)


avg-cpu: %user %nice %system %iowait %steal %idle
0.83 0.00 0.15 0.47 0.00 98.54


Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sdb 0.03 0.30 0.00 8668 0
sda 23.27 95.40 806.94 2735268 23135544
dm-0 28.06 47.32 217.22 1356762 6227944
dm-1 0.01 0.05 0.00 1320 0
dm-2 75.61 47.91 589.71 1373750 16907624
 
Similar problem

I think I have a similar problem.
Hardware:
Code:
Supermicro X8DTi-LN4F
Xeon L5520
Intel® 82576 Dual-Port Gigabit Ethernet Controller
Software:
Code:
# pveversion -v
pve-manager: 2.1-14 (pve-manager/2.1/f32f3f46)
running kernel: 2.6.32-14-pve
proxmox-ve-2.6.32: 2.1-74
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-14-pve: 2.6.32-74
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.3-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.92-3
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.8-1
pve-cluster: 1.0-27
qemu-server: 2.0-49
pve-firmware: 1.0-18
libpve-common-perl: 1.0-30
libpve-access-control: 1.0-24
libpve-storage-perl: 2.0-30
vncterm: 1.0-3
vzctl: 3.0.30-2pve5
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.1-8
ksm-control-daemon: 1.1-1
Code:
# dmesg | grep -e DMAR -e IOMMU
ACPI: DMAR 000000007f75e0d0 00138 (v01    AMI  OEMDMAR 00000001 MSFT 00000097)
Intel-IOMMU: enabled
DMAR: Host address width 40
DMAR: DRHD base: 0x000000fbffe000 flags: 0x1
IOMMU fbffe000: ver 1:0 cap c90780106f0462 ecap f020fe
DMAR: RMRR base: 0x000000000e6000 end: 0x000000000e9fff
DMAR: RMRR base: 0x0000007f7ec000 end: 0x0000007f7fffff
DMAR: ATSR flags: 0x0
IOMMU 0xfbffe000: using Queued invalidation
IOMMU: Setting RMRR:
IOMMU: Setting identity map for device 0000:00:1d.0 [0x7f7ec000 - 0x7f800000]
IOMMU: Setting identity map for device 0000:00:1d.1 [0x7f7ec000 - 0x7f800000]
IOMMU: Setting identity map for device 0000:00:1d.2 [0x7f7ec000 - 0x7f800000]
IOMMU: Setting identity map for device 0000:00:1d.7 [0x7f7ec000 - 0x7f800000]
IOMMU: Setting identity map for device 0000:00:1a.0 [0x7f7ec000 - 0x7f800000]
IOMMU: Setting identity map for device 0000:00:1a.1 [0x7f7ec000 - 0x7f800000]
IOMMU: Setting identity map for device 0000:00:1a.2 [0x7f7ec000 - 0x7f800000]
IOMMU: Setting identity map for device 0000:00:1a.7 [0x7f7ec000 - 0x7f800000]
IOMMU: Setting identity map for device 0000:00:1d.0 [0xe6000 - 0xea000]
IOMMU: Setting identity map for device 0000:00:1d.1 [0xe6000 - 0xea000]
IOMMU: Setting identity map for device 0000:00:1d.2 [0xe6000 - 0xea000]
IOMMU: Setting identity map for device 0000:00:1d.7 [0xe6000 - 0xea000]
IOMMU: Setting identity map for device 0000:00:1a.0 [0xe6000 - 0xea000]
IOMMU: Setting identity map for device 0000:00:1a.1 [0xe6000 - 0xea000]
IOMMU: Setting identity map for device 0000:00:1a.2 [0xe6000 - 0xea000]
IOMMU: Setting identity map for device 0000:00:1a.7 [0xe6000 - 0xea000]
IOMMU: Prepare 0-16MiB unity mapping for LPC
IOMMU: Setting identity map for device 0000:00:1f.0 [0x0 - 0x1000000]
Syslog:
Code:
Aug 27 02:33:18 proxmox kernel: ------------[ cut here ]------------
Aug 27 02:33:18 proxmox kernel: WARNING: at drivers/pci/intel-iommu.c:2775 intel_unmap_page+0x15f/0x180() (Not tainted)
Aug 27 02:33:18 proxmox kernel: Hardware name: X8DT3
Aug 27 02:33:18 proxmox kernel: Driver unmaps unmatched page at PFN 0
Aug 27 02:33:18 proxmox kernel: Modules linked in: ib_core ib_addr ipv6 iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi fuse snd_pcsp snd_pcm snd_timer tpm_tis snd tpm i7core_edac tpm_bios serio_raw soundcore i2c_i801 edac_core snd_page_alloc ioatdma i2c_core ext4 mbcache jbd2 ata_generic pata_acpi ata_piix igb dca [last unloaded: scsi_wait_scan]
Aug 27 02:33:18 proxmox kernel: Pid: 0, comm: swapper veid: 0 Not tainted 2.6.32-14-pve #1
Aug 27 02:33:18 proxmox kernel: Call Trace:
Aug 27 02:33:18 proxmox kernel: <IRQ>  [<ffffffff8106c608>] ? warn_slowpath_common+0x88/0xc0
Aug 27 02:33:18 proxmox kernel: [<ffffffff8106c6f6>] ? warn_slowpath_fmt+0x46/0x50
Aug 27 02:33:18 proxmox kernel: [<ffffffff812a6bdb>] ? find_iova+0x5b/0x90
Aug 27 02:33:18 proxmox kernel: [<ffffffff812aae8f>] ? intel_unmap_page+0x15f/0x180
Aug 27 02:33:18 proxmox kernel: [<ffffffffa000de47>] ? igb_poll+0x137/0x11c0 [igb]
Aug 27 02:33:18 proxmox kernel: [<ffffffff81063a66>] ? rebalance_domains+0x1a6/0x5b0
Aug 27 02:33:18 proxmox kernel: [<ffffffff8111dd86>] ? ctx_sched_in+0x246/0x320
Aug 27 02:33:18 proxmox kernel: [<ffffffff810ef767>] ? cpu_quiet_msk+0x77/0x130
Aug 27 02:33:18 proxmox kernel: [<ffffffff8145f2d3>] ? net_rx_action+0x103/0x2e0
Aug 27 02:33:18 proxmox kernel: [<ffffffff81075363>] ? __do_softirq+0x103/0x260
Aug 27 02:33:18 proxmox kernel: [<ffffffff810a5a38>] ? tick_dev_program_event+0x68/0xd0
Aug 27 02:33:18 proxmox kernel: [<ffffffff8109a7a0>] ? hrtimer_interrupt+0x140/0x250
Aug 27 02:33:18 proxmox kernel: [<ffffffff8100c30c>] ? call_softirq+0x1c/0x30
Aug 27 02:33:18 proxmox kernel: [<ffffffff8100df35>] ? do_softirq+0x65/0xa0
Aug 27 02:33:18 proxmox kernel: [<ffffffff8107518d>] ? irq_exit+0xcd/0xd0
Aug 27 02:33:18 proxmox kernel: [<ffffffff8152dbc0>] ? smp_apic_timer_interrupt+0x70/0x9b
Aug 27 02:33:18 proxmox kernel: [<ffffffff8100bcd3>] ? apic_timer_interrupt+0x13/0x20
Aug 27 02:33:18 proxmox kernel: <EOI>  [<ffffffff812d147e>] ? intel_idle+0xde/0x170
Aug 27 02:33:18 proxmox kernel: [<ffffffff812d1461>] ? intel_idle+0xc1/0x170
Aug 27 02:33:18 proxmox kernel: [<ffffffff8109cf4d>] ? sched_clock_cpu+0xcd/0x110
Aug 27 02:33:18 proxmox kernel: [<ffffffff814270b7>] ? cpuidle_idle_call+0xa7/0x140
Aug 27 02:33:18 proxmox kernel: [<ffffffff81009e63>] ? cpu_idle+0xb3/0x110
Aug 27 02:33:18 proxmox kernel: [<ffffffff8150c915>] ? rest_init+0x85/0x90
Aug 27 02:33:18 proxmox kernel: [<ffffffff81c2df6e>] ? start_kernel+0x412/0x41e
Aug 27 02:33:18 proxmox kernel: [<ffffffff81c2d33a>] ? x86_64_start_reservations+0x125/0x129
Aug 27 02:33:18 proxmox kernel: [<ffffffff81c2d438>] ? x86_64_start_kernel+0xfa/0x109
Aug 27 02:33:18 proxmox kernel: ---[ end trace 660dd6008ed2e052 ]---

Starting the VM:
Code:
# qm start 100
pci-assign: Cannot read from host /sys/bus/pci/devices/0000:04:00.0/rom
    Device option ROM contents are probably invalid (check dmesg).
    Skip option ROM probe with rombar=0, or load from file with romfile=

Inside the VM I can see the NIC (igb0).
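
The rombar=0 hint in the error message refers to a property of the pci-assign device the warning comes from. Purely as an untested sketch (assuming pve-qemu-kvm 1.1 and the args: override in the VM config accept this form), it could be passed like this instead of the hostpci0 line:
Code:
# /etc/pve/qemu-server/100.conf -- hypothetical example, skips the option ROM probe
args: -device pci-assign,host=04:00.0,rombar=0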
 
Re: Similar problem

If I run the 2.6.32-11 kernel, then the syslog is clean.
Code:
# pveversion -v
pve-manager: 2.1-14 (pve-manager/2.1/f32f3f46)
running kernel: 2.6.32-11-pve
proxmox-ve-2.6.32: 2.1-74
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-14-pve: 2.6.32-74
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.3-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.92-3
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.8-1
pve-cluster: 1.0-27
qemu-server: 2.0-49
pve-firmware: 1.0-18
libpve-common-perl: 1.0-30
libpve-access-control: 1.0-24
libpve-storage-perl: 2.0-30
vncterm: 1.0-3
vzctl: 3.0.30-2pve5
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.1-8
ksm-control-daemon: 1.1-1
With the 2.6.32-11-pve kernel, the first boot message I get is:
APEI: Can not request iomem region <000000007f7741ea-000000007f7741ec> for GARs.

With the 2.6.32-14-pve kernel, the first boot message I get is:
ERST: Failed to get Error Log Address Range.
 
Re: Similar problem

Hello!

I can confirm this problem.

Code:
pveversion -v
pve-manager: 2.1-14 (pve-manager/2.1/f32f3f46)
running kernel: 2.6.32-14-pve
proxmox-ve-2.6.32: 2.1-74
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-14-pve: 2.6.32-74
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.3-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.92-3
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.8-1
pve-cluster: 1.0-27
qemu-server: 2.0-49
pve-firmware: 1.0-18
libpve-common-perl: 1.0-30
libpve-access-control: 1.0-24
libpve-storage-perl: 2.0-30
vncterm: 1.0-3
vzctl: 3.0.30-2pve5
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.1-8
ksm-control-daemon: 1.1-1

On:
Code:
Supermicro X9DR7-JLN4F
ethernet-controller-i350

Code:
igb: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
vmbr76: port 1(eth0) entering forwarding state
------------[ cut here ]------------
WARNING: at drivers/pci/intel-iommu.c:2775 intel_unmap_page+0x15f/0x180() (Not tainted)
Hardware name: X9DR7/E-LN4F
Driver unmaps unmatched page at PFN 0
Modules linked in: kvm_intel kvm i2c_i801 tpm_tis snd_pcsp tpm tpm_bios i2c_core snd_pcm snd_timer ioatdma sb_edac snd edac_core soundcore snd_page_alloc ext3 jbd mbcache isci sg libsas igb ahci mpt2sas dca arcmsr scsi_transport_sas raid_class [last unloaded: kvm_intel]
Pid: 0, comm: swapper veid: 0 Not tainted 2.6.32-14-pve #1
Call Trace:
 <IRQ>  [<ffffffff8106c608>] ? warn_slowpath_common+0x88/0xc0
 [<ffffffff8106c6f6>] ? warn_slowpath_fmt+0x46/0x50
 [<ffffffff812a6bdb>] ? find_iova+0x5b/0x90
 [<ffffffff812aae8f>] ? intel_unmap_page+0x15f/0x180
 [<ffffffffa0079e47>] ? igb_poll+0x137/0x11c0 [igb]
 [<ffffffff81081374>] ? mod_timer+0x144/0x230
 [<ffffffff814ff71b>] ? br_multicast_send_query+0xeb/0x100
 [<ffffffff8111dd86>] ? ctx_sched_in+0x246/0x320
 [<ffffffff8145f2d3>] ? net_rx_action+0x103/0x2e0
 [<ffffffff81075363>] ? __do_softirq+0x103/0x260
 [<ffffffff8100c30c>] ? call_softirq+0x1c/0x30
 [<ffffffff8100df35>] ? do_softirq+0x65/0xa0
 [<ffffffff8107518d>] ? irq_exit+0xcd/0xd0
 [<ffffffff8152dad5>] ? do_IRQ+0x75/0xf0
 [<ffffffff8100bb13>] ? ret_from_intr+0x0/0x11
 <EOI>  [<ffffffff81061c8f>] ? finish_task_switch+0x4f/0xf0
 [<ffffffff815256e6>] ? thread_return+0x4e/0x7c8
 [<ffffffff810a6958>] ? tick_nohz_stop_sched_tick+0x2b8/0x3e0
 [<ffffffff81009e9b>] ? cpu_idle+0xeb/0x110
 [<ffffffff8150c915>] ? rest_init+0x85/0x90
 [<ffffffff81c2df6e>] ? start_kernel+0x412/0x41e
 [<ffffffff81c2d33a>] ? x86_64_start_reservations+0x125/0x129
 [<ffffffff81c2d438>] ? x86_64_start_kernel+0xfa/0x109
---[ end trace 8c421b7ac01f804f ]---
Loading iSCSI transport class v2.0-870.
fuse init (API version 7.13)
iscsi: registered transport (tcp)

Look at this thread: http://forum.proxmox.com/archive/index.php/t-10004.html

The igb driver was updated to version 3.4.8.

After some time the server becomes slow. If I unload the igb driver, the server goes back to normal soon after.
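
In case it helps others, the out-of-tree igb build is the usual Intel-tarball procedure; a rough sketch (assumes the kernel headers package for the running kernel is installed, file names may differ):
Code:
# build and install the out-of-tree igb driver from Intel's source tarball
tar xzf igb-3.4.8.tar.gz
cd igb-3.4.8/src
make install

# reload the driver (or simply reboot)
rmmod igb && modprobe igb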



Excuse my bad English.

 
Re: Similar problem

I have also been hit by this bug. The kernel emitted:

Sep 27 11:36:07 proxmox1 kernel: WARNING: at drivers/pci/intel-iommu.c:2775 intel_unmap_page+0x15f/0x180() (Not tainted)
Sep 27 11:36:07 proxmox1 kernel: Hardware name: PowerEdge R510
Sep 27 11:36:07 proxmox1 kernel: Driver unmaps unmatched page at PFN 0
Sep 27 11:36:07 proxmox1 kernel: Modules linked in: kvm_intel kvm sg snd_pcsp snd_pcm shpchp snd_timer snd tpm_tis dcdbas soundcore tpm i7core_edac tpm_bios snd_page_alloc ioatdma serio_raw edac_core power_meter usb_storage ext4 mbcache jbd2 ses enclosure megaraid_sas igb dca bnx2 [last unloaded: scsi_wait_scan]
Sep 27 11:36:07 proxmox1 kernel: Pid: 0, comm: swapper veid: 0 Not tainted 2.6.32-14-pve #1
Sep 27 11:36:07 proxmox1 kernel: Call Trace:
Sep 27 11:36:07 proxmox1 kernel: <IRQ> [<ffffffff8106c608>] ? warn_slowpath_common+0x88/0xc0
Sep 27 11:36:07 proxmox1 kernel: [<ffffffff8106c6f6>] ? warn_slowpath_fmt+0x46/0x50
Sep 27 11:36:07 proxmox1 kernel: [<ffffffff812a6bdb>] ? find_iova+0x5b/0x90
Sep 27 11:36:07 proxmox1 kernel: [<ffffffff812aae8f>] ? intel_unmap_page+0x15f/0x180
Sep 27 11:36:07 proxmox1 kernel: [<ffffffffa0029e47>] ? igb_poll+0x137/0x11c0 [igb]
Sep 27 11:36:07 proxmox1 kernel: [<ffffffff8101cb74>] ? x86_pmu_enable+0x114/0x280
Sep 27 11:36:07 proxmox1 kernel: [<ffffffff811190fb>] ? perf_pmu_enable+0x2b/0x40
Sep 27 11:36:07 proxmox1 kernel: [<ffffffff8111e898>] ? perf_event_task_tick+0xa8/0x2f0
Sep 27 11:36:07 proxmox1 kernel: [<ffffffff8145f2d3>] ? net_rx_action+0x103/0x2e0
Sep 27 11:36:07 proxmox1 kernel: [<ffffffff81075363>] ? __do_softirq+0x103/0x260
Sep 27 11:36:07 proxmox1 kernel: [<ffffffff8100c30c>] ? call_softirq+0x1c/0x30
Sep 27 11:36:07 proxmox1 kernel: [<ffffffff8100df35>] ? do_softirq+0x65/0xa0
Sep 27 11:36:07 proxmox1 kernel: [<ffffffff8107518d>] ? irq_exit+0xcd/0xd0
Sep 27 11:36:07 proxmox1 kernel: [<ffffffff8152dad5>] ? do_IRQ+0x75/0xf0
Sep 27 11:36:07 proxmox1 kernel: [<ffffffff8100bb13>] ? ret_from_intr+0x0/0x11
Sep 27 11:36:07 proxmox1 kernel: <EOI> [<ffffffff812d147e>] ? intel_idle+0xde/0x170
Sep 27 11:36:07 proxmox1 kernel: [<ffffffff812d1461>] ? intel_idle+0xc1/0x170
Sep 27 11:36:07 proxmox1 kernel: [<ffffffff8109cf4d>] ? sched_clock_cpu+0xcd/0x110
Sep 27 11:36:07 proxmox1 kernel: [<ffffffff814270b7>] ? cpuidle_idle_call+0xa7/0x140
Sep 27 11:36:07 proxmox1 kernel: [<ffffffff81009e63>] ? cpu_idle+0xb3/0x110
Sep 27 11:36:07 proxmox1 kernel: [<ffffffff8150c915>] ? rest_init+0x85/0x90
Sep 27 11:36:07 proxmox1 kernel: [<ffffffff81c2df6e>] ? start_kernel+0x412/0x41e
Sep 27 11:36:07 proxmox1 kernel: [<ffffffff81c2d33a>] ? x86_64_start_reservations+0x125/0x129
Sep 27 11:36:07 proxmox1 kernel: [<ffffffff81c2d438>] ? x86_64_start_kernel+0xfa/0x109

with intel_iommu=on, after enabling SR-IOV in the BIOS.

root@proxmox1:~# pveversion -v
pve-manager: 2.1-14 (pve-manager/2.1/f32f3f46)
running kernel: 2.6.32-12-pve
proxmox-ve-2.6.32: 2.1-73
pve-kernel-2.6.32-12-pve: 2.6.32-68
pve-kernel-2.6.32-14-pve: 2.6.32-73
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.3-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.92-3
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.8-1
pve-cluster: 1.0-27
qemu-server: 2.0-49
pve-firmware: 1.0-18
libpve-common-perl: 1.0-30
libpve-access-control: 1.0-24
libpve-storage-perl: 2.0-30
vncterm: 1.0-2
vzctl: 3.0.30-2pve5
vzprocps: not correctly installed
vzquota: 3.0.12-3
pve-qemu-kvm: 1.1-8
ksm-control-daemon: 1.1-1

---------------------
Main System Chassis
---------------------
Chassis Information
Chassis Model : PowerEdge R510
System Revision : II

[cut]
Processor 1
Processor Brand : Intel(R) Xeon(R) CPU E5645 @ 2.40GHz
Processor Version : Model 44 Stepping 2
Voltage : 1200 mV

The system locked up completely after approximately 3 hours of operation. The workaround was to use the 2.6.32-12-pve kernel.
 
Re: Similar problem

Hi, I upgraded and rebooted a while ago. The new kernel also emitted the oops:

WARNING: at drivers/pci/intel-iommu.c:2775 intel_unmap_page+0x15f/0x180() (Not tainted)
Hardware name: PowerEdge R510
Driver unmaps unmatched page at PFN 0
Modules linked in: sg kvm_intel kvm usb_storage snd_pcsp shpchp tpm_tis snd_pcm tpm snd_timer i7core_edac serio_raw dcdbas ioatdma tpm_bios edac_core snd power_meter soundcore snd_page_alloc ext4 mbcache jbd2 ses enclosure igb dca megaraid_sas bnx2 [last unloaded: scsi_wait_scan]
Pid: 0, comm: swapper veid: 0 Not tainted 2.6.32-15-pve #1
Call Trace:
<IRQ> [<ffffffff8106c608>] ? warn_slowpath_common+0x88/0xc0
[<ffffffff8106c6f6>] ? warn_slowpath_fmt+0x46/0x50
[<ffffffff812a77db>] ? find_iova+0x5b/0x90
[<ffffffff812aba8f>] ? intel_unmap_page+0x15f/0x180
[<ffffffffa0044e47>] ? igb_poll+0x137/0x11c0 [igb]
[<ffffffff81081424>] ? mod_timer+0x144/0x230
[<ffffffff81500b2b>] ? br_multicast_send_query+0xeb/0x100
[<ffffffff8111dd86>] ? group_sched_in+0x96/0x170
[<ffffffff814603b3>] ? net_rx_action+0x103/0x2e0
[<ffffffff81075413>] ? __do_softirq+0x103/0x260
[<ffffffff8100c30c>] ? call_softirq+0x1c/0x30
[<ffffffff8100df35>] ? do_softirq+0x65/0xa0
[<ffffffff8107523d>] ? irq_exit+0xcd/0xd0
[<ffffffff8152eef5>] ? do_IRQ+0x75/0xf0
[<ffffffff8100bb13>] ? ret_from_intr+0x0/0x11
<EOI> [<ffffffff812d207e>] ? intel_idle+0xde/0x170
[<ffffffff812d2061>] ? intel_idle+0xc1/0x170
[<ffffffff8109d04d>] ? sched_clock_cpu+0xcd/0x110
[<ffffffff81427d17>] ? cpuidle_idle_call+0xa7/0x140
[<ffffffff81009e63>] ? cpu_idle+0xb3/0x110
[<ffffffff8150dd35>] ? rest_init+0x85/0x90
[<ffffffff81c2df6e>] ? start_kernel+0x412/0x41e
[<ffffffff81c2d33a>] ? x86_64_start_reservations+0x125/0x129
[<ffffffff81c2d438>] ? x86_64_start_kernel+0xfa/0x109

pveversion -v
pve-manager: 2.1-14 (pve-manager/2.1/f32f3f46)
running kernel: 2.6.32-15-pve
proxmox-ve-2.6.32: 2.1-74
pve-kernel-2.6.32-12-pve: 2.6.32-68
pve-kernel-2.6.32-14-pve: 2.6.32-74
pve-kernel-2.6.32-15-pve: 2.6.32-78
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.3-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.92-3
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.8-1
pve-cluster: 1.0-27
qemu-server: 2.0-49
pve-firmware: 1.0-19
libpve-common-perl: 1.0-30
libpve-access-control: 1.0-24
libpve-storage-perl: 2.0-31
vncterm: 1.0-3
vzctl: 3.0.30-2pve5
vzprocps: not correctly installed
vzquota: 3.0.12-3
pve-qemu-kvm: 1.1-8
ksm-control-daemon: 1.1-1

Waiting to see if the lockup also occurs.
 
Re: Similar problem

The system froze with the 2.6.32-15 kernel as well. Is there any more info I could provide? I could try to build the kernel with the patch from upstream (https://bugzilla.redhat.com/show_bug.cgi?id=815998), but I am unsure whether there is a source package for the pve kernel. Also, the server is in production, so it would be nice if anyone could confirm that the upstream patch might fix this.
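
For what it's worth, the pve kernel sources are published in the Proxmox git repositories, so a patched build should be possible in principle; the following is only a rough sketch, and the repository name, patch handling and build target are my assumptions, not verified:
Code:
# rough sketch only -- check git.proxmox.com for the actual repository and build docs
git clone git://git.proxmox.com/git/pve-kernel-2.6.32.git
cd pve-kernel-2.6.32
cp /path/to/intel-iommu-fix.patch .   # hypothetical patch file from the upstream bug
# hook the patch into the existing patch list in the Makefile, then build the .deb:
make
dpkg -i pve-kernel-*.deb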
 
Re: Similar problem

The Red Hat Bugzilla entry (https://bugzilla.redhat.com/show_bug.cgi?id=815998) has its status set to "CLOSED DUPLICATE of bug 827193", so I thought there is a solution after all. The problem is that the bug it was duplicated against seems to be private. I'm not sure if it's restricted to RH subscribed accounts or private because it contains customer data. Is it likely that RH marked the bug private so that only its subscribed users can see it?
 
Re: Similar problem

According to a comment by a Red Hat engineer, it seems the problem (or something quite similar) has been addressed by Red Hat in their kernel-2.6.32-298.el6. OpenVZ kernels (which I presume the pve kernels are based on) are still using the 279.9.1 build. How does one build a custom kernel for pve? Or do I have to wait at least until the OpenVZ guys rebase to the new kernel? I am not using OpenVZ at all and I am not sure I ever will; all my VMs are under KVM. I just want to make sure the bug is gone with the new version...
 
Re: Similar problem



Our latest kernel (2.6.32-16) is based on 2.6.32-279.9.1.el6.
 
Re: Similar problem

Still an issue here: IBM x3550 M2 with intel_iommu=on.

dmesg:
Code:
device eth0 entered promiscuous mode
  alloc irq_desc for 65 on node -1
  alloc kstat_irqs on node -1
bnx2 0000:0b:00.0: irq 65 for MSI/MSI-X
  alloc irq_desc for 66 on node -1
  alloc kstat_irqs on node -1
bnx2 0000:0b:00.0: irq 66 for MSI/MSI-X
  alloc irq_desc for 67 on node -1
  alloc kstat_irqs on node -1
bnx2 0000:0b:00.0: irq 67 for MSI/MSI-X
  alloc irq_desc for 68 on node -1
  alloc kstat_irqs on node -1
bnx2 0000:0b:00.0: irq 68 for MSI/MSI-X
  alloc irq_desc for 69 on node -1
  alloc kstat_irqs on node -1
bnx2 0000:0b:00.0: irq 69 for MSI/MSI-X
  alloc irq_desc for 70 on node -1
  alloc kstat_irqs on node -1
bnx2 0000:0b:00.0: irq 70 for MSI/MSI-X
  alloc irq_desc for 71 on node -1
  alloc kstat_irqs on node -1
bnx2 0000:0b:00.0: irq 71 for MSI/MSI-X
  alloc irq_desc for 72 on node -1
  alloc kstat_irqs on node -1
bnx2 0000:0b:00.0: irq 72 for MSI/MSI-X
  alloc irq_desc for 73 on node -1
  alloc kstat_irqs on node -1
bnx2 0000:0b:00.0: irq 73 for MSI/MSI-X
bnx2 0000:0b:00.0: eth0: using MSIX
bnx2 0000:0b:00.0: eth0: NIC Copper Link is Up, 100 Mbps full duplex, receive & transmit flow control ON
vmbr0: port 1(eth0) entering forwarding state
------------[ cut here ]------------
WARNING: at drivers/pci/intel-iommu.c:2775 intel_unmap_page+0x15f/0x180() (Not tainted)
Hardware name: System x3550 M2 -[794622G]-
Driver unmaps unmatched page at PFN 0
Modules linked in: ext4 jbd2 ipmi_si ipmi_devintf ipmi_msghandler snd_pcsp cdc_ether usbnet i7core_edac i2c_i801 snd_pcm snd_timer snd soundcore snd_page_alloc shpchp mii i2c_core ioatdma edac_core tpm_tis tpm serio_raw tpm_bios ext3 jbd mbcache sg ata_generic mpt2sas pata_acpi scsi_transport_sas ata_piix igb megaraid_sas bnx2 raid_class dca [last unloaded: scsi_wait_scan]
Pid: 0, comm: swapper veid: 0 Not tainted 2.6.32-18-pve #1
Call Trace:
 <IRQ>  [<ffffffff8106d228>] ? warn_slowpath_common+0x88/0xc0
 [<ffffffff8106d316>] ? warn_slowpath_fmt+0x46/0x50
 [<ffffffff812a6cdb>] ? find_iova+0x5b/0x90
 [<ffffffff812aaf1f>] ? intel_unmap_page+0x15f/0x180
 [<ffffffffa001ba65>] ? bnx2_poll_work+0x155/0x11d0 [bnx2]
 [<ffffffff810eac20>] ? handle_IRQ_event+0x60/0x170
 [<ffffffff810ed308>] ? handle_edge_irq+0x98/0x180
 [<ffffffff8111dd86>] ? __perf_event_task_sched_out+0x26/0x2a0
 [<ffffffffa001cb1d>] ? bnx2_poll_msix+0x3d/0xd0 [bnx2]
 [<ffffffff81457aa3>] ? net_rx_action+0x103/0x2e0
 [<ffffffff81076063>] ? __do_softirq+0x103/0x260
 [<ffffffff8100c2ac>] ? call_softirq+0x1c/0x30
 [<ffffffff8100def5>] ? do_softirq+0x65/0xa0
 [<ffffffff81075e8d>] ? irq_exit+0xcd/0xd0
 [<ffffffff81524935>] ? do_IRQ+0x75/0xf0
 [<ffffffff8100ba93>] ? ret_from_intr+0x0/0x11
 <EOI>  [<ffffffff812d0e7e>] ? intel_idle+0xde/0x170
 [<ffffffff812d0e61>] ? intel_idle+0xc1/0x170
 [<ffffffff8109deed>] ? sched_clock_cpu+0xcd/0x110
 [<ffffffff81420387>] ? cpuidle_idle_call+0xa7/0x140
 [<ffffffff8100a023>] ? cpu_idle+0xb3/0x110
 [<ffffffff81503e35>] ? rest_init+0x85/0x90
 [<ffffffff81c2ef6e>] ? start_kernel+0x412/0x41e
 [<ffffffff81c2e33a>] ? x86_64_start_reservations+0x125/0x129
 [<ffffffff81c2e438>] ? x86_64_start_kernel+0xfa/0x109
---[ end trace 6cdb8efebe033148 ]---

Installed 2.3 and updated via apt:
Code:
# pveversion -v
pve-manager: 2.3-13 (pve-manager/2.3/7946f1f1)
running kernel: 2.6.32-18-pve
proxmox-ve-2.6.32: 2.3-88
pve-kernel-2.6.32-18-pve: 2.6.32-88
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-4
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-36
qemu-server: 2.3-18
pve-firmware: 1.0-21
libpve-common-perl: 1.0-48
libpve-access-control: 1.0-26
libpve-storage-perl: 2.3-6
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.4-8
ksm-control-daemon: 1.1-1

PS: Falling back to 2.6.32-17-pve helped me.
 
Re: Similar problem


I would presume that this issue will be fixed when the RHEL 6.4 kernel flows downstream (through OpenVZ) to Proxmox. I'm still running on 2.6.32-12-pve, but I'll check -17 to see if it fixes the issue for me. If so, perhaps it will be possible to see what caused the regression from -17 to -18?
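
Booting back into an already-installed kernel like -17 is just a GRUB default change; a minimal sketch for the Debian GRUB 2 setup (entry indices depend on your grub.cfg):
Code:
# list boot entries as GRUB sees them (numbering starts at 0)
grep ^menuentry /boot/grub/grub.cfg | cut -d "'" -f2

# /etc/default/grub -- set the index (or exact title) of the 2.6.32-17-pve entry
GRUB_DEFAULT=2
update-grub
reboot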
 
Re: Similar problem

We had intel_iommu=on set in /etc/default/grub per the PCI passthrough docs, and it was basically stable on pve-16. Upgrading to pve-19 caused kernel oopses, high load averages for no apparent reason, and panics. Removing the option returned the system to normal. The system is a ProLiant DL380 G7.
 
