Host kernel messages during I/O on a VM

adamb

I am getting a number of host kernel messages when there is I/O on a VM. The VM doesn't seem to be affected and continues to operate without any issues. It just has me concerned.

The cluster is using an HP P2000 over SAS with multipathing. As far as I can tell, multipathing is working correctly: I can pull one of the two paths and there is no interruption in I/O. All seems to be well other than this issue.

Code:
Oct  7 15:35:46 - 15:35:49 proxmox1 kernel: [call trace truncated; frames reference warn_slowpath_common, warn_slowpath_fmt, skb_gso_segment, dev_hard_start_xmit and sch_direct_xmit]
Oct  7 15:35:49 proxmox1 rgmanager[7403]: [pvevm] VM 100 is running
Oct  7 15:35:50 - 15:35:58 proxmox1 kernel: [call trace truncated; frames reference warn_slowpath_common, warn_slowpath_fmt, skb_gso_segment, dev_hard_start_xmit, sch_direct_xmit, dev_queue_xmit, br_dev_queue_push_xmit, br_forward, nf_hook_slow and br_handle_frame]
Oct  7 15:35:59 proxmox1 rgmanager[7430]: [pvevm] VM 100 is running
Oct  7 15:36:02 proxmox1 kernel: [call trace truncated; frames reference warn_slowpath_fmt+0x46/0x50, skb_gso_segment+0x220/0x310, dev_hard_start_xmit and sch_direct_xmit+0x16a/0x1d0]
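
The frames above get clipped in syslog, so for the record, the full warning text and call trace can usually be pulled straight from the kernel ring buffer, e.g. (just a sketch):

Code:
# show the full WARNING block with surrounding context
dmesg | grep -B 5 -A 25 'WARNING'

# or follow the kernel log live while generating I/O on the VM
tail -f /var/log/kern.log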


Code:
root@proxmox1:/var/log# pveversion -v
proxmox-ve-2.6.32: 3.1-113 (running kernel: 2.6.32-25-pve)
pve-manager: 3.1-17 (running version: 3.1-17/eb90521d)
pve-kernel-2.6.32-25-pve: 2.6.32-113
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-7
qemu-server: 3.1-5
pve-firmware: 1.0-23
libpve-common-perl: 3.0-6
libpve-access-control: 3.0-6
libpve-storage-perl: 3.0-13
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.0-2

Code:
defaults {
        polling_interval        2
        path_selector           "round-robin 0"
        path_grouping_policy    multibus
        getuid_callout          "/lib/udev/scsi_id -g -u -d /dev/%n"
        rr_min_io               100
        failback                immediate
        no_path_retry           queue
}


multipaths {
  multipath {
        wwid "3600c0ff00019cbcf36d34e5201000000"
        alias mpath0
  }
}


blacklist {
        wwid *
}


blacklist_exceptions {
        wwid "3600c0ff00019cbcf36d34e5201000000"
}
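
For completeness, after editing /etc/multipath.conf the maps can be reloaded and re-checked roughly like this (the init-script name may differ depending on the Debian base):

Code:
# re-read /etc/multipath.conf and rebuild the multipath maps
multipath -r

# or reload through the daemon's init script (name may vary)
/etc/init.d/multipath-tools reload

# confirm both paths are still active/ready afterwards
multipath -ll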

Code:
root@proxmox1:/var/log# multipath -ll
mpath0 (3600c0ff00019cbcf36d34e5201000000) dm-1 HP,P2000 G3 SAS
size=4.4T features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=130 status=active
| `- 1:0:1:1 sdc 8:32 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  `- 1:0:0:1 sdb 8:16 active ready running
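
To double-check failover, the path state can also be watched while a cable is pulled, something along these lines (just a sketch):

Code:
# refresh the path view every second while one SAS cable is pulled
watch -n 1 multipath -ll

# or ask the running daemon directly
echo 'show paths' | multipathd -k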
 
I went to open a ticket on this one, as I have a subscription, but it doesn't look like my account has been approved yet; I only have access to "General Requests".
 
UPDATE

This seems to be related to the network card. It is an HP 530T, which is based on a Broadcom chipset and uses the bnx2x driver. I grabbed the latest driver from HP's site and still had the issue (their build is older than what Proxmox currently ships). I then grabbed the latest from Broadcom's site and the issue seems to be resolved.

bnx2x driver versions:
Proxmox - 1.74.22
HP - 1.74.20
Broadcom - 1.76.54

I only come across these errors when moving data to the VM. If I move data to the host itself over the same card, there are no issues.
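
In case anyone wonders how a vendor driver like that ends up on a Proxmox host: Broadcom ships it as source, so it is a standard out-of-tree module build, roughly like this (a sketch; the tarball name is a placeholder for whatever version Broadcom currently offers):

Code:
# headers for the running PVE kernel are needed to build the module
apt-get install build-essential pve-headers-$(uname -r)

# unpack and build the Broadcom source package (file name is a placeholder)
tar xzf netxtreme2-<version>.tar.gz
cd netxtreme2-<version>
make && make install

# make sure the new module is picked up at boot and reload it now (or reboot)
update-initramfs -u -k $(uname -r)
rmmod bnx2x && modprobe bnx2x
modinfo bnx2x | grep ^version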
 
Hi,

I'm having the same problem you had; my cards are NetXtreme II 10 Gigabit with the 57711E chipset and the bnx2x driver.

I can see the driver from the vendor, but there is only source or an RPM. How did you install the driver on your Proxmox machine?
 
Hi,

I'm having the same problem you had; my cards are NetXtreme II 10 Gigabit with the 57711E chipset and the bnx2x driver.

I can see the driver from the vendor, but there is only source or an RPM. How did you install the driver on your Proxmox machine?

Proxmox VE already uses 1.76.54 (bnx2x).
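
You can check which bnx2x version the running kernel actually ships before bothering with a manual build, for example:

Code:
# version of the bnx2x module bundled with the installed kernel
modinfo bnx2x | grep ^version

# driver and firmware actually bound to the interface (replace eth0 with yours)
ethtool -i eth0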
 
Code:
root@prox:~# pveversion -v
pve-manager: 3.0-23 (pve-manager/3.0/957f0862)
running kernel: 2.6.32-20-pve
proxmox-ve-2.6.32: 3.0-100
pve-kernel-2.6.32-20-pve: 2.6.32-100
lvm2: 2.02.95-pve3
clvm: 2.02.95-pve3
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-4
qemu-server: 3.0-20
pve-firmware: 1.0-23
libpve-common-perl: 3.0-4
libpve-access-control: 3.0-4
libpve-storage-perl: 3.0-8
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-13
ksm-control-daemon: 1.1-1

Maybe it's time to update to 3.1?
 
Code:
root@prox:~# pveversion -v
pve-manager: 3.0-23 (pve-manager/3.0/957f0862)
running kernel: 2.6.32-20-pve
proxmox-ve-2.6.32: 3.0-100
pve-kernel-2.6.32-20-pve: 2.6.32-100
lvm2: 2.02.95-pve3
clvm: 2.02.95-pve3
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-4
qemu-server: 3.0-20
pve-firmware: 1.0-23
libpve-common-perl: 3.0-4
libpve-access-control: 3.0-4
libpve-storage-perl: 3.0-8
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-13
ksm-control-daemon: 1.1-1

Maybe it's time to update to 3.1?

The update to 3.1 should fix you up. I have been running strong on 3.1 for a solid few weeks now.
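
Assuming the Proxmox apt repository is already configured, the 3.0 to 3.1 jump is just the usual dist-upgrade (a sketch; worth reading the 3.1 release notes first):

Code:
apt-get update
apt-get dist-upgrade
# reboot afterwards so the newer pve kernel (with the newer bnx2x) is running
reboot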
 
