Error in Proxmox Virtual Environment 5.0-31

Rais Ahmed

Can anyone help me figure this out? I am receiving the errors below. Please help.

Sep 05 12:29:38 Server1 kernel: ------------[ cut here ]------------
Sep 05 12:29:38 Server1 kernel: WARNING: CPU: 64 PID: 70551 at net/core/dev.c:2576 skb_warn_bad_offload+0xd1/0x120
Sep 05 12:29:38 Server1 kernel: bond0: caps=(0x000000001fb97ba9, 0x0000000000000000) len=1671 data_len=1629 gso_size=1480 gso_type=6 ip_summed=0
Sep 05 12:29:38 Server1 kernel: Modules linked in: ip_set ip6table_filter ip6_tables ses enclosure dm_round_robin joydev input_leds hid_generic usbmouse usbkbd usbhid hid binfmt_misc iptable_filter 8021q garp mrp bonding softdog nfnetlink_log nfnetlink nls_iso8859_1 dm_thin_pool dm_persistent_data dm_bio_prison dm_bufio libcrc32c ipmi_ssif intel_rapl sb_edac edac_core x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm mgag200 ttm drm_kms_helper drm i2c_algo_bit irqbypass fb_sys_fops crct10dif_pclmul crc32_pclmul syscopyarea ghash_clmulni_intel sysfillrect pcbc sysimgblt aesni_intel hpilo snd_pcm aes_x86_64 snd_timer crypto_simd glue_helper snd cryptd intel_cstate soundcore intel_rapl_perf shpchp ioatdma lpc_ich pcspkr ipmi_si ipmi_devintf ipmi_msghandler wmi acpi_power_meter mac_hid dm_multipath scsi_dh_rdac
Sep 05 12:29:38 Server1 kernel: scsi_dh_emc scsi_dh_alua vhost_net vhost macvtap macvlan ib_iser rdma_cm iw_cm ib_cm ib_core configfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi sunrpc ip_tables x_tables autofs4 btrfs xor raid6_pq i2c_i801 ixgbe(O) hpsa dca scsi_transport_sas ptp pps_core fjes
Sep 05 12:29:38 Server1 kernel: CPU: 64 PID: 70551 Comm: vhost-70429 Tainted: G W O 4.10.17-3-pve #1
Sep 05 12:29:38 Server1 kernel: Hardware name: HP ProLiant BL460c Gen9, BIOS I36 09/12/2016
Sep 05 12:29:38 Server1 kernel: Call Trace:
Sep 05 12:29:38 Server1 kernel: dump_stack+0x63/0x81
Sep 05 12:29:38 Server1 kernel: __warn+0xcb/0xf0
Sep 05 12:29:38 Server1 kernel: warn_slowpath_fmt+0x5f/0x80
Sep 05 12:29:38 Server1 kernel: skb_warn_bad_offload+0xd1/0x120
Sep 05 12:29:38 Server1 kernel: __skb_gso_segment+0x181/0x190
Sep 05 12:29:38 Server1 kernel: validate_xmit_skb+0x14f/0x2a0
Sep 05 12:29:38 Server1 kernel: __dev_queue_xmit+0x31a/0x680
Sep 05 12:29:38 Server1 kernel: dev_queue_xmit+0x10/0x20
Sep 05 12:29:38 Server1 kernel: vlan_dev_hard_start_xmit+0x98/0x120 [8021q]
Sep 05 12:29:38 Server1 kernel: dev_hard_start_xmit+0xa3/0x1f0
Sep 05 12:29:38 Server1 kernel: __dev_queue_xmit+0x5ae/0x680
Sep 05 12:29:38 Server1 kernel: dev_queue_xmit+0x10/0x20
Sep 05 12:29:38 Server1 kernel: br_dev_queue_push_xmit+0x7e/0x160
Sep 05 12:29:38 Server1 kernel: br_forward_finish+0x3d/0xb0
Sep 05 12:29:38 Server1 kernel: ? br_fdb_external_learn_del+0x120/0x120
Sep 05 12:29:38 Server1 kernel: __br_forward+0x14a/0x1e0
Sep 05 12:29:38 Server1 kernel: ? br_dev_queue_push_xmit+0x160/0x160
Sep 05 12:29:38 Server1 kernel: br_forward+0xa3/0xb0
Sep 05 12:29:38 Server1 kernel: br_handle_frame_finish+0x263/0x530
Sep 05 12:29:38 Server1 kernel: br_handle_frame+0x174/0x2d0
Sep 05 12:29:38 Server1 kernel: ? br_pass_frame_up+0x150/0x150
Sep 05 12:29:38 Server1 kernel: __netif_receive_skb_core+0x31d/0xa40
Sep 05 12:29:38 Server1 kernel: ? zerocopy_sg_from_iter+0xa7/0x1f0
Sep 05 12:29:38 Server1 kernel: __netif_receive_skb+0x18/0x60
Sep 05 12:29:38 Server1 kernel: netif_receive_skb_internal+0x32/0xa0
Sep 05 12:29:38 Server1 kernel: netif_receive_skb+0x1c/0x70
Sep 05 12:29:38 Server1 kernel: tun_get_user+0x425/0x800
Sep 05 12:29:38 Server1 kernel: tun_sendmsg+0x51/0x70
Sep 05 12:29:38 Server1 kernel: handle_tx+0x335/0x590 [vhost_net]
Sep 05 12:29:38 Server1 kernel: handle_tx_kick+0x15/0x20 [vhost_net]
Sep 05 12:29:38 Server1 kernel: vhost_worker+0x9e/0xf0 [vhost]
Sep 05 12:29:38 Server1 kernel: kthread+0x109/0x140
Sep 05 12:29:38 Server1 kernel: ? vhost_dev_init+0x270/0x270 [vhost]
Sep 05 12:29:38 Server1 kernel: ? kthread_create_on_node+0x60/0x60
Sep 05 12:29:38 Server1 kernel: ret_from_fork+0x2c/0x40
Sep 05 12:29:38 Server1 kernel: ---[ end trace cb9a5e89a46cbf2b ]---
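
The warning points at a bad GSO offload on bond0. For reference, the offload feature flags the trace refers to can be inspected with ethtool; this is only a read-only diagnostic sketch, and eno1 is a placeholder for one of the ixgbe slave NICs, not a name from the log.

Code:
# show offload feature flags on the bond and on one slave NIC
ethtool -k bond0
ethtool -k eno1
# individual offloads can be toggled per feature if needed, e.g.:
# ethtool -K bond0 gso off tso off

Whether disabling any of these actually helps in this setup is a separate question.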
 
I have a cluster environment with dozens of VMs deployed across the cluster hosts. What would be the best way to update it without downtime?
 
Minor version upgrades should be possible without downtime, e.g.:

Code:
apt update
apt full-upgrade

If you are worried, and you already have a cluster set up, you can of course migrate all VMs away from the server you are updating and then move them back afterward. The cluster itself is compatible between minor versions, so your 5.4 machines will stay in the cluster, even if not all are updated at once.
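
A minimal sketch of draining a node before the upgrade via online migration, assuming shared storage so live migration is possible; the target node name "pve2" is a placeholder:

Code:
# migrate all running VMs from this node to pve2
for vmid in $(qm list | awk 'NR>1 && $3=="running" {print $1}'); do
    qm migrate "$vmid" pve2 --online
done
# after the upgrade, migrate them back the same way from the other node

The same bulk migration can also be done from the web GUI per node.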
 
