Proxmox error log

Rais Ahmed

Hello guys,
I am receiving the attached error logs on one of my Proxmox hosts. I checked it by migrating all the VMs to a separate host, and the same errors were generated on that host too. So now I know that these logs are caused by only one VM. I am not able to diagnose further why this is happening. Please help.
 

Attachments

  • proxmox error.txt
Thank you for your reply. I have found out that the logs below also relate to this. What are the reasons and the solution for this? Please help.

Aug 7 12:08:21 server1 qm[71978]: <root@pam> starting task UPID:TES-HS03:0001192F:107E79E0:5B694565:qmstart:108:root@pam:
Aug 7 12:08:21 server1 qm[71983]: start VM 108: UPID:TES-HS03:0001192F:107E79E0:5B694565:qmstart:108:root@pam:
Aug 7 12:08:21 server1 systemd[1]: Started 108.scope.
Aug 7 12:08:21 server1 systemd-udevd[72021]: Could not generate persistent MAC address for tap108i0: No such file or directory
Aug 7 12:08:22 server1 kernel: [2767128.520204] device tap108i0 entered promiscuous mode
Aug 7 12:08:22 server1 kernel: [2767128.531425] vmbr103: port 3(tap108i0) entered blocking state
Aug 7 12:08:22 server1 kernel: [2767128.531426] vmbr103: port 3(tap108i0) entered disabled state
Aug 7 12:08:22 server1 kernel: [2767128.531588] vmbr103: port 3(tap108i0) entered blocking state
Aug 7 12:08:22 server1 kernel: [2767128.531606] vmbr103: port 3(tap108i0) entered forwarding state
 
Aug 7 12:08:21 server1 qm[71978]: <root@pam> starting task UPID:TES-HS03:0001192F:107E79E0:5B694565:qmstart:108:root@pam:
Aug 7 12:08:21 server1 qm[71983]: start VM 108: UPID:TES-HS03:0001192F:107E79E0:5B694565:qmstart:108:root@pam:
Aug 7 12:08:21 server1 systemd[1]: Started 108.scope.
Starting the VM.

Aug 7 12:08:21 server1 systemd-udevd[72021]: Could not generate persistent MAC address for tap108i0: No such file or directory
Aug 7 12:08:22 server1 kernel: [2767128.520204] device tap108i0 entered promiscuous mode
Aug 7 12:08:22 server1 kernel: [2767128.531425] vmbr103: port 3(tap108i0) entered blocking state
Aug 7 12:08:22 server1 kernel: [2767128.531426] vmbr103: port 3(tap108i0) entered disabled state
Aug 7 12:08:22 server1 kernel: [2767128.531588] vmbr103: port 3(tap108i0) entered blocking state
Aug 7 12:08:22 server1 kernel: [2767128.531606] vmbr103: port 3(tap108i0) entered forwarding state
A new interface is added on VM start and is set into promiscuous mode.
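If you want to verify what those kernel messages describe, you can look at the bridge port and the tap device once the VM is running (just a check; tap108i0 and vmbr103 are taken from the log above):

# bridge link show | grep tap108i0    # the port should be listed with "state forwarding"
# ip -d link show tap108i0            # the detailed output includes a "promiscuity" counter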

The lines will show up on all servers. What's the difference between the VMs (config, setup)?
 
I have a 3-host cluster, in which only one VM is triggering these events.
The link you have provided shows the following resolution, but I am scared to apply it because I have no idea what the outcome will be. These hosts are in production.

Resolution
Disable Large Receive Offload (LRO) and/or Generic Receive Offload (GRO).

This can be done during runtime with the following commands:

# ethtool -k ethX            # lowercase -k: show the current offload settings
# ethtool -K ethX lro off    # uppercase -K: change them; turn Large Receive Offload off
# ethtool -K ethX gro off    # turn Generic Receive Offload off

You may persist these settings across reboot by writing /sbin/ifup-local as described here.

Note: It is incorrect to enable LRO when IP forwarding and/or bridging are in use.
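Note that /sbin/ifup-local is a Red Hat mechanism; Proxmox VE is Debian-based, so one way to persist the change there is a post-up line in /etc/network/interfaces. A rough sketch only, assuming the physical uplink of the bridge is eno1 (replace it with the real NIC name on the host):

# excerpt from /etc/network/interfaces - only the post-up line is new,
# the rest of the existing vmbr103 stanza stays as it is
iface vmbr103 inet static
        bridge_ports eno1
        post-up /sbin/ethtool -K eno1 gro off lro off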
 
Depending on the workload, the network throughput might be reduced.
Large Receive Offload (LRO) [1] and/or Generic Receive Offload (GRO) [1]
[1] https://en.wikipedia.org/wiki/Large_receive_offload
Some cards or drivers may also not support LRO/GRO.
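Whether the driver allows changing them at all can be checked up front; features that ethtool reports as "[fixed]" cannot be toggled (eno1 again stands for the physical NIC):

# ethtool -k eno1 | grep receive-offload    # lists generic-/large-receive-offload and whether they are [fixed]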

You should also update PVE. Kernel 4.10.17-3-pve is an old kernel; the newest is 4.15.18.
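For the update itself, the usual way is via apt; a sketch, to be done one node at a time on a production cluster (migrate the VMs away first):

# pveversion -v        # shows the installed PVE packages and kernel versions
# apt update
# apt dist-upgrade     # pulls in the newer pve-kernel packages
# reboot               # only needed so the node actually boots the new kernel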

Please post the vmid.conf of the VM in question.
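(It can be dumped with qm or read directly from the cluster filesystem; 108 is the VMID from the logs above.)

# qm config 108
# cat /etc/pve/qemu-server/108.conf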
 
Thanks for your reply. Is there any option that affects only that particular VM rather than the host server? What will be the impact of updating the cluster nodes as you said? Does this update require restarting the nodes afterwards?

Here is the vmid.conf file:

bootdisk: virtio0
cores: 2
cpu: host
ide2: none,media=cdrom
memory: 4096
name: vm108
net0: virtio=7A:52:D7:BA:D3:E4,bridge=vmbr103,queues=4
numa: 1
onboot: 1
ostype: l26
scsihw: virtio-scsi-pci
smbios1: uuid=5edcf4e3-587e-4a8d-b8b0-a179b6a9ec3c
sockets: 2
virtio0: cluster1-osdata:vm-108-disk-1,size=40G
 
Thanks for your reply. Is there any option that affects only that particular VM rather than the host server? What will be the impact of updating the cluster nodes as you said? Does this update require restarting the nodes afterwards?
The host complains about wrongly offloaded packets, so the setting has to be made on the host. ethtool will disable the features on the NIC; no reboot should be needed.

net0: virtio=7A:52:D7:BA:D3:E4,bridge=vmbr103,queues=4
Disable 'queues' on the NIC of the VM; the issue could be triggered by this setting. The setup of multiqueue also needs extra handling inside the VM, check the link for more information.
https://pve.proxmox.com/pve-docs/chapter-qm.html#qm_network_device
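A sketch of what that could look like, with the VMID and MAC taken from the config above (restart the VM afterwards so the new NIC settings are picked up):

# qm set 108 --net0 virtio=7A:52:D7:BA:D3:E4,bridge=vmbr103    # same NIC, just without queues=4

If you keep multiqueue instead, the queues also have to be activated inside the guest, e.g. (the interface name ens18 is guest-specific):

# ethtool -L ens18 combined 4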
 
