VM doesn't start on Proxmox 6 - timeout waiting on systemd

I have the same issue in Proxmox 6.1-3.

Since pve-kernel 5.0.15-18, my OPNsense firewall VM (which runs FreeBSD) randomly freezes/hangs and stops responding every 1 or 2 days.

It becomes impossible to access the console or shut down the VM; the only option is to stop it.
In this state the guest VM's memory is still allocated, but CPU usage is 0%.

Code:
#qm shutdown vmid
VM quit/powerdown failed

#qm stop vmid
VM quit/powerdown failed - terminating now with SIGTERM
VM still running - terminating now with SIGKILL

Once you stop it, you cannot restart it unless you clone it or reboot the host.
Code:
#qm start vmid
timeout waiting on systemd
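
When qm start reports "timeout waiting on systemd", the stale systemd scope of the killed QEMU process is usually still registered under qemu.slice (see the systemctl output further down in this thread). A workaround often suggested for this symptom is to clear that leftover unit by hand; a sketch (replace vmid with the VM's ID), with the caveat that if the kvm process is a zombie stuck in the kernel, only a host reboot helps:

Code:
# check whether the dead VM's scope unit is still around
systemctl status vmid.scope
# stop it, clear any failed state, then retry the start
systemctl stop vmid.scope
systemctl reset-failed vmid.scope
qm start vmid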

The following fix for the freezes themselves has sometimes worked:

Creating the line
Code:
options vhost_net experimental_zcopytx=0
in the file
Code:
/etc/modprobe.d/vhost-net.conf
and then running

Code:
update-initramfs -u
update-grub
reboot
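
After the reboot, whether the option took effect can be verified from sysfs (the same check appears again later in this thread); 0 means zero-copy TX is disabled:

Code:
cat /sys/module/vhost_net/parameters/experimental_zcopytx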

But since I updated to pve-kernel 5.3.10-1, it freezes again every 4 hours and the fix no longer works.

None of the other Linux VMs are affected.

Is there an issue with FreeBSD-based OSes (pfSense, OPNsense, FreeNAS, ...) on Proxmox?

# pveversion -v

Code:
proxmox-ve: 6.1-2 (running kernel: 5.3.10-1-pve)
pve-manager: 6.1-3 (running version: 6.1-3/37248ce6)
pve-kernel-5.3: 6.0-12
pve-kernel-helper: 6.0-12
pve-kernel-5.0: 6.0-11
pve-kernel-5.3.10-1-pve: 5.3.10-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.21-4-pve: 5.0.21-9
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-5
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-9
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.1-2
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-1
pve-cluster: 6.1-2
pve-container: 3.0-14
pve-docs: 6.1-3
pve-edk2-firmware: 2.20191002-1
pve-firewall: 4.0-9
pve-firmware: 3.0-4
pve-ha-manager: 3.0-8
pve-i18n: 2.0-3
pve-qemu-kvm: 4.1.1-2
pve-xtermjs: 3.13.2-1
qemu-server: 6.1-2
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2

Here is what I see in syslog when the freeze happens. VMIDs 101 and 102 are two Debian 8.3 VMs.

Code:
Dec  8 13:04:18 proxmox kernel: [85040.060348] usercopy: Kernel memory exposure attempt detected from SLUB object 'eventpoll_pwq(1505:pve-container@101.service)' (offset 37, size 80)!
Dec  8 13:04:18 proxmox kernel: [85040.060378] kernel BUG at mm/usercopy.c:102!
Dec  8 13:04:18 proxmox kernel: [85040.060400] CPU: 1 PID: 20720 Comm: vhost-20635 Tainted: P        W  O      5.3.10-1-pve #1
Dec  8 13:04:18 proxmox kernel: [85040.060429] Code: 0f 45 c6 51 48 89 f9 48 c7 c2 e5 e2 72 bc 41 52 48 c7 c6 26 c2 71 bc 48 c7 c7 b0 e3 72 bc 48 0f 45 f2 48 89 c2 e8 8d 67 e5 ff <0f> 0b 4d 89 e0 31 c9 44 89 ea 31 f6 48 c7 c7 19 e3 72 bc e8 6e ff
Dec  8 13:04:18 proxmox kernel: [85040.060452] RDX: 0000000000000000 RSI: ffff95c533296448 RDI: ffff95c533296448
Dec  8 13:04:18 proxmox kernel: [85040.060469] R13: 0000000000000001 R14: ffff95c4e84d8075 R15: 0000000000000000
Dec  8 13:04:18 proxmox kernel: [85040.060487] CR2: 00007f7c4ac45488 CR3: 00000003285cc000 CR4: 00000000000406e0
Dec  8 13:04:18 proxmox kernel: [85040.060514]  __check_object_size+0x16b/0x17c
Dec  8 13:04:18 proxmox kernel: [85040.060537]  ? skb_kill_datagram+0x70/0x70
Dec  8 13:04:18 proxmox kernel: [85040.060559]  tun_recvmsg+0x76/0x110
Dec  8 13:04:18 proxmox kernel: [85040.060594]  vhost_worker+0xba/0x110 [vhost]
Dec  8 13:04:18 proxmox kernel: [85040.060615]  ? __kthread_parkme+0x70/0x70
Dec  8 13:04:18 proxmox kernel: [85040.060629] Modules linked in: tcp_diag inet_diag veth ebtable_filter ebtables ip6table_raw ip6t_REJECT nf_reject_ipv6 ip6table_filter ip6_tables iptable_raw xt_mac ipt_REJECT nf_reject_ipv4 xt_mark xt_NFLOG xt_limit xt_set xt_physdev xt_addrtype xt_comment xt_multiport xt_conntrack nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xt_tcpudp ip_set_hash_net ip_set arc4 md4 cmac nls_utf8 cifs ccm fscache iptable_filter bpfilter softdog nfnetlink_log nfnetlink amdgpu chash amd_iommu_v2 gpu_sched amd_freq_sensitivity edac_mce_amd kvm_amd ccp kvm irqbypass radeon ttm crct10dif_pclmul crc32_pclmul cypress_m8 drm_kms_helper ghash_clmulni_intel drm usbserial aesni_intel fb_sys_fops syscopyarea sysfillrect k10temp aes_x86_64 sysimgblt crypto_simd cryptd glue_helper fam15h_power mac_hid pcspkr zfs(PO) zunicode(PO) zlua(PO) zcommon(PO) znvpair(PO) zavl(PO) icp(PO) spl(O) vhost_net vhost tap ib_iser rdma_cm iw_cm ib_cm ib_core iscsi_tcp sunrpc libiscsi_tcp libiscsi scsi_transport_iscsi ip_tables x_tables
Dec  8 13:04:18 proxmox kernel: [85040.060766] ---[ end trace 846f4459fd1deb5d ]---
Dec  8 13:04:18 proxmox kernel: [85040.060779] Code: 0f 45 c6 51 48 89 f9 48 c7 c2 e5 e2 72 bc 41 52 48 c7 c6 26 c2 71 bc 48 c7 c7 b0 e3 72 bc 48 0f 45 f2 48 89 c2 e8 8d 67 e5 ff <0f> 0b 4d 89 e0 31 c9 44 89 ea 31 f6 48 c7 c7 19 e3 72 bc e8 6e ff
Dec  8 13:04:18 proxmox kernel: [85040.060801] RDX: 0000000000000000 RSI: ffff95c533296448 RDI: ffff95c533296448
Dec  8 13:04:18 proxmox kernel: [85040.060818] R13: 0000000000000001 R14: ffff95c4e84d8075 R15: 0000000000000000
Dec  8 13:04:18 proxmox kernel: [85040.060836] CR2: 00007f7c4ac45488 CR3: 00000003285cc000 CR4: 00000000000406e0
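
For anyone collecting the same data: if the host had to be rebooted and the journal is persistent, the full oops can be pulled from the previous boot with standard journalctl options, for example:

Code:
# kernel messages from the previous boot, around the usercopy BUG
journalctl -k -b -1 | grep -B 5 -A 40 usercopy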


The same kernel bug at another time:

Code:
Dec  8 21:24:02 proxmox systemd[1]: Started Proxmox VE replication runner.
Dec  8 21:25:00 proxmox systemd[1]: Starting Proxmox VE replication runner...
Dec  8 21:25:02 proxmox systemd[1]: pvesr.service: Succeeded.
Dec  8 21:25:02 proxmox systemd[1]: Started Proxmox VE replication runner.
Dec  8 21:25:37 proxmox kernel: [24904.860975] usercopy: Kernel memory exposure attempt detected from SLUB object 'anon_vma_chain(990:pve-firewall.service)' (offset 22, size 95)!
Dec  8 21:25:37 proxmox kernel: [24904.861004] kernel BUG at mm/usercopy.c:99!
Dec  8 21:25:37 proxmox kernel: [24904.861018] invalid opcode: 0000 [#1] SMP NOPTI
Dec  8 21:25:37 proxmox kernel: [24904.861033] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./AM1B-ITX, BIOS P1.60 04/29/2015
Dec  8 21:25:37 proxmox kernel: [24904.861056] Code: 0f 45 c6 51 48 89 f9 48 c7 c2 d5 c2 56 98 41 52 48 c7 c6 7d 94 55 98 48 c7 c7 a0 c3 56 98 48 0f 45 f2 48 89 c2 e8 1d 7e e4 ff <0f> 0b 4d 89 e0 31 c9 44 89 ea 31 f6 48 c7 c7 09 c3 56 98 e8 6e ff
Dec  8 21:25:37 proxmox kernel: [24904.861073] RAX: 0000000000000083 RBX: ffff92eafc4f0016 RCX: 0000000000000000
Dec  8 21:25:37 proxmox kernel: [24904.861084] RBP: ffffb2464731bb60 R08: 000000000000061e R09: 0720072007200720
Dec  8 21:25:37 proxmox kernel: [24904.861095] R13: 0000000000000001 R14: ffff92eafc4f0075 R15: 0000000000000000
Dec  8 21:25:37 proxmox kernel: [24904.861109] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Dec  8 21:25:37 proxmox kernel: [24904.861119] Call Trace:
Dec  8 21:25:37 proxmox kernel: [24904.861139]  __check_object_size+0x16b/0x17c
Dec  8 21:25:37 proxmox kernel: [24904.861160]  ? skb_kill_datagram+0x70/0x70
Dec  8 21:25:37 proxmox kernel: [24904.861175]  tun_do_read+0x4e2/0x6d0
Dec  8 21:25:37 proxmox kernel: [24904.861202]  handle_rx+0x5d4/0xa20 [vhost_net]
Dec  8 21:25:37 proxmox kernel: [24904.861232]  kthread+0x120/0x140
Dec  8 21:25:37 proxmox kernel: [24904.861255]  ret_from_fork+0x22/0x40
Dec  8 21:25:37 proxmox kernel: [24904.861405] ---[ end trace 0990a558043d7237 ]---
Dec  8 21:25:37 proxmox kernel: [24904.861429] RSP: 0018:ffffb2464731bb48 EFLAGS: 00010246
Dec  8 21:25:37 proxmox kernel: [24904.861446] RBP: ffffb2464731bb60 R08: 000000000000061e R09: 0720072007200720
Dec  8 21:25:37 proxmox kernel: [24904.861463] FS:  0000000000000000(0000) GS:ffff92edb3380000(0000) knlGS:0000000000000000
Dec  8 21:26:00 proxmox systemd[1]: Starting Proxmox VE replication runner...
Dec  8 21:26:02 proxmox systemd[1]: pvesr.service: Succeeded.
Dec  8 21:26:02 proxmox systemd[1]: Started Proxmox VE replication runner.
Dec  8 21:27:00 proxmox systemd[1]: Starting Proxmox VE replication runner...
Dec  8 21:27:02 proxmox systemd[1]: pvesr.service: Succeeded.
 
Hello,

We have the same issue here, with the versions below, on a freshly installed 10-node cluster with >100 TB of Ceph storage in production:

Same symptoms as @Asr:
It becomes impossible to access the console or shut down the VM; the only option is to stop it.
In this state the guest VM's memory is still allocated, but CPU usage is 0%.

Code:
# pveversion -v
proxmox-ve: 6.1-2 (running kernel: 5.3.13-1-pve)
pve-manager: 6.1-5 (running version: 6.1-5/9bf06119)
pve-kernel-5.3: 6.1-1
pve-kernel-helper: 6.1-1
pve-kernel-5.3.13-1-pve: 5.3.13-1
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph: 14.2.6-pve1
ceph-fuse: 14.2.6-pve1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 1.2.8-1+pve4
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-5
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-9
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.1-3
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-2
pve-cluster: 6.1-2
pve-container: 3.0-16
pve-docs: 6.1-3
pve-edk2-firmware: 2.20191127-1
pve-firewall: 4.0-9
pve-firmware: 3.0-4
pve-ha-manager: 3.0-8
pve-i18n: 2.0-3
pve-qemu-kvm: 4.1.1-2
pve-xtermjs: 3.13.2-1
qemu-server: 6.1-4
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2
 
I waited 10 minutes before testing a new start of the VM; a restore finished within that interval, and after that the VM started like a charm...

Is there a relation between the restore process and the start process?

A strange intermittent issue, even with the 5.3 kernel.

@t.lamprecht, is the Proxmox team actively investigating this critical, blocking issue?
 
If you read back through pages 1 and 2, you'll find the answer to getting the VMs to start.

Hello @stevensedory ,

I've reread pages 1 and 2 twice, and I didn't find anybody reporting the same issue even with:
  • pve-kernel-5.3
  • "experimental_zcopytx=0"
@t.lamprecht Has anybody reported this issue under the conditions defined above?

EDIT:
We use the Proxmox-integrated Ceph storage.
Could intermittent connectivity micro-disruptions create a guest OS device I/O lock and produce the same symptoms as the kernel experimental_zcopytx bug?
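
If that hypothesis holds, logging cluster health during the freezes should show a correlation; a simple sketch using only the standard Ceph CLI (nothing specific to this setup):

Code:
# append a timestamped health report every 10 seconds
while true; do date; ceph health detail; sleep 10; done >> /root/ceph-health-watch.log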
 
I waited 10 minutes before testing a new start of the VM; a restore finished within that interval, and after that the VM started like a charm...

Is there a relation between the restore process and the start process?

A strange intermittent issue, even with the 5.3 kernel.

@t.lamprecht, is the Proxmox team actively investigating this critical, blocking issue?

Hello,

For me it's a random freeze issue, and only FreeBSD OSes are affected.
A known mm/usercopy kernel panic bug freezes the VM.
It still happens only on PVE 6 with kernel 5.3.13-3, not on PVE 5.4 with kernel 4.15.18-24.

Code:
Feb  9 18:02:00 proxmox systemd[1]: pvesr.service: Succeeded.
Feb  9 18:02:00 proxmox systemd[1]: Started Proxmox VE replication runner.
Feb  9 18:02:08 proxmox kernel: [66227.160131] usercopy: Kernel memory exposure attempt detected from SLUB object 'kernfs_node_cache' (offset 79, size 262)!
Feb  9 18:02:08 proxmox kernel: [66227.171810] kernel BUG at mm/usercopy.c:99!
Feb  9 18:02:08 proxmox kernel: [66227.183452] invalid opcode: 0000 [#1] SMP PTI
Feb  9 18:02:09 proxmox kernel: [66227.218761] RIP: 0010:usercopy_abort+0x7a/0x7c
Feb  9 18:02:09 proxmox kernel: [66227.278765] RDX: 0000000000000000 RSI: ffff90ee4fa97448 RDI: ffff90ee4fa97448
Feb  9 18:02:09 proxmox kernel: [66227.314155] R13: 0000000000000001 R14: ffff90ea96ca82ed R15: 0000000000000000
Feb  9 18:02:09 proxmox kernel: [66227.347914] CR2: 00000346b7d5c000 CR3: 00000003b0c36005 CR4: 00000000001626e0
Feb  9 18:02:09 proxmox kernel: [66227.380145]  __check_object_size+0x16b/0x17c
Feb  9 18:02:09 proxmox kernel: [66227.410440]  ? skb_kill_datagram+0x70/0x70
Feb  9 18:02:09 proxmox kernel: [66227.438803]  tun_recvmsg+0x76/0x110
Feb  9 18:02:09 proxmox kernel: [66227.465199]  vhost_worker+0xba/0x110 [vhost]
Feb  9 18:02:09 proxmox kernel: [66227.489680]  ? __kthread_parkme+0x70/0x70
Feb  9 18:02:09 proxmox kernel: [66227.504960]  video
Feb  9 18:02:09 proxmox kernel: [66227.707172] RBP: ffffa3b10890fb60 R08: 000000000000086e R09: 00000000ad55ad55
Feb  9 18:02:09 proxmox kernel: [66227.715421] R10: 0000000000000000 R11: 0000000000000002 R12: 0000000000000106
Feb  9 18:02:09 proxmox kernel: [66227.723594] R13: 0000000000000001 R14: ffff90ea96ca82ed R15: 0000000000000000
Feb  9 18:02:09 proxmox kernel: [66227.731597] FS:  0000000000000000(0000) GS:ffff90ee4fa80000(0000) knlGS:0000000000000000
Feb  9 18:02:09 proxmox kernel: [66227.739537] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Feb  9 18:03:00 proxmox systemd[1]: Starting Proxmox VE replication runner...
Feb  9 18:03:00 proxmox systemd[1]: pvesr.service: Succeeded.

I downgraded to PVE 5.4.
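
For anyone wanting to try the same: if the older kernel packages are still installed (several appear in the pveversion outputs above), the host can simply be booted into one of them instead of reinstalling. A rough sketch for a legacy-GRUB install; the exact menu entry text must be taken from the local grub.cfg:

Code:
# list the available kernel menu entries
grep -o "menuentry '[^']*'" /boot/grub/grub.cfg
# point GRUB_DEFAULT in /etc/default/grub at the wanted 4.15 entry, then
update-grub
reboot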
 
Finally, after the dedicated Ceph network was fixed, no new network disruptions have been observed and the "VM locking" no longer occurs either; we are monitoring this cluster specifically.

Wait and see.
 
Hi,

I started having this issue yesterday when I tried to enable an external OpenVPN connection in my pfSense VM.
As soon as too much bandwidth goes through the VPN, the VM freezes and I'm unable to restart it.
I've had to restart my Proxmox server 4 times since yesterday.
I've disabled the VPN for now.

I applied the latest upgrade; same problem.

Using a Linux bridge.
Not in a cluster and not using Ceph.

Code:
sarge@pve2 in ~$ pveversion -v
proxmox-ve: 6.1-2 (running kernel: 5.3.18-3-pve)
pve-manager: 6.1-8 (running version: 6.1-8/806edfe1)
pve-kernel-helper: 6.1-8
pve-kernel-5.3: 6.1-6
pve-kernel-5.0: 6.0-11
pve-kernel-4.15: 5.4-11
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.13-2-pve: 5.3.13-2
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-4.15.18-23-pve: 4.15.18-51
pve-kernel-4.15.18-10-pve: 4.15.18-32
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libpve-access-control: 6.0-6
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.0-17
libpve-guest-common-perl: 3.0-5
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 3.2.1-1
lxcfs: 4.0.1-pve1
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-3
pve-cluster: 6.1-4
pve-container: 3.0-23
pve-docs: 6.1-6
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.0-10
pve-firmware: 3.0-7
pve-ha-manager: 3.0-9
pve-i18n: 2.0-4
pve-qemu-kvm: 4.1.1-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.1-7
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1


Trying to stop and start the VM:
Code:
sarge@pve2 in ~$ sudo qm shutdown 100
VM quit/powerdown failed

sarge@pve2 in ~$ sudo qm stop 100
VM quit/powerdown failed - terminating now with SIGTERM
VM still running - terminating now with SIGKILL

sarge@pve2 in ~$ sudo systemctl status qemu.slice
● qemu.slice
   Loaded: loaded
   Active: active since Thu 2020-04-23 13:58:00 EDT; 18h ago
    Tasks: 5
   Memory: 2.0G
   CGroup: /qemu.slice
           └─100.scope
             └─6020 [kvm]

sarge@pve2 in ~$ ps aux | grep 6020
root      6020 26.3  0.0      0     0 ?        Zl   Apr23 286:03 [kvm] <defunct>
root      6044  4.0  0.0      0     0 ?        S    Apr23  43:39 [vhost-6020]
root      6851  0.0  0.0      0     0 ?        S    Apr23   0:00 [vhost-6020]
root      6856  0.0  0.0      0     0 ?        S    Apr23   0:39 [kvm-pit/6020]

sarge@pve2 in ~$ sudo qm start 100
timeout waiting on systemd
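
The [kvm] <defunct> entry above means the QEMU process is a zombie that cannot be reaped while its vhost worker threads are stuck in the kernel, which is presumably why 100.scope never terminates and qm start then reports "timeout waiting on systemd". A hedged way to inspect those stuck threads (PIDs taken from the ps output above):

Code:
# scheduler state and kernel wait channel of the leftover threads
ps -o pid,stat,wchan:30,comm -p 6020,6044
# kernel stack of the stuck vhost thread (root only)
cat /proc/6044/stack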

Trace:
Code:
[54139.730555] ------------[ cut here ]------------
[54139.730556] kernel BUG at mm/usercopy.c:99!
[54139.730942] invalid opcode: 0000 [#1] SMP PTI
[54139.731321] CPU: 11 PID: 6791 Comm: vhost-6020 Tainted: P        W IO      5.3.18-3-pve #1
[54139.731807] Hardware name: Dell Inc. PowerEdge R610/0NCY41, BIOS 6.6.0 05/22/2018
[54139.732164] RIP: 0010:usercopy_abort+0x7a/0x7c
[54139.732539] Code: 0f 45 c6 51 48 89 f9 48 c7 c2 a5 be 56 99 41 52 48 c7 c6 bd 90 55 99 48 c7 c7 70 bf 56 99 48 0f 45 f2 48 89 c2 e8 3d 78 e4 ff <0f> 0b 4d 89 e0 31 c9 44 89 ea 31 f6 48 c7 c7 d9 be 56 99 e8 6e ff
[54139.733305] RSP: 0018:ffffaa230c4d3b48 EFLAGS: 00010246
[54139.733717] RAX: 000000000000006c RBX: ffff8901963a0131 RCX: 0000000000000000
[54139.734103] RDX: 0000000000000000 RSI: ffff88f88f957448 RDI: ffff88f88f957448
[54139.734498] RBP: ffffaa230c4d3b60 R08: 00000000000006f4 R09: 00000000ffffffff
[54139.734915] R10: 0000000000000000 R11: ffff8904857ea060 R12: 0000000000000396
[54139.735333] R13: 0000000000000001 R14: ffff8901963a04c7 R15: 0000000000000000
[54139.735724] FS:  0000000000000000(0000) GS:ffff88f88f940000(0000) knlGS:0000000000000000
[54139.736122] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[54139.736589] CR2: 00007f6472f1a080 CR3: 00000017e4eb0003 CR4: 00000000000226e0
[54139.737016] Call Trace:
[54139.737441]  __check_heap_object+0xdf/0x110
[54139.737947]  __check_object_size+0x16b/0x17c
[54139.738447]  simple_copy_to_iter+0x2a/0x50
[54139.738927]  __skb_datagram_iter+0x1b6/0x2c0
[54139.739390]  ? skb_kill_datagram+0x70/0x70
[54139.740059]  skb_copy_datagram_iter+0x40/0x90
[54139.740826]  tun_do_read+0x4e2/0x6d0
[54139.741587]  tun_recvmsg+0x76/0x110
[54139.742359]  handle_rx+0x5d4/0xa20 [vhost_net]
[54139.743100]  handle_rx_net+0x15/0x20 [vhost_net]
[54139.743865]  vhost_worker+0xba/0x110 [vhost]
[54139.744625]  kthread+0x120/0x140
[54139.745386]  ? log_used.part.44+0x20/0x20 [vhost]
[54139.746155]  ? __kthread_parkme+0x70/0x70
[54139.746931]  ret_from_fork+0x35/0x40
[54139.747705] Modules linked in: binfmt_misc tcp_diag inet_diag md4 cmac nls_utf8 cifs libarc4 nfsv3 nfs_acl rpcsec_gss_krb5 auth_rpcgss nfsv4 nfs lockd grace fscache veth ebtable_filter ebtables ip_set ip6table_raw iptable_raw ip6table_filter ip6_tables iptable_filter bpfilter 8021q garp mrp softdog nfnetlink_log nfnetlink rc_hauppauge em28xx_rc rc_core si2157 lgdt3306a i2c_mux em28xx_dvb dvb_core intel_powerclamp kvm_intel kvm irqbypass zfs(PO) crct10dif_pclmul zunicode(PO) crc32_pclmul mgag200 zlua(PO) ghash_clmulni_intel drm_vram_helper zavl(PO) icp(PO) ttm ipmi_ssif aesni_intel drm_kms_helper em28xx zcommon(PO) aes_x86_64 drm tveeprom znvpair(PO) crypto_simd v4l2_common cryptd i2c_algo_bit spl(O) videodev glue_helper fb_sys_fops syscopyarea sysfillrect dcdbas sysimgblt intel_cstate mc joydev input_leds vhost_net serio_raw pcspkr ipmi_si vhost ipmi_devintf tap i7core_edac ib_iser ipmi_msghandler acpi_power_meter mac_hid rdma_cm iw_cm ib_cm ib_core iscsi_tcp libiscsi_tcp libiscsi
[54139.747751]  scsi_transport_iscsi nf_conntrack_ftp nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 coretemp sunrpc ip_tables x_tables autofs4 btrfs xor zstd_compress raid6_pq dm_thin_pool dm_persistent_data dm_bio_prison dm_bufio libcrc32c uas usb_storage ses enclosure scsi_transport_sas usbkbd hid_generic usbmouse gpio_ich csiostor usbhid psmouse lpc_ich pata_acpi hid megaraid_sas scsi_transport_fc cxgb4 bnx2 wmi
[54139.757708] ---[ end trace ece66be8253077eb ]---
 
Did you follow all the suggestions earlier in this thread? I haven't had any issues since I reported things were fixed a while back.
 
The only thing I found in the previous pages was to upgrade to the 5.3 kernel, which I have already done.
 
Hi,

No, it had failed by the time I got up this morning.

I made the change and rebooted my host at 12:20 PM yesterday.
I started the OpenVPN connection at 06:30 PM yesterday.
The VM froze around 1:20 AM this morning.

I wasn't sure about the "pve-efiboot-tool refresh" part, but "update-initramfs -u" told me it didn't have a pve-efiboot-uuids file, so I didn't run it.

Code:
sarge@pve2 in ~$ sudo update-initramfs -u
update-initramfs: Generating /boot/initrd.img-5.3.18-3-pve
Running hook script 'zz-pve-efiboot'..
Re-executing '/etc/kernel/postinst.d/zz-pve-efiboot' in new private mount namespace..
No /etc/kernel/pve-efiboot-uuids found, skipping ESP sync.
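
That message just means the host doesn't use the systemd-boot/ESP sync mechanism managed by pve-efiboot-tool, so on a legacy-GRUB install the update-grub step is the one that matters. A quick way to check which mode the host booted in:

Code:
# prints UEFI if booted via EFI, legacy otherwise
[ -d /sys/firmware/efi ] && echo UEFI || echo legacy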

Is there a way to confirm that the zerocopy is really disabled?
 
I can confirm this bug is still active.
ZFS install with a FreeBSD guest: random crashes, more frequent when the FreeBSD guest is under load.
Disabling zerocopy did not fix the problem. Newest kernel installed.
 
Is there a way to confirm that the zerocopy is really disabled?

I found how to check:

Code:
sarge@pve2 in ~$ cat /sys/module/vhost_net/parameters/experimental_zcopytx
0
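
As a cross-check, you can also confirm that the option line from /etc/modprobe.d is registered with modprobe at all:

Code:
# should print the options line for vhost_net
modprobe -c | grep experimental_zcopytx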

Since there is no fix yet for my pfSense VM (FreeBSD), yesterday I decided to install OpenWrt (GNU/Linux) in a VM and use it with WireGuard as a VPN gateway. I finished configuring it last night at midnight.

This morning when I woke up, the VM was frozen. I tried to stop and restart it, and got the same error as with the FreeBSD one: "timeout waiting on systemd".
I had to restart my Proxmox host.
 
I can confirm this bug is still active.
ZFS install with a FreeBSD guest: random crashes, more frequent when the FreeBSD guest is under load.
Disabling zerocopy did not fix the problem. Newest kernel installed.

The only way I found to avoid the bug is to pass the NIC through to the FreeBSD guest, on PVE 6.1 with kernel 5.3.18.
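
For reference, the rough shape of such a passthrough (the PCI address is an example; IOMMU must already be enabled on the host):

Code:
# find the NIC's PCI address
lspci -nn | grep -i ethernet
# hand it to the guest; hostpci0 is qm's passthrough option
qm set vmid -hostpci0 0000:03:00.0

Since the guest then drives the NIC directly, the host's vhost_net/tap path, which every trace in this thread goes through, is bypassed entirely, which would explain why this avoids the bug.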
 
I just moved to 6.2-4 and can confirm this is still an issue. I'm seeing it on a Debian-based VM.

Code:
root@pve-3:~# cat /sys/module/vhost_net/parameters/experimental_zcopytx
0
root@pve-3:~# free
              total        used        free      shared  buff/cache   available
Mem:       65945024    39466420    20929152      134412     5549452    25724236
Swap:       8388604           0     8388604
root@pve-3:~#



Code:
Virtual Environment 6.2-4
Node 'pve-3'
CPU usage:           18.21% of 12 CPU(s)
IO delay:            0.27%
Load average:        5.39, 5.53, 5.26
RAM usage:           60.19% (37.85 GiB of 62.89 GiB)
KSM sharing:         6.56 GiB
HD space (root):     3.09% (2.90 GiB of 93.99 GiB)
SWAP usage:          0.00% (0 B of 8.00 GiB)
CPU(s):              12 x AMD Opteron(tm) Processor 6344 (1 Socket)
Kernel Version:      Linux 5.4.41-1-pve #1 SMP PVE 5.4.41-1 (Fri, 15 May 2020 15:06:08 +0200)
PVE Manager Version: pve-manager/6.2-4/9824574a
proxmox-ve: 6.2-1 (running kernel: 5.4.41-1-pve)
pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
pve-kernel-5.4: 6.2-2
pve-kernel-helper: 6.2-2
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.41-1-pve: 5.4.41-1
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-2-pve: 5.3.18-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-2
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-1
pve-cluster: 6.1-8
pve-container: 3.1-6
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-2
pve-qemu-kvm: 5.0.0-2
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1
 
