VM freezes irregularly

Dunuin

Famous Member
Jun 30, 2020
Germany
Add the following via crontab -e:

Code:
@reboot echo "powersave" | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
But also keep in mind that "powersave" means the CPU will always run at the minimum possible clock, so you are wasting most of the node's performance. I would use "schedutil" instead: that way the node keeps idling cores at the lowest clock and still clocks up cores under load, so cores scale dynamically between the minimum and maximum clock depending on the workload.
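As a sketch, the same crontab approach can be adapted for "schedutil" (assuming your kernel offers that governor; check the standard cpufreq sysfs path first):

```shell
# Check which governors this kernel offers (schedutil needs CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors

# crontab -e entry: apply "schedutil" to all cores at boot
@reboot echo "schedutil" | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
```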
 

gyrex

Member
Jul 19, 2022
But also keep in mind that "powersave" means the CPU will always run at the minimum possible clock, so you are wasting most of the node's performance. I would use "schedutil" instead: that way the node keeps idling cores at the lowest clock and still clocks up cores under load, so cores scale dynamically between the minimum and maximum clock depending on the workload.
Numerous benchmarks have been performed and they've shown a minimal performance impact (~5%), and only at high loads.
 

Dunuin

Famous Member
Jun 30, 2020
Germany
What benchmarks did you run?
Here I see a massive performance impact when 16 cores run at only 1200 MHz with "powersave" instead of 3000 MHz with "performance". I only get a small performance hit with "schedutil", where cores vary between 1200 and 3000 MHz depending on load.
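For anyone wanting to compare governors on their own node, a rough sketch (cpupower ships in Debian's linux-cpupower package; stress-ng needs installing separately; both commands and the core count here are illustrative):

```shell
cpupower frequency-info                            # report the active governor and current clock
cpupower -c all frequency-set -g powersave         # switch all cores to a governor (needs root)
stress-ng --cpu 16 --timeout 60s --metrics-brief   # load all cores and print a throughput figure
watch -n1 "grep 'cpu MHz' /proc/cpuinfo"           # watch the per-core clocks react under load
```

Repeating the stress-ng run under each governor makes the clock and throughput difference visible directly.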
 

gyrex

Member
Jul 19, 2022
My pfSense VM froze overnight. Does anyone in this thread happen to run pfSense as well? Have you seen it hang or freeze? There was nothing logged to the console, just a dead VM which I had to hard reset.

What benchmarks did you run?
I think we're a bit off-topic on this thread.
 

Labersin

New Member
Aug 3, 2022
I've also been struggling with VM freezes; LXC containers and the host seem to survive quite happily. I've pointed a spare desk fan directly at the unit and it has been much more stable since (freezes were consistently happening within one to three hours); it's been 2 days since the last freeze. It's easy enough to test.
 

gyrex

Member
Jul 19, 2022
I've also been struggling with VM freezes; LXC containers and the host seem to survive quite happily. I've pointed a spare desk fan directly at the unit and it has been much more stable since (freezes were consistently happening within one to three hours); it's been 2 days since the last freeze. It's easy enough to test.

After setting my CPU governors to powersave mode, the CPUs are running at a constant 55°C, so I'm not so sure it's a heat issue?

If these freezes continue to happen, I might try running VMware ESXi and see if I get more stability using that. This really smells like some sort of a kernel issue which is specifically affecting the N5105 CPU architecture rather than a temperature one...
 

Labersin

New Member
Aug 3, 2022
After setting my CPU governors to powersave mode, the CPUs are running at a constant 55°C, so I'm not so sure it's a heat issue?

If these freezes continue to happen, I might try running VMware ESXi and see if I get more stability using that. This really smells like some sort of a kernel issue which is specifically affecting the N5105 CPU architecture rather than a temperature one...
Perhaps. In my specific situation I tried running newer and older kernels in different distros than the base Proxmox one, and they all had freezing issues. I've tried Debian and Ubuntu, both newer and older (not an extensive list by any means), but the base Proxmox is stable; no freezing there. LXC containers have also been stable for me.

All in all, throwing a fan at the top of the unit was a very simple thing to test, and my CPU package temps are down to 35-40°C according to sensors.
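For anyone wanting to check their own package temps the same way, a sketch using lm-sensors (assuming the coretemp driver covers the N5105; the package name is Debian's):

```shell
apt install lm-sensors   # on the Proxmox host
sensors-detect           # interactive probe for sensor chips; answer the prompts
sensors                  # prints readings such as "Package id 0:  +40.0°C"
```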
 

gyrex

Member
Jul 19, 2022
Perhaps. In my specific situation I tried running newer and older kernels in different distros than the base Proxmox one, and they all had freezing issues. I've tried Debian and Ubuntu, both newer and older (not an extensive list by any means), but the base Proxmox is stable; no freezing there. LXC containers have also been stable for me.

All in all, throwing a fan at the top of the unit was a very simple thing to test, and my CPU package temps are down to 35-40°C according to sensors.
Since adding the fan, have any of your VMs frozen at all, or are they just freezing less than before?

If my VMs fail again, I'll try ESXi and I'll report back on my system stability.
 

Labersin

New Member
Aug 3, 2022
Since adding the fan, have any of your VMs frozen at all, or are they just freezing less than before?

If my VMs fail again, I'll try ESXi and I'll report back on my system stability.
It's been about 2 days since I added the fan and I've had no freezes yet. When I tried changing my CPU type to "host" it was stable for about a day, which was the longest I had stable VMs before adding cooling.
 

Labersin

New Member
Aug 3, 2022
Another 3 days of no freezing with the additional cooling. Now I need to pull it apart, see if there is some reason why it's overheating so badly, and at the very least use better thermal transfer goop.
 

gyrex

Member
Jul 19, 2022
Another 3 days of no freezing with the additional cooling. Now I need to pull it apart, see if there is some reason why it's overheating so badly, and at the very least use better thermal transfer goop.
My VMs haven't frozen for 5 days. No fan, but Proxmox running with the powersave governor and temps at or around 55°C, occasionally hitting 100°C. Strange. Still running remote logging and verbose kernel logging.
 

Holger Huo

New Member
Aug 8, 2022
My VMs haven't frozen for 5 days. No fan, but Proxmox running with the powersave governor and temps at or around 55°C, occasionally hitting 100°C. Strange. Still running remote logging and verbose kernel logging.
Hi! May I know which kernel you are using? I can roughly narrow the problem down to kernel versions between 5.10.127 and 5.14.
(The former running the latest OpenWrt, the latter Alpine 3.16 and AlmaLinux 9.)
The AlmaLinux 9 VM shows similar symptoms: the VM hangs after a few hours with full CPU usage and a frozen console. The Alpine Linux VM behaves a little differently: the Go program running in it crashes with an illegal-instruction error (most likely CPU related), and during that period the VM also freezes (but on Alpine it can "unfreeze" itself without even rebooting).

I'm not sure if it is virtualization related, as the host PVE 7.2 runs kernel 5.15 and I don't have a spare host system for testing. But I've installed kernel 5.15 on AlmaLinux 8 (previously 4.18.0) for testing and I'll share the results soon.
 

Holger Huo

New Member
Aug 8, 2022
Hi! May I know which kernel you are using? I can roughly narrow the problem down to kernel versions between 5.10.127 and 5.14.
(The former running the latest OpenWrt, the latter Alpine 3.16 and AlmaLinux 9.)
The AlmaLinux 9 VM shows similar symptoms: the VM hangs after a few hours with full CPU usage and a frozen console. The Alpine Linux VM behaves a little differently: the Go program running in it crashes with an illegal-instruction error (most likely CPU related), and during that period the VM also freezes (but on Alpine it can "unfreeze" itself without even rebooting).

I'm not sure if it is virtualization related, as the host PVE 7.2 runs kernel 5.15 and I don't have a spare host system for testing. But I've installed kernel 5.15 on AlmaLinux 8 (previously 4.18.0) for testing and I'll share the results soon.
I guess I have to go all the way up to kernel 5.19.0, as 5.15 is no longer provided in ELRepo.
 

gyrex

Member
Jul 19, 2022
Hi! May I know the kernel you are using?
I'm running 2 VMs on my Proxmox server: pfSense, and Ubuntu 22.04 running Docker. Both have locked up at various points, although not for the past 5 or so days. As usual with Murphy's law, they haven't locked up since I enabled verbose kernel logging and remote logging services in order to try to diagnose the freezes/lockups.

Kernel versions below:

Ubuntu: Linux 5.15.0-43-generic #46-Ubuntu SMP Tue Jul 12 10:30:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
pfSense/FreeBSD: FreeBSD 12.3-STABLE FreeBSD 12.3-STABLE RELENG_2_6_0-n226742-1285d6d205f pfSense amd64

I'm not sure if it is virtualization related as the host PVE 7.2 runs kernel ver. 5.15

I also wonder if this has some impact on the VMs running under it. It makes sense that this could potentially be an issue.
 

Labersin

New Member
Aug 3, 2022
Gyrex, I'm curious: I got my N5105 box off AliExpress; did you (or from somewhere similar like Banggood)? If so, I wonder if those posting here are the losers in the CPU bin lottery that low-cost sellers (like those on AliExpress/Banggood) use, and we got CPUs that either generate a little more heat or handle heat a little less well.

The powersave governor reduces the boost speed, which in turn keeps temps lower, as does a fan. It's a good working theory for the time being at least.

As to how different kernels can freeze or not: differing optimisations, patches, and more (or fewer) things being built in can all affect CPU load differently. My Home Assistant VM runs 5.15.55 and idles at about 10% of one core; my pfSense VM runs FreeBSD 12.3-STABLE and idles at ~1.5% of a single core. I also ran various Debian and Ubuntu VMs with nothing additional installed, just going through the install clicking next where possible, and they idled at 20-30% of one core. I don't have any running now, as I moved that load to LXC containers, and their idle loads are down to 0.1% above the host idle.

All in all, for me, the VMs that locked up were the ones using the most CPU and causing the most heat, and it didn't matter what kernel they were running. That said, I didn't exhaustively try one VM and work my way up kernel versions with no other changes, so my method isn't scientific by any stretch. For all my freezes, the CPU usage shown in the PVE summary was significantly above normal at the time of the freeze: HA froze at about 30% CPU, Debian was often at 50% of 2 cores or 75% of one; I don't remember exactly what Ubuntu froze at, but similar to Debian I think.
 

rzv

New Member
Aug 1, 2022
Gyrex, I'm curious: I got my N5105 box off AliExpress; did you (or from somewhere similar like Banggood)? If so, I wonder if those posting here are the losers in the CPU bin lottery that low-cost sellers (like those on AliExpress/Banggood) use, and we got CPUs that either generate a little more heat or handle heat a little less well.
I have an Intel NUC with the 5105 CPU and it's behaving the exact same way. The cooling is adequate and I think it's safe to assume that Intel didn't use low quality CPUs in their own product.
I am almost sure this is a kernel bug. Since it happens with every VM running different kernels and the host is not affected, I assume something is wrong with KVM.

Edit: For what it may be worth, I tested with the 5.10 kernel on the Proxmox host and got the same crashes.
 

gyrex

Member
Jul 19, 2022
Gyrex, I'm curious: I got my N5105 box off AliExpress; did you (or from somewhere similar like Banggood)? If so, I wonder if those posting here are the losers in the CPU bin lottery that low-cost sellers (like those on AliExpress/Banggood) use, and we got CPUs that either generate a little more heat or handle heat a little less well.

I bought my NUC from Aliexpress (https://www.aliexpress.com/item/1005004302428997.html).

You raise an interesting point about it potentially being the CPU itself, and this could definitely be the case. The common denominator among everyone here is that we're all running the Intel N5105 CPU and we're running Proxmox with various FreeBSD/Linux VMs.

I haven't had a freeze for a week, but the 2 freezes I had (pfSense and Ubuntu 22.04 both froze) under very little load are still making me nervous. One way to isolate this issue further is to run a completely different kernel and/or OS on the bare metal (VMware ESXi, Windows, etc.). I'm happy to be the guinea pig here: I'll move my routing function back to my old router and load a different OS onto the NUC. I'll report back with my findings.
 

gyrex

Member
Jul 19, 2022
Has anyone had a VM freeze in the past week or so and is fully updated on both Proxmox and VMs (apt update && apt dist-upgrade)?

I'm wondering if something has been updated on the host (or VMs) which has potentially fixed the issue...
 

gyrex

Member
Jul 19, 2022
My Ubuntu VM finally froze again today but thankfully I was able to capture the kernel panic via netconsole and included the output below as well as the bug I've filed on Proxmox's bugzilla: https://bugzilla.proxmox.com/show_bug.cgi?id=4188

I'm attaching the output of the log to add as much information to this thread as possible.
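For anyone else wanting to capture a guest panic the same way, a minimal netconsole sketch (the IP addresses, interface name, and MAC below are hypothetical placeholders; the parameter format is the kernel's documented netconsole=[src-port]@[src-ip]/[dev],[tgt-port]@[tgt-ip]/[tgt-mac]):

```shell
# Inside the VM: stream kernel messages over UDP to a logging host
modprobe netconsole netconsole=6665@192.168.1.50/eth0,6666@192.168.1.10/aa:bb:cc:dd:ee:ff

# On the logging host: listen for the messages (openbsd netcat syntax)
nc -u -l 6666
```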

Code:
[12361.508193] BUG: kernel NULL pointer dereference, address: 0000000000000000
[12361.509399] #PF: supervisor write access in kernel mode
[12361.510524] #PF: error_code(0x0002) - not-present page
[12361.511847] PGD 0 P4D 0
[12361.513120] Oops: 0002 [#1] SMP PTI
[12361.514392] CPU: 0 PID: 3268 Comm: python3 Not tainted 5.15.0-46-generic #49-Ubuntu
[12361.515796] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.15.0-0-g2dd4b9b3f840-prebuilt.qemu.org 04/01/2014
[12361.518606] RIP: 0010:asm_exc_general_protection+0x4/0x30
[12361.520233] Code: c4 48 8d 6c 24 01 48 89 e7 48 8b 74 24 78 48 c7 44 24 78 ff ff ff ff e8 ea 7f f9 ff e9 05 0b 00 00 0f 1f 44 00 00 0f 1f 00 e8 <c8> 09 00 00 48 89 c4 48 8d 6c 24 01 48 89 e7 48 8b 74 24 78 48 c7
[12361.523251] RSP: 0018:ffffa7498342f010 EFLAGS: 00010046
[12361.524599] RAX: 0000000000000000 RBX: 0000000000000015 RCX: 0000000000000001
[12361.525806] RDX: ffff8fed49a6ed00 RSI: ffff8fed4b178000 RDI: ffff8fec418a9400
[12361.527014] RBP: ffffa7498342f8b0 R08: 0000000000000015 R09: ffff8fed4b1780a8
[12361.527868] R10: 0000000000000000 R11: 0000000000000000 R12: ffff8fed57e4f180
[12361.528754] R13: 0000000000004000 R14: 0000000000000015 R15: 0000000000000001
[12361.529623] FS:  00007f291afb8b30(0000) GS:ffff8fed7bc00000(0000) knlGS:0000000000000000
[12361.530318] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[12361.530941] CR2: 0000000000000000 CR3: 0000000102ad8000 CR4: 00000000000006f0
[12361.531602] Call Trace:
[12361.532257]  <TASK>
[12361.532953]  ? asm_exc_int3+0x40/0x40
[12361.533565]  ? asm_exc_general_protection+0x4/0x30
[12361.534192]  ? asm_exc_int3+0x40/0x40
[12361.534823]  ? asm_exc_general_protection+0x4/0x30
[12361.535450]  ? asm_exc_int3+0x40/0x40
[12361.536063]  ? asm_exc_general_protection+0x4/0x30
[12361.536675]  ? asm_exc_int3+0x40/0x40
[12361.537262]  ? asm_exc_general_protection+0x4/0x30
[12361.537845]  ? asm_exc_int3+0x40/0x40
[12361.538425]  ? asm_exc_general_protection+0x4/0x30
[12361.539015]  ? asm_exc_int3+0x40/0x40
[12361.539630]  ? asm_exc_general_protection+0x4/0x30
[12361.540212]  ? asm_exc_int3+0x40/0x40
[12361.540825]  ? asm_exc_general_protection+0x4/0x30
[12361.541561]  ? asm_exc_int3+0x40/0x40
[12361.542191]  ? asm_exc_general_protection+0x4/0x30
[12361.542761]  ? asm_exc_int3+0x40/0x40
[12361.543325]  ? asm_exc_general_protection+0x4/0x30
[12361.543909]  ? asm_exc_int3+0x40/0x40
[12361.544481]  ? asm_exc_general_protection+0x4/0x30
[12361.545062]  ? asm_exc_int3+0x40/0x40
[12361.545677]  ? asm_exc_general_protection+0x4/0x30
[12361.546270]  ? asm_exc_int3+0x40/0x40
[12361.546861]  ? asm_exc_general_protection+0x4/0x30
[12361.547466]  ? asm_exc_int3+0x40/0x40
[12361.548071]  ? asm_exc_general_protection+0x4/0x30
[12361.548669]  ? asm_exc_int3+0x40/0x40
[12361.549258]  ? asm_exc_general_protection+0x4/0x30
[12361.549844]  ? asm_exc_int3+0x40/0x40
[12361.550425]  ? asm_exc_general_protection+0x4/0x30
[12361.551007]  ? asm_exc_int3+0x40/0x40
[12361.551594]  ? asm_exc_general_protection+0x4/0x30
[12361.552138]  ? asm_exc_int3+0x40/0x40
[12361.552671]  ? asm_exc_general_protection+0x4/0x30
[12361.553201]  ? asm_exc_int3+0x40/0x40
[12361.553737]  ? asm_exc_general_protection+0x4/0x30
[12361.554226]  ? asm_exc_int3+0x40/0x40
[12361.554706]  ? asm_exc_general_protection+0x4/0x30
[12361.555175]  ? asm_exc_int3+0x40/0x40
[12361.555646]  ? asm_exc_general_protection+0x4/0x30
[12361.556093]  ? asm_exc_int3+0x40/0x40
[12361.556549]  ? asm_exc_general_protection+0x4/0x30
[12361.556992]  ? asm_exc_int3+0x40/0x40
[12361.557420]  ? asm_sysvec_spurious_apic_interrupt+0x20/0x20
[12361.557849]  ? schedule_hrtimeout_range_clock+0xa0/0x120
[12361.558272]  ? __fget_files+0x51/0xc0
[12361.558707]  ? __hrtimer_init+0x110/0x110
[12361.559140]  __fget_light+0x32/0x90
[12361.559560]  __fdget+0x13/0x20
[12361.559989]  do_select+0x302/0x850
[12361.560405]  ? __pollwait+0xe0/0xe0
[12361.560820]  ? __pollwait+0xe0/0xe0
[12361.561261]  ? __pollwait+0xe0/0xe0
[12361.561648]  ? __pollwait+0xe0/0xe0
[12361.562028]  ? cpumask_next_and+0x24/0x30
[12361.562443]  ? update_sg_lb_stats+0x78/0x580
[12361.562857]  ? kfree_skbmem+0x81/0xa0
[12361.563266]  ? update_group_capacity+0x2c/0x2d0
[12361.563725]  ? update_sd_lb_stats.constprop.0+0xe0/0x250
[12361.564130]  ? __check_object_size.part.0+0x3a/0x150
[12361.564518]  ? __check_object_size+0x1d/0x30
[12361.564904]  ? core_sys_select+0x246/0x420
[12361.565288]  core_sys_select+0x1dd/0x420
[12361.565684]  ? ktime_get_ts64+0x55/0x100
[12361.566086]  ? _copy_to_user+0x20/0x30
[12361.566495]  ? poll_select_finish+0x121/0x220
[12361.566899]  ? kvm_clock_get_cycles+0x11/0x20
[12361.567313]  kern_select+0xdd/0x180
[12361.567744]  __x64_sys_select+0x21/0x30
[12361.568148]  do_syscall_64+0x5c/0xc0
[12361.568546]  ? __do_softirq+0xd9/0x2e7
[12361.568947]  ? exit_to_user_mode_prepare+0x37/0xb0
[12361.569349]  ? irqentry_exit_to_user_mode+0x9/0x20
[12361.569753]  ? irqentry_exit+0x1d/0x30
[12361.570154]  ? sysvec_apic_timer_interrupt+0x4e/0x90
[12361.570558]  entry_SYSCALL_64_after_hwframe+0x61/0xcb
[12361.570970] RIP: 0033:0x7f292739f4a3
[12361.571394] Code: c3 8b 07 85 c0 75 24 49 89 fb 48 89 f0 48 89 d7 48 89 ce 4c 89 c2 4d 89 ca 4c 8b 44 24 08 4c 8b 4c 24 10 4c 89 5c 24 08 0f 05 <c3> e9 c7 d1 ff ff 41 54 b8 02 00 00 00 49 89 f4 be 00 88 08 00 55
[12361.572283] RSP: 002b:00007f291afaaf68 EFLAGS: 00000246 ORIG_RAX: 0000000000000017
[12361.572752] RAX: ffffffffffffffda RBX: 00007f291afb8b30 RCX: 00007f292739f4a3
[12361.573227] RDX: 00007f291afab090 RSI: 00007f291afab010 RDI: 0000000000000017
[12361.573706] RBP: 00007f291afab010 R08: 00007f291afaafb0 R09: 0000000000000000
[12361.574182] R10: 00007f291afab110 R11: 0000000000000246 R12: 0000000000000017
[12361.574656] R13: 00007f291afab090 R14: 00007f291afab190 R15: 00007f291afaf1a0
[12361.575144]  </TASK>
[12361.575640] Modules linked in: xt_nat xt_tcpudp veth xt_conntrack nft_chain_nat xt_MASQUERADE nf_nat nf_conntrack_netlink nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xfrm_user xfrm_algo nft_counter xt_addrtype nft_compat nf_tables nfnetlink br_netfilter bridge stp llc overlay sch_fq_codel joydev input_leds cp210x serio_raw usbserial cdc_acm qemu_fw_cfg mac_hid dm_multipath scsi_dh_rdac scsi_dh_emc scsi_dh_alua efi_pstore pstore_blk mtd ramoops netconsole reed_solomon ipmi_devintf ipmi_msghandler msr pstore_zone ip_tables x_tables autofs4 btrfs blake2b_generic zstd_compress raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid1 raid0 multipath linear hid_generic bochs drm_vram_helper drm_ttm_helper ttm psmouse drm_kms_helper usbhid syscopyarea sysfillrect virtio_net sysimgblt fb_sys_fops net_failover failover cec hid rc_core virtio_scsi drm i2c_piix4 pata_acpi floppy
[12361.580240] CR2: 0000000000000000
[12361.580896] ---[ end trace 2596706ab1b3b337 ]---
[12361.581518] RIP: 0010:asm_exc_general_protection+0x4/0x30
[12361.582178] Code: c4 48 8d 6c 24 01 48 89 e7 48 8b 74 24 78 48 c7 44 24 78 ff ff ff ff e8 ea 7f f9 ff e9 05 0b 00 00 0f 1f 44 00 00 0f 1f 00 e8 <c8> 09 00 00 48 89 c4 48 8d 6c 24 01 48 89 e7 48 8b 74 24 78 48 c7
[12361.583552] RSP: 0018:ffffa7498342f010 EFLAGS: 00010046
[12361.584323] RAX: 0000000000000000 RBX: 0000000000000015 RCX: 0000000000000001
[12361.585078] RDX: ffff8fed49a6ed00 RSI: ffff8fed4b178000 RDI: ffff8fec418a9400
[12361.585828] RBP: ffffa7498342f8b0 R08: 0000000000000015 R09: ffff8fed4b1780a8
[12361.586563] R10: 0000000000000000 R11: 0000000000000000 R12: ffff8fed57e4f180
[12361.587283] R13: 0000000000004000 R14: 0000000000000015 R15: 0000000000000001
[12361.588012] FS:  00007f291afb8b30(0000) GS:ffff8fed7bc00000(0000) knlGS:0000000000000000
[12361.588742] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[12361.589472] CR2: 0000000000000000 CR3: 0000000102ad8000 CR4: 00000000000006f0
[12394.744918] BUG: kernel NULL pointer dereference, address: 0000000000000045
[12394.745723] #PF: supervisor instruction fetch in kernel mode
[12394.746513] #PF: error_code(0x0010) - not-present page
[12394.747292] PGD 0 P4D 0
[12394.748083] Oops: 0010 [#2] SMP PTI
[12394.748858] CPU: 0 PID: 3950 Comm: mosquitto Tainted: G      D           5.15.0-46-generic #49-Ubuntu
[12394.749639] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.15.0-0-g2dd4b9b3f840-prebuilt.qemu.org 04/01/2014
[12394.751251] RIP: 0010:0x45
[12394.752088] Code: Unable to access opcode bytes at RIP 0x1b.
[12394.752907] RSP: 0018:ffffa74980003648 EFLAGS: 00010046
[12394.753731] RAX: 0000000000000045 RBX: ffff8fed57f082c8 RCX: 00000000000000c3
[12394.754576] RDX: 0000000000000010 RSI: 0000000000000001 RDI: ffffa7498342fa00
[12394.755413] RBP: ffffa74980003690 R08: 00000000000000c3 R09: ffffa749800036a8
[12394.756244] R10: 00000000b140ae3e R11: ffffa74980003730 R12: 0000000000000000
[12394.757091] R13: 0000000000000000 R14: 0000000000000010 R15: 00000000000000c3
[12394.757972] FS:  00007f250ea9ab48(0000) GS:ffff8fed7bc00000(0000) knlGS:0000000000000000
[12394.758803] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[12394.759627] CR2: 0000000000000045 CR3: 0000000026064000 CR4: 00000000000006f0
[12394.760488] Call Trace:
[12394.761303]  <IRQ>
[12394.762148]  ? __wake_up_common+0x7d/0x140
[12394.762979]  __wake_up_common_lock+0x7c/0xc0
[12394.763834]  __wake_up_sync_key+0x20/0x30
[12394.764666]  sock_def_readable+0x3b/0x80
[12394.765471]  tcp_data_ready+0x31/0xe0
[12394.766280]  tcp_data_queue+0x315/0x610
[12394.767028]  tcp_rcv_established+0x25f/0x6d0
[12394.767799]  tcp_v4_do_rcv+0x155/0x260
[12394.768568]  tcp_v4_rcv+0xd9d/0xed0
[12394.769302]  ip_protocol_deliver_rcu+0x3d/0x240
[12394.770033]  ip_local_deliver_finish+0x48/0x60
[12394.770726]  ip_local_deliver+0xfb/0x110
[12394.771387]  ? ip_protocol_deliver_rcu+0x240/0x240
[12394.772059]  ip_rcv_finish+0xbe/0xd0
[12394.772746]  ip_sabotage_in+0x5f/0x70 [br_netfilter]
[12394.773425]  nf_hook_slow+0x44/0xc0
[12394.774105]  ip_rcv+0x8a/0x190
[12394.774731]  ? ip_sublist_rcv+0x200/0x200
[12394.775349]  __netif_receive_skb_one_core+0x8a/0xa0
[12394.775959]  __netif_receive_skb+0x15/0x60
[12394.776551]  netif_receive_skb+0x43/0x140
[12394.777140]  ? fdb_find_rcu+0xb1/0x130 [bridge]
[12394.777769]  br_pass_frame_up+0x151/0x190 [bridge]
[12394.778382]  br_handle_frame_finish+0x1a5/0x520 [bridge]
[12394.778981]  ? __nf_ct_refresh_acct+0x55/0x60 [nf_conntrack]
[12394.779589]  ? nf_conntrack_tcp_packet+0x61f/0xf60 [nf_conntrack]
[12394.780171]  ? br_pass_frame_up+0x190/0x190 [bridge]
[12394.780758]  br_nf_hook_thresh+0xe1/0x120 [br_netfilter]
[12394.781337]  ? br_pass_frame_up+0x190/0x190 [bridge]
[12394.781937]  br_nf_pre_routing_finish+0x16e/0x430 [br_netfilter]
[12394.782517]  ? br_pass_frame_up+0x190/0x190 [bridge]
[12394.783122]  ? nf_nat_ipv4_pre_routing+0x4a/0xc0 [nf_nat]
[12394.783755]  br_nf_pre_routing+0x245/0x550 [br_netfilter]
[12394.784323]  ? tcp_write_xmit+0x690/0xb10
[12394.784872]  ? br_nf_forward_arp+0x320/0x320 [br_netfilter]
[12394.785424]  br_handle_frame+0x211/0x3c0 [bridge]
[12394.785995]  ? fib_multipath_hash+0x4a0/0x6a0
[12394.786535]  ? br_pass_frame_up+0x190/0x190 [bridge]
[12394.787075]  ? br_handle_frame_finish+0x520/0x520 [bridge]
[12394.787615]  __netif_receive_skb_core.constprop.0+0x23a/0xef0
[12394.788148]  ? ip_rcv+0x16f/0x190
[12394.788718]  __netif_receive_skb_one_core+0x3f/0xa0
[12394.789306]  __netif_receive_skb+0x15/0x60
[12394.789831]  process_backlog+0x9e/0x170
[12394.790353]  __napi_poll+0x33/0x190
[12394.790860]  net_rx_action+0x126/0x280
[12394.791351]  __do_softirq+0xd9/0x2e7
[12394.791846]  do_softirq+0x7d/0xb0
[12394.792350]  </IRQ>
[12394.792855]  <TASK>
[12394.793338]  __local_bh_enable_ip+0x54/0x60
[12394.793830]  ip_finish_output2+0x1a2/0x580
[12394.794331]  __ip_finish_output+0xb7/0x180
[12394.794823]  ip_finish_output+0x2e/0xc0
[12394.795316]  ip_output+0x78/0x100
[12394.795803]  ? __ip_finish_output+0x180/0x180
[12394.796322]  ip_local_out+0x5e/0x70
[12394.796816]  __ip_queue_xmit+0x180/0x440
[12394.797311]  ? page_counter_cancel+0x2e/0x80
[12394.797820]  ip_queue_xmit+0x15/0x20
[12394.798322]  __tcp_transmit_skb+0x8dd/0xa00
[12394.798813]  tcp_write_xmit+0x3ab/0xb10
[12394.799303]  ? __check_object_size.part.0+0x4a/0x150
[12394.799808]  __tcp_push_pending_frames+0x37/0x100
[12394.800308]  tcp_push+0xd6/0x100
[12394.800806]  tcp_sendmsg_locked+0x883/0xc80
[12394.801303]  tcp_sendmsg+0x2d/0x50
[12394.801793]  inet_sendmsg+0x43/0x80
[12394.802302]  sock_sendmsg+0x62/0x70
[12394.802787]  sock_write_iter+0x93/0xf0
[12394.803277]  new_sync_write+0x193/0x1b0
[12394.803770]  vfs_write+0x1d5/0x270
[12394.804276]  ksys_write+0xb5/0xf0
[12394.804737]  ? syscall_trace_enter.constprop.0+0xa7/0x1c0
[12394.805205]  __x64_sys_write+0x19/0x20
[12394.805665]  do_syscall_64+0x5c/0xc0
[12394.806129]  ? syscall_exit_to_user_mode+0x27/0x50
[12394.806592]  ? do_syscall_64+0x69/0xc0
[12394.807059]  ? do_syscall_64+0x69/0xc0
[12394.807549]  entry_SYSCALL_64_after_hwframe+0x61/0xcb
[12394.808008] RIP: 0033:0x7f250ea593ad
[12394.808499] Code: c3 8b 07 85 c0 75 24 49 89 fb 48 89 f0 48 89 d7 48 89 ce 4c 89 c2 4d 89 ca 4c 8b 44 24 08 4c 8b 4c 24 10 4c 89 5c 24 08 0f 05 <c3> e9 8a d2 ff ff 41 54 b8 02 00 00 00 49 89 f4 be 00 88 08 00 55
[12394.809442] RSP: 002b:00007ffea08ec188 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
[12394.809945] RAX: ffffffffffffffda RBX: 00007f250ea9ab48 RCX: 00007f250ea593ad
[12394.810440] RDX: 00000000000000a2 RSI: 00007f250e79c810 RDI: 0000000000000009
[12394.810933] RBP: 00007f250e7d7e80 R08: 0000000000000000 R09: 0000000000000000
[12394.811451] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000001
[12394.811938] R13: 000000000000009f R14: 0000000000000000 R15: 00007f250e7d7e80
[12394.812449]  </TASK>
[12394.812930] Modules linked in: xt_nat xt_tcpudp veth xt_conntrack nft_chain_nat xt_MASQUERADE nf_nat nf_conntrack_netlink nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xfrm_user xfrm_algo nft_counter xt_addrtype nft_compat nf_tables nfnetlink br_netfilter bridge stp llc overlay sch_fq_codel joydev input_leds cp210x serio_raw usbserial cdc_acm qemu_fw_cfg mac_hid dm_multipath scsi_dh_rdac scsi_dh_emc scsi_dh_alua efi_pstore pstore_blk mtd ramoops netconsole reed_solomon ipmi_devintf ipmi_msghandler msr pstore_zone ip_tables x_tables autofs4 btrfs blake2b_generic zstd_compress raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid1 raid0 multipath linear hid_generic bochs drm_vram_helper drm_ttm_helper ttm psmouse drm_kms_helper usbhid syscopyarea sysfillrect virtio_net sysimgblt fb_sys_fops net_failover failover cec hid rc_core virtio_scsi drm i2c_piix4 pata_acpi floppy
[12394.817596] CR2: 0000000000000045
[12394.818324] ---[ end trace 2596706ab1b3b338 ]---
[12394.819007] RIP: 0010:asm_exc_general_protection+0x4/0x30
[12394.819695] Code: c4 48 8d 6c 24 01 48 89 e7 48 8b 74 24 78 48 c7 44 24 78 ff ff ff ff e8 ea 7f f9 ff e9 05 0b 00 00 0f 1f 44 00 00 0f 1f 00 e8 <c8> 09 00 00 48 89 c4 48 8d 6c 24 01 48 89 e7 48 8b 74 24 78 48 c7
[12394.821094] RSP: 0018:ffffa7498342f010 EFLAGS: 00010046
[12394.821847] RAX: 0000000000000000 RBX: 0000000000000015 RCX: 0000000000000001
[12394.822622] RDX: ffff8fed49a6ed00 RSI: ffff8fed4b178000 RDI: ffff8fec418a9400
[12394.823371] RBP: ffffa7498342f8b0 R08: 0000000000000015 R09: ffff8fed4b1780a8
[12394.824113] R10: 0000000000000000 R11: 0000000000000000 R12: ffff8fed57e4f180
[12394.824874] R13: 0000000000004000 R14: 0000000000000015 R15: 0000000000000001
[12394.825623] FS:  00007f250ea9ab48(0000) GS:ffff8fed7bc00000(0000) knlGS:0000000000000000
[12394.826391] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[12394.827160] CR2: 0000000000000045 CR3: 0000000026064000 CR4: 00000000000006f0
[12394.827934] Kernel panic - not syncing: Fatal exception in interrupt
[12394.828901] Kernel Offset: 0x8200000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)
[12394.829699] ---[ end Kernel panic - not syncing: Fatal exception in interrupt ]---
 
