VM freezes irregularly

thimplicity

Member
Feb 4, 2022
Hi everyone,
I have rewritten this post based on the troubleshooting I have tried so far, and I am at my wit's end:

Some weeks ago, I bought a pfSense box on AliExpress (4-core N5105, 8 GB RAM and a 250 GB NVMe) and installed Proxmox on it. On the box I run two VMs:
  1. pfSense - runs excellently, no problems at all so far (knock on wood) - 4 cores, 4 GB RAM
  2. an Ubuntu VM with Docker installed - 2 cores, 2 GB RAM - running three containers: Homebridge, Scrypted and Pi-hole
The problems are with VM 2. The installation of Docker etc. went well, and while the VM runs, it runs well. I followed this guide (https://www.geeksforgeeks.org/create-your-own-secure-home-network-using-pi-hole-and-docker/) to install Pi-hole on Ubuntu. Unfortunately, VM 2 freezes irregularly every few hours, and so far I have had no success troubleshooting it. Interestingly, I also have a failover Pi-hole running on my homelab server (also on Docker), and that one is rock-solid.
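For context, the guide boils down to starting a single container along these lines (a sketch from memory - the ports, paths and variables here are illustrative, not my exact values):

```shell
# Roughly how the guide runs Pi-hole (values are illustrative)
docker run -d --name pihole \
  -p 53:53/tcp -p 53:53/udp \
  -p 80:80/tcp \
  -e TZ="Europe/Berlin" \
  -e WEBPASSWORD="changeme" \
  -v "$(pwd)/etc-pihole:/etc/pihole" \
  -v "$(pwd)/etc-dnsmasq.d:/etc/dnsmasq.d" \
  --restart unless-stopped \
  pihole/pihole:latest
```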

I tried to troubleshoot with syslog and kern.log on Ubuntu. These are the last entries before the freeze:

Code:
Jun 29 22:46:09 docker-essentials systemd[1]: run-docker-runtime\x2drunc-moby-e309dd491ff3cde934e7a1e9b2f2e183ecdb922fe3bd6d99e781b04081202a10-runc.t0xcBJ.mount: Deactivated successfully.
Jun 29 22:46:39 docker-essentials systemd[1]: run-docker-runtime\x2drunc-moby-e309dd491ff3cde934e7a1e9b2f2e183ecdb922fe3bd6d99e781b04081202a10-runc.QDGOAI.mount: Deactivated successfully.
Jun 29 22:47:09 docker-essentials systemd[1]: run-docker-runtime\x2drunc-moby-e309dd491ff3cde934e7a1e9b2f2e183ecdb922fe3bd6d99e781b04081202a10-runc.PXxlMG.mount: Deactivated successfully.
Jun 29 22:47:39 docker-essentials systemd[1]: run-docker-runtime\x2drunc-moby-e309dd491ff3cde934e7a1e9b2f2e183ecdb922fe3bd6d99e781b04081202a10-runc.SvRAlA.mount: Deactivated successfully.
Jun 29 22:48:09 docker-essentials systemd[1]: run-docker-runtime\x2drunc-moby-e309dd491ff3cde934e7a1e9b2f2e183ecdb922fe3bd6d99e781b04081202a10-runc.strj33.mount: Deactivated successfully.
Jun 29 22:48:39 docker-essentials systemd[1]: run-docker-runtime\x2drunc-moby-e309dd491ff3cde934e7a1e9b2f2e183ecdb922fe3bd6d99e781b04081202a10-runc.tdkCda.mount: Deactivated successfully.
Jun 29 22:49:10 docker-essentials systemd[1]: run-docker-runtime\x2drunc-moby-e309dd491ff3cde934e7a1e9b2f2e183ecdb922fe3bd6d99e781b04081202a10-runc.Pcu6rh.mount: Deactivated successfully.
Jun 29 22:49:40 docker-essentials systemd[1]: run-docker-runtime\x2drunc-moby-e309dd491ff3cde934e7a1e9b2f2e183ecdb922fe3bd6d99e781b04081202a10-runc.4kwDNY.mount: Deactivated successfully.
Jun 29 22:51:10 docker-essentials systemd[1]: run-docker-runtime\x2drunc-moby-e309dd491ff3cde934e7a1e9b2f2e183ecdb922fe3bd6d99e781b04081202a10-runc.cvupnU.mount: Deactivated successfully.
The last ~50 lines before the freeze all look like this.


Code:
Jun 29 22:58:48 docker-essentials kernel: [   11.872813] IPv6: ADDRCONF(NETDEV_CHANGE): veth00540dc: link becomes ready
Jun 29 22:58:48 docker-essentials kernel: [   11.872865] docker0: port 1(veth00540dc) entered blocking state
Jun 29 22:58:48 docker-essentials kernel: [   11.872869] docker0: port 1(veth00540dc) entered forwarding state
Jun 29 22:58:48 docker-essentials kernel: [   11.872896] IPv6: ADDRCONF(NETDEV_CHANGE): docker0: link becomes ready
Jun 29 22:58:48 docker-essentials kernel: [   11.910686] eth0: renamed from vetha7c4e76
Jun 29 22:58:48 docker-essentials kernel: [   11.925447] IPv6: ADDRCONF(NETDEV_CHANGE): veth7697b43: link becomes ready
Jun 29 22:58:48 docker-essentials kernel: [   11.925491] br-cbb501991c36: port 1(veth7697b43) entered blocking state
Jun 29 22:58:48 docker-essentials kernel: [   11.925494] br-cbb501991c36: port 1(veth7697b43) entered forwarding state
Jun 29 22:58:48 docker-essentials kernel: [   11.925522] IPv6: ADDRCONF(NETDEV_CHANGE): br-cbb501991c36: link becomes ready
Jun 29 23:04:04 docker-essentials kernel: [  327.023940] loop3: detected capacity change from 0 to 96160
Jun 29 23:04:05 docker-essentials kernel: [  327.897170] kauditd_printk_skb: 10 callbacks suppressed
Jun 29 23:04:05 docker-essentials kernel: [  327.897175] audit: type=1400 audit(1656543845.392:32): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/snap/snapd/16010/usr/lib/snapd/snap-confine" pid=3079 comm="apparmor_parser"
Jun 29 23:04:05 docker-essentials kernel: [  327.898324] audit: type=1400 audit(1656543845.392:33): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/snap/snapd/16010/usr/lib/snapd/snap-confine//mount-namespace-capture-helper" pid=3079 comm="apparmor_parser"
Jun 29 23:04:05 docker-essentials kernel: [  327.929722] audit: type=1400 audit(1656543845.424:34): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="snap-update-ns.lxd" pid=3081 comm="apparmor_parser"
Jun 29 23:04:05 docker-essentials kernel: [  327.940928] audit: type=1400 audit(1656543845.436:35): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="snap.lxd.activate" pid=3082 comm="apparmor_parser"
Jun 29 23:04:05 docker-essentials kernel: [  327.950531] audit: type=1400 audit(1656543845.448:36): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="snap.lxd.benchmark" pid=3083 comm="apparmor_parser"
Jun 29 23:04:05 docker-essentials kernel: [  327.960113] audit: type=1400 audit(1656543845.456:37): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="snap.lxd.buginfo" pid=3084 comm="apparmor_parser"
Jun 29 23:04:05 docker-essentials kernel: [  327.969642] audit: type=1400 audit(1656543845.464:38): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="snap.lxd.check-kernel" pid=3085 comm="apparmor_parser"
Jun 29 23:04:05 docker-essentials kernel: [  327.982677] audit: type=1400 audit(1656543845.480:39): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="snap.lxd.daemon" pid=3086 comm="apparmor_parser"
Jun 29 23:04:05 docker-essentials kernel: [  327.993803] audit: type=1400 audit(1656543845.488:40): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="snap.lxd.hook.configure" pid=3087 comm="apparmor_parser"
Jun 29 23:04:05 docker-essentials kernel: [  328.002560] audit: type=1400 audit(1656543845.500:41): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="snap.lxd.hook.install" pid=3088 comm="apparmor_parser"
Jun 29 23:04:07 docker-essentials kernel: [  329.540832] loop4: detected capacity change from 0 to 8
Jun 29 23:04:10 docker-essentials kernel: [  333.037343] kauditd_printk_skb: 17 callbacks suppressed
Jun 29 23:04:10 docker-essentials kernel: [  333.037347] audit: type=1400 audit(1656543850.532:59): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.lxc-to-lxd" pid=3303 comm="apparmor_parser"
Jun 29 23:04:10 docker-essentials kernel: [  333.223915] audit: type=1400 audit(1656543850.720:60): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.lxd" pid=3304 comm="apparmor_parser"
Jun 29 23:04:10 docker-essentials kernel: [  333.406204] audit: type=1400 audit(1656543850.900:61): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.migrate" pid=3306 comm="apparmor_parser"
Jun 29 23:04:11 docker-essentials kernel: [  333.641959] audit: type=1400 audit(1656543851.136:62): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.user-daemon" pid=3307 comm="apparmor_parser"
Jun 29 23:04:11 docker-essentials kernel: [  333.649343] audit: type=1400 audit(1656543851.144:63): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="snap-update-ns.lxd" pid=3309 comm="apparmor_parser"
Jun 29 23:04:17 docker-essentials kernel: [  340.436745] loop4: detected capacity change from 0 to 126824

Also, just before the VM froze, I took the following screenshot of the Proxmox console GUI:

[Screenshot attachment: Screen Shot 2022-06-29 at 8.30.38 PM.png]

Interestingly, when I cat syslog on the pihole-failover VM (which runs on a different server), it shows a very similar message:

Code:
Jun 29 19:52:26 docker-internal systemd[1]: run-docker-runtime\x2drunc-moby-6ee5ef6eaf490c72d3cd159875d64e02736a38d6cae095b91574fcf309c816c5-runc.fZzgRh.mount: Succeeded.
Jun 29 19:52:48 docker-internal systemd[1]: run-docker-runtime\x2drunc-moby-63234eaffbbf4d5c9236c82f3a77804081223f52c803bee6a374f14c0652f9cd-runc.Sgt0Z1.mount: Succeeded.
Jun 29 19:52:56 docker-internal systemd[1]: run-docker-runtime\x2drunc-moby-6ee5ef6eaf490c72d3cd159875d64e02736a38d6cae095b91574fcf309c816c5-runc.TewEWn.mount: Succeeded.
Jun 29 19:53:27 docker-internal systemd[1]: run-docker-runtime\x2drunc-moby-6ee5ef6eaf490c72d3cd159875d64e02736a38d6cae095b91574fcf309c816c5-runc.zvoXqg.mount: Succeeded.
Jun 29 19:54:18 docker-internal systemd[1]: run-docker-runtime\x2drunc-moby-63234eaffbbf4d5c9236c82f3a77804081223f52c803bee6a374f14c0652f9cd-runc.r4uOkZ.mount: Succeeded.
Jun 29 19:54:27 docker-internal systemd[1]: run-docker-runtime\x2drunc-moby-6ee5ef6eaf490c72d3cd159875d64e02736a38d6cae095b91574fcf309c816c5-runc.b6k0cn.mount: Succeeded.
Jun 29 19:55:18 docker-internal systemd[1]: run-docker-runtime\x2drunc-moby-63234eaffbbf4d5c9236c82f3a77804081223f52c803bee6a374f14c0652f9cd-runc.S6LFA2.mount: Succeeded.
Jun 29 19:55:27 docker-internal systemd[1]: run-docker-runtime\x2drunc-moby-6ee5ef6eaf490c72d3cd159875d64e02736a38d6cae095b91574fcf309c816c5-runc.theqaA.mount: Succeeded.
Jun 29 19:56:17 docker-internal systemd[1]: run-docker-runtime\x2drunc-moby-ff503080b2856226d5021da394bc6c7eb5322a0fff3299edc3a26e99f49a9fca-runc.ZQg3Ar.mount: Succeeded.
Jun 29 19:56:48 docker-internal systemd[1]: run-docker-runtime\x2drunc-moby-63234eaffbbf4d5c9236c82f3a77804081223f52c803bee6a374f14c0652f9cd-runc.W0g7YX.mount: Succeeded.

Code:
agent: 1
balloon: 0
boot: order=scsi0;ide2;net0
cores: 1
cpu: kvm64,flags=-aes
ide2: none,media=cdrom
memory: 1024
meta: creation-qemu=6.2.0,ctime=1656517258
name: docker-essentials
net0: virtio=D6:D8:88:0B:6F:36,bridge=vmbr0,firewall=1,tag=30
numa: 0
onboot: 1
ostype: l26
parent: de_20220628_3
scsi0: local-lvm:vm-140-disk-0,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=8d542a7b-8c8f-4053-abe9-1287fc9d7bdb
sockets: 1
vmgenid: 5ad56922-6df1-4e6e-822f-5b49b1415f15

Code:
proxmox-ve: 7.2-1 (running kernel: 5.15.30-2-pve)
pve-manager: 7.2-3 (running version: 7.2-3/c743d6c1)
pve-kernel-helper: 7.2-2
pve-kernel-5.15: 7.2-1
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph-fuse: 15.2.16-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-8
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.1-6
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-1
libpve-storage-perl: 7.2-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.12-1
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.1.8-1
proxmox-backup-file-restore: 2.1.8-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-10
pve-cluster: 7.2-1
pve-container: 4.2-1
pve-docs: 7.2-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.4-1
pve-ha-manager: 3.3-4
pve-i18n: 2.7-1
pve-qemu-kvm: 6.2.0-5
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-2
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.4-pve1

Code:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 156
Model name: Intel(R) Celeron(R) N5105 @ 2.00GHz
Stepping: 0
CPU MHz: 2000.000
CPU max MHz: 2900.0000
CPU min MHz: 800.0000
BogoMIPS: 3993.60
Virtualization: VT-x
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache: 1.5 MiB
L3 cache: 4 MiB
NUMA node0 CPU(s): 0-3
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave rdrand lahf_lm 3dnowprefetch cpuid_fault epb cat_l2 cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust smep erms rdt_a rdseed smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req umip waitpkg gfni rdpid movdiri movdir64b md_clear flush_l1d arch_capabilities


On the freezing VM it says "Deactivated successfully.", whereas on the working one it says "Succeeded.". I do not know whether that is just a difference between systemd versions, but "Succeeded." sounds more positive.

I hope someone has an idea. Today I did a clean installation with the same components, settings and commands, with the same result.
 

Dunuin

Famous Member
Jun 30, 2020
Germany
Best to find out when the VM is crashing, and then look at what's written in the syslog at that time ("/var/log/syslog" on the PVE host; it might be a different file in your Ubuntu VM).
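For example, something along these lines (a sketch - adjust the time window to when the freeze actually happened):

```shell
# On the PVE host: pull the log window around the time the VM froze
journalctl --since "2022-06-29 22:40" --until "2022-06-29 23:00"

# Inside the Ubuntu VM: make the journal persistent so the tail of the
# previous boot survives a hard reset, then read it after the next freeze
sudo mkdir -p /var/log/journal
sudo systemctl restart systemd-journald
journalctl -b -1 -e   # jump to the end of the previous boot's journal
```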
 

thimplicity

Member
Feb 4, 2022
I updated the text above with the latest troubleshooting results; I hope someone has an idea. Based on another thread here in the forum, I also disabled RAM ballooning for the VM.

Edit: added the output of
  • cat /etc/pve/qemu-server/<vmid>.conf
  • pveversion -v
  • lscpu
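For reference, ballooning was disabled from the PVE host shell roughly like this (a sketch; 140 is the VMID from the config above):

```shell
# Disable the memory balloon device for VM 140 (run on the PVE host)
qm set 140 --balloon 0

# A full stop/start (not just a guest reboot) so the device change applies
qm stop 140
qm start 140
```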
 

sam007

New Member
Jun 28, 2022
Me too.
I have encountered this issue as well (the VM somehow freezes) on an Ubuntu 20.04 VM with kernel 5.4.
My OpenMediaVault and OpenWrt VMs are fine.
The Windows VM is fine.
PVE itself is fine.
I am also using the N5105 processor.

[Screenshot attachment: 1658806309706.png]
 

BarTouZ

Member
Jul 23, 2022
Hello,

I'm having exactly the same problem...

I migrated my VMs from an old Intel J1900 NUC to this new Intel N5105 box, and the VMs stop/freeze after varying durations.

Sometimes it happens after 1 hour, and sometimes it takes more than 13-14 hours... Have you found a solution?

For information, my pfSense VM is not affected, only the VMs running Ubuntu Server 20.04 with Docker...

Thank you for your help
 

thimplicity

Member
Feb 4, 2022
I only run pfSense for now. I had one additional reboot (not a freeze), but I have the feeling it has to do with temperatures. I now have a fan blowing onto the pfSense box, and there have been no more freezes or reboots since, though it has only been a day. I try to keep the temps below 50 °C; I read online that some people think the N5105 is easily irritated by higher temps. It might also be the build quality of the AliExpress box. It looks fine to me, but I am not an expert.

Overall, it seems that changing the CPU type to "host" and updating the distro helped. The only thing I do not understand is why a VM freezes or reboots while the host OS, in this case Proxmox, does not have any problems.

What are your pfSense settings? Maybe I can steal something to increase stability on my side.
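Concretely, the CPU-type change was made on the PVE host like this (a sketch; 140 is the VMID from the config I posted above, which showed "cpu: kvm64,flags=-aes"):

```shell
# Switch the guest CPU from the generic kvm64 model to host passthrough
qm set 140 --cpu host

# Stop/start the VM so the new CPU model takes effect
qm stop 140
qm start 140
```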
 

BarTouZ

Member
Jul 23, 2022
Like you, I have a fan under the N5105 box (it previously cooled my J1900)... the temperature is in the 50-55 °C range.

Moreover, the CPU is at less than 10% utilization (ridiculous), and here my VM still froze after 7 hours of use.

I also don't understand why the VM crashes but not Proxmox, which has now been running for 4 days with no problems...

As for the pfSense configuration, I will send you all of that when I return from vacation.
 

thimplicity

Member
Feb 4, 2022
My impression is that at least my N5105 is sensitive to higher temperatures. It has run fine since I put a fan directly on the box.
 

BarTouZ

Member
Jul 23, 2022
Hello,

Can you give the procedure to downgrade/upgrade the Ubuntu kernel, please?

Would this explain why pfSense works without too much problem, for once?

Thanks
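For reference, on Ubuntu 20.04 the usual way to move between kernel lines is via the kernel meta-packages (a sketch; not verified on this exact box):

```shell
# Move to the newer HWE kernel line (Ubuntu 20.04)
sudo apt update
sudo apt install --install-recommends linux-generic-hwe-20.04
sudo reboot

# To fall back, boot the older kernel from "Advanced options for Ubuntu"
# in the GRUB menu, then remove the HWE meta-package once satisfied
```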
 

thimplicity

Member
Feb 4, 2022
Hi everyone,
a little update:
- the Proxmox server has run rock-solid for 10 days now.
- the pfSense VM froze once after about 4 days, then rebooted itself after another 4 days.

I am not sure what else to try. I put a fan on top of the box, so temperature should not be the problem. It is also weird to think of this as a hardware problem, as Proxmox itself runs so well.
 

gyrex

Member
Jul 19, 2022
I think I've got the same or similar issue: https://forum.proxmox.com/threads/proxmox-vm-crash-freeze.113177/

Screenshot of my frozen console:

[Screenshot attachment: Screen Shot 2022-08-04 at 10.46.29 am.png]

This other thread seems to be demonstrating the same or similar issue too: https://forum.proxmox.com/threads/vms-freezing-randomly.113037/

Could this be a common issue?

I'm also running an Intel N5105 on my Proxmox server. I've changed my CPU governor to 'powersave' mode and the temperature sits at or around 55 °C, so I don't think it's a temperature issue. It has happened twice to me now. The VMs were originally running on a J4125 and would run for days without issue. Could this be an issue with the Linux kernel and the N5105 CPU?
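For reference, I set the governor via sysfs (a sketch; with the intel_pstate driver the available governors are typically just performance and powersave, and the setting does not persist across reboots):

```shell
# Show available governors and the current setting (per core)
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

# Set powersave on all cores (run as root)
echo powersave | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
```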

Code:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   39 bits physical, 48 bits virtual
CPU(s):                          4
On-line CPU(s) list:             0-3
Thread(s) per core:              1
Core(s) per socket:              4
Socket(s):                       1
NUMA node(s):                    1
Vendor ID:                       GenuineIntel
CPU family:                      6
Model:                           156
Model name:                      Intel(R) Celeron(R) N5105 @ 2.00GHz
Stepping:                        0
CPU MHz:                         2000.000
CPU max MHz:                     2900.0000
CPU min MHz:                     800.0000
BogoMIPS:                        3993.60
Virtualization:                  VT-x
L1d cache:                       128 KiB
L1i cache:                       128 KiB
L2 cache:                        1.5 MiB
L3 cache:                        4 MiB
NUMA node0 CPU(s):               0-3
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Mmio stale data:   Vulnerable: Clear CPU buffers attempted, no microcode; SMT disabled
Vulnerability Retbleed:          Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds:             Vulnerable: No microcode
Vulnerability Tsx async abort:   Not affected
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave rdrand lahf_lm 3dnowprefetch cpuid_fault epb cat_l2 cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust smep erms rdt_a rdseed smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req umip waitpkg gfni rdpid movdiri movdir64b md_clear flush_l1d arch_capabilities
 

thimplicity

Member
Feb 4, 2022
I'm also running an Intel N5105 on my Proxmox server. I've changed my CPU governor to 'powersave' mode and the temperature sits at or around 55C so I don't think it's a temperature issue.
How do you set this powersave mode?
 
