Host hard crashes, PVE 8.1.4

I have a similar N100 mini PC from CWWK, but the dual-NIC version, and can confirm a similar issue. It just crashes during the night, and the only way to bring it back is a reboot. The funny thing is that it only happens on Proxmox/Debian; the same box ran rock solid on Unraid for 30 days (trial) with basically the same load, with a bunch of containers running.
 
For a long time I suspected heat issues, as the device is cooled passively, but removing the hood for better air circulation did not improve anything.

I started the device again, keeping an eye on the log. When starting a Windows VM installation, a blue screen occurred; in the log: "CPU locked".
Any relevant conclusions here?

I'm currently testing Proxmox 7.4 on the buggy unit; could you please do the same so that we can share results?
@Skedaddle do you have time to run the test in parallel?
@mixedd please let us know the pveversion and kernel. I urge you to try installing an older kernel too, so that we can test in parallel. I will continue testing over the weekend with 5.15.136 under PVE 7.4.17 as well as with the 'newest' 6.2 on PVE 8.1.
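For anyone who wants to follow along with the kernel downgrade, the rough procedure on a Proxmox host looks like this (only a sketch; the exact package name depends on your PVE major version, and the version string passed to the pin command must match what the kernel list shows):

Code:
# list the kernels the boot loader knows about
proxmox-boot-tool kernel list

# on PVE 7.x the 5.15 series is the default package
# (PVE 8.x ships proxmox-kernel-6.x packages instead)
apt install pve-kernel-5.15

# pin one specific version so the host keeps booting it (example version string)
proxmox-boot-tool kernel pin 5.15.136-1-pve

reboot
uname -r   # verify the running kernel after the reboot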
 
Actually I'm still testing and trying to reproduce that issue.
So far I changed enp0 to autostart (previously autostart was enabled only on vmbr0), and following a thread where someone had issues with jumbo frames, I disabled them on my switch in the UniFi controller.

If I am able to reproduce this issue, I will post all the relevant info I can find. I need to check tomorrow morning, after the scheduled nightly Plex tasks and other jobs have run, as in the first case.
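If anyone wants to rule out jumbo frames on the host side as well, a quick check with iproute2 looks like this (sketch; enp1s0/vmbr0 are placeholders for your actual NIC and bridge names):

Code:
# show the current MTU on the physical NIC and on the bridge
ip link show enp1s0
ip link show vmbr0

# temporarily force the standard 1500-byte MTU for testing
ip link set dev vmbr0 mtu 1500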
 
So in the end my issue turned out to be power-supply related. I managed to reproduce it a couple of times during the nightly Plex library scan, where it also does intro/credit detection, in other words moderate/heavy load for that little N100 box. When the unit was on my table for a repaste (I thought it was overheating at first), I thought to look at the PSU and was shocked that it's rated for only 35W; combine 15W from the N100 at full load with an older 7200rpm SATA disk and that looks like a fine recipe for disaster.
 
New to Proxmox here and having similar issues with a Topton N100 mini PC firewall, 32GB RAM, 1TB NVMe. Initially it was repeatedly crashing within a few minutes when running HASSIO, which was fixed by disabling split lock detection. After this fix things improved: it will run for days at a time (longest run 10 days) and then hang, needing a power cycle. Temps are 48-51°C. Topton suggested I get a fan, which is on order, but after reading the comments here I don't think it will help. I tried disabling C-states, but the BIOS has no such option. I suspect it happens when the web GUI shell is open; I have to do more testing to confirm.

The system log shows hard and soft lockups, which doesn't seem too great. Any advice would be much appreciated; I don't know where to go from here except trying to return the unit, which is a shame since, other than not working, it checks all the boxes for my application.
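For reference, split lock detection (and, since the BIOS offers no C-state option, the deepest idle states) can also be limited from the kernel command line. This is only a sketch of the idea; adjust for GRUB vs. systemd-boot depending on how the host was installed:

Code:
# /etc/default/grub on a GRUB-booted host:
# split_lock_detect=off disables the split lock machinery,
# the two max_cstate options cap idle states from the OS side
GRUB_CMDLINE_LINUX_DEFAULT="quiet split_lock_detect=off intel_idle.max_cstate=1 processor.max_cstate=1"

# apply and reboot
update-grub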

Mar 24 18:28:23 proxmox kernel: CPU: 0 PID: 224936 Comm: pvedaemon worke Tainted: P B D O 6.5.13-1-pve #1
Mar 24 18:28:23 proxmox kernel: Hardware name: Default string Default string/Default string, BIOS 5.27 09/28/2023
Mar 24 18:28:23 proxmox kernel: Call Trace:
Mar 24 18:28:23 proxmox kernel: <TASK>
Mar 24 18:28:23 proxmox kernel: dump_stack_lvl+0x48/0x70
Mar 24 18:28:23 proxmox kernel: dump_stack+0x10/0x20
Mar 24 18:28:23 proxmox kernel: __schedule_bug+0x64/0x80
Mar 24 18:28:23 proxmox kernel: __schedule+0x100d/0x1440
Mar 24 18:28:23 proxmox kernel: ? vprintk+0x42/0x80
Mar 24 18:28:23 proxmox kernel: ? _printk+0x60/0x90
Mar 24 18:28:23 proxmox kernel: do_task_dead+0x44/0x50
Mar 24 18:28:23 proxmox kernel: make_task_dead+0x15a/0x180
Mar 24 18:28:23 proxmox kernel: rewind_stack_and_make_dead+0x17/0x20
Mar 24 18:28:23 proxmox kernel: RIP: 0033:0x7dcfe3328349
Mar 24 18:28:23 proxmox kernel: Code: Unable to access opcode bytes at 0x7dcfe332831f.
Mar 24 18:28:23 proxmox kernel: RSP: 002b:00007ffc7e5cd588 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
Mar 24 18:28:23 proxmox kernel: RAX: ffffffffffffffda RBX: 00007dcfe34229e0 RCX: 00007dcfe3328349
Mar 24 18:28:23 proxmox kernel: RDX: 000000000000003c RSI: 00000000000000e7 RDI: 0000000000000000
Mar 24 18:28:23 proxmox kernel: RBP: 0000000000000000 R08: ffffffffffffff78 R09: 00007dcfe342dac0
Mar 24 18:28:23 proxmox kernel: R10: 00007dcfe325e320 R11: 0000000000000246 R12: 00007dcfe34229e0
Mar 24 18:28:23 proxmox kernel: R13: 00007dcfe34282e0 R14: 00000000000001ab R15: 00007dcfe34282c8
Mar 24 18:28:23 proxmox kernel: </TASK>
Mar 24 18:30:40 proxmox kernel: watchdog: Watchdog detected hard LOCKUP on cpu 1
Mar 24 18:30:40 proxmox kernel: Modules linked in: udp_diag tcp_diag inet_diag cfg80211 veth ebtable_filter ebtables ip_set ip6table_raw iptable_raw ip6table_filter ip6_tables iptable_filter bpfilter nf_tables bonding tls sunrpc nfnetlink_log binfmt_misc nfnetlink intel_rapl_msr intel_rapl_common x86_pkg_temp_thermal intel_powerclamp snd_hda_codec_hdmi coretemp kvm_intel kvm snd_sof_pci_intel_tgl snd_sof_intel_hda_common soundwire_intel irqbypass crct10dif_pclmul polyval_clmulni polyval_generic snd_sof_intel_hda_mlink ghash_clmulni_intel sha256_ssse3 soundwire_cadence sha1_ssse3 aesni_intel snd_sof_intel_hda snd_sof_pci crypto_simd i915 cryptd snd_sof_xtensa_dsp snd_sof snd_sof_utils snd_soc_hdac_hda snd_hda_ext_core snd_soc_acpi_intel_match mei_pxp mei_hdcp snd_soc_acpi soundwire_generic_allocation soundwire_bus snd_soc_core snd_compress ac97_bus snd_pcm_dmaengine snd_hda_intel snd_intel_dspcfg snd_intel_sdw_acpi snd_hda_codec rapl snd_hda_core snd_hwdep drm_buddy ttm snd_pcm drm_display_helper snd_timer intel_cstate wmi_bmof pcspkr cec
Mar 24 18:30:40 proxmox kernel: snd cmdlinepart rc_core soundcore spi_nor mei_me drm_kms_helper mtd mei i2c_algo_bit acpi_tad acpi_pad mac_hid zfs(PO) spl(O) vhost_net vhost vhost_iotlb tap drm efi_pstore dmi_sysfs ip_tables x_tables autofs4 btrfs blake2b_generic xor raid6_pq simplefb dm_thin_pool dm_persistent_data dm_bio_prison dm_bufio libcrc32c nvme crc32_pclmul xhci_pci xhci_pci_renesas nvme_core spi_intel_pci i2c_i801 spi_intel i2c_smbus nvme_common igc xhci_hcd ahci libahci video wmi
Mar 24 18:30:40 proxmox kernel: CPU: 1 PID: 2250658 Comm: qm Tainted: P B D W O 6.5.13-1-pve #1
Mar 24 18:30:40 proxmox kernel: Hardware name: Default string Default string/Default string, BIOS 5.27 09/28/2023
Mar 24 18:30:40 proxmox kernel: RIP: 0010:native_queued_spin_lock_slowpath+0x7f/0x2d0
Mar 24 18:30:40 proxmox kernel: Code: 00 00 f0 0f ba 2b 08 0f 92 c2 8b 03 0f b6 d2 c1 e2 08 30 e4 09 d0 3d ff 00 00 00 77 5f 85 c0 74 10 0f b6 03 84 c0 74 09 f3 90 <0f> b6 03 84 c0 75 f7 b8 01 00 00 00 66 89 03 5b 41 5c 41 5d 41 5e
Mar 24 18:30:40 proxmox kernel: RSP: 0018:ffffc0494a12ba90 EFLAGS: 00000002
Mar 24 18:30:40 proxmox kernel: RAX: 0000000000000001 RBX: ffff99b18f721050 RCX: 0000000000000000
Mar 24 18:30:40 proxmox kernel: RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffff99b18f721050
Mar 24 18:30:40 proxmox kernel: RBP: ffffc0494a12bab0 R08: 0000000000000000 R09: 0000000000000000
Mar 24 18:30:40 proxmox kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000246
Mar 24 18:30:40 proxmox kernel: R13: 0000000000000000 R14: ffffc0494a12bc60 R15: 0000000000000000
Mar 24 18:30:40 proxmox kernel: FS: 00007b7e4100f740(0000) GS:ffff99b8dfa80000(0000) knlGS:0000000000000000
Mar 24 18:30:40 proxmox kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Mar 24 18:30:40 proxmox kernel: CR2: 00007b7e3f20afc0 CR3: 00000003208aa000 CR4: 0000000000752ee0
Mar 24 18:30:40 proxmox kernel: PKRU: 55555554
Mar 24 18:30:40 proxmox kernel: Call Trace:
Mar 24 18:30:40 proxmox kernel: <NMI>
Mar 24 18:30:40 proxmox kernel: ? show_regs+0x6d/0x80
Mar 24 18:30:40 proxmox kernel: ? watchdog_hardlockup_check+0x10c/0x1e0
Mar 24 18:30:40 proxmox kernel: ? watchdog_overflow_callback+0x6b/0x80
Mar 24 18:30:40 proxmox kernel: ? __perf_event_overflow+0x119/0x380
Mar 24 18:30:40 proxmox kernel: ? perf_event_overflow+0x19/0x30
Mar 24 18:30:40 proxmox kernel: ? handle_pmi_common+0x175/0x3f0
Mar 24 18:30:40 proxmox kernel: ? intel_pmu_handle_irq+0x11f/0x480
Mar 24 18:30:40 proxmox kernel: ? perf_event_nmi_handler+0x2b/0x50
Mar 24 18:30:40 proxmox kernel: ? nmi_handle+0x5d/0x160
Mar 24 18:30:40 proxmox kernel: ? default_do_nmi+0x47/0x130
Mar 24 18:30:40 proxmox kernel: ? exc_nmi+0x1d5/0x2a0
Mar 24 18:30:40 proxmox kernel: ? end_repeat_nmi+0x16/0x67
Mar 24 18:30:40 proxmox kernel: ? native_queued_spin_lock_slowpath+0x7f/0x2d0
Mar 24 18:30:40 proxmox kernel: ? native_queued_spin_lock_slowpath+0x7f/0x2d0
Mar 24 18:30:40 proxmox kernel: ? native_queued_spin_lock_slowpath+0x7f/0x2d0
Mar 24 18:30:40 proxmox kernel: </NMI>
Mar 24 18:30:40 proxmox kernel: <TASK>
Mar 24 18:30:40 proxmox kernel: _raw_spin_lock_irqsave+0x5c/0x80
Mar 24 18:30:40 proxmox kernel: folio_lruvec_lock_irqsave+0x60/0xa0
Mar 24 18:30:40 proxmox kernel: release_pages+0x269/0x4c0
Mar 24 18:30:40 proxmox kernel: ? unlink_anon_vmas+0x14b/0x1c0
Mar 24 18:30:40 proxmox kernel: free_pages_and_swap_cache+0x4a/0x60
Mar 24 18:30:40 proxmox kernel: tlb_batch_pages_flush+0x43/0x80
Mar 24 18:30:40 proxmox kernel: tlb_finish_mmu+0x73/0x1a0
Mar 24 18:30:40 proxmox kernel: unmap_region+0x119/0x160
Mar 24 18:30:40 proxmox kernel: do_vmi_align_munmap+0x37f/0x550
Mar 24 18:30:40 proxmox kernel: do_vmi_munmap+0xdf/0x190
Mar 24 18:30:40 proxmox kernel: __vm_munmap+0xae/0x180
Mar 24 18:30:40 proxmox kernel: __x64_sys_munmap+0x27/0x40
Mar 24 18:30:40 proxmox kernel: do_syscall_64+0x58/0x90
Mar 24 18:30:40 proxmox kernel: ? exit_to_user_mode_prepare+0x39/0x190
Mar 24 18:30:40 proxmox kernel: ? irqentry_exit_to_user_mode+0x17/0x20
Mar 24 18:30:40 proxmox kernel: ? irqentry_exit+0x43/0x50
Mar 24 18:30:40 proxmox kernel: ? exc_page_fault+0x94/0x1b0
Mar 24 18:30:40 proxmox kernel: entry_SYSCALL_64_after_hwframe+0x6e/0xd8
Mar 24 18:30:40 proxmox kernel: RIP: 0033:0x7b7e4114f8f7
Mar 24 18:30:40 proxmox kernel: Code: 00 00 00 48 8b 15 09 05 0d 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 b8 0b 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d d9 04 0d 00 f7 d8 64 89 01 48
Mar 24 18:30:40 proxmox kernel: RSP: 002b:00007ffe9c87ca38 EFLAGS: 00000202 ORIG_RAX: 000000000000000b
Mar 24 18:30:40 proxmox kernel: RAX: ffffffffffffffda RBX: ffffffffffffff78 RCX: 00007b7e4114f8f7
Mar 24 18:30:40 proxmox kernel: RDX: 0000000000000000 RSI: 0000000000151000 RDI: 00007b7e3e6af000
Mar 24 18:30:40 proxmox kernel: RBP: 0000000000000016 R08: 0000000000151000 R09: 0000585b929b4180
Mar 24 18:30:40 proxmox kernel: R10: 606712f746acb496 R11: 0000000000000202 R12: 00007b7e41220820
Mar 24 18:30:40 proxmox kernel: R13: 0000585b929827a0 R14: 0000000000000151 R15: 00007b7e412222c8
Mar 24 18:30:40 proxmox kernel: </TASK>



Mar 24 18:31:08 proxmox kernel: watchdog: BUG: soft lockup - CPU#0 stuck for 52s! [CPU 2/KVM:958]
Mar 24 18:31:08 proxmox kernel: Modules linked in: udp_diag tcp_diag inet_diag cfg80211 veth ebtable_filter ebtables ip_set ip6table_raw iptable_raw ip6table_filter ip6_tables iptable_filter bpfilter nf_tables bonding tls sunrpc nfnetlink_log binfmt_misc nfnetlink intel_rapl_msr intel_rapl_common x86_pkg_temp_thermal intel_powerclamp snd_hda_codec_hdmi coretemp kvm_intel kvm snd_sof_pci_intel_tgl snd_sof_intel_hda_common soundwire_intel irqbypass crct10dif_pclmul polyval_clmulni polyval_generic snd_sof_intel_hda_mlink ghash_clmulni_intel sha256_ssse3 soundwire_cadence sha1_ssse3 aesni_intel snd_sof_intel_hda snd_sof_pci crypto_simd i915 cryptd snd_sof_xtensa_dsp snd_sof snd_sof_utils snd_soc_hdac_hda snd_hda_ext_core snd_soc_acpi_intel_match mei_pxp mei_hdcp snd_soc_acpi soundwire_generic_allocation soundwire_bus snd_soc_core snd_compress ac97_bus snd_pcm_dmaengine snd_hda_intel snd_intel_dspcfg snd_intel_sdw_acpi snd_hda_codec rapl snd_hda_core snd_hwdep drm_buddy ttm snd_pcm drm_display_helper snd_timer intel_cstate wmi_bmof pcspkr cec
Mar 24 18:31:08 proxmox kernel: snd cmdlinepart rc_core soundcore spi_nor mei_me drm_kms_helper mtd mei i2c_algo_bit acpi_tad acpi_pad mac_hid zfs(PO) spl(O) vhost_net vhost vhost_iotlb tap drm efi_pstore dmi_sysfs ip_tables x_tables autofs4 btrfs blake2b_generic xor raid6_pq simplefb dm_thin_pool dm_persistent_data dm_bio_prison dm_bufio libcrc32c nvme crc32_pclmul xhci_pci xhci_pci_renesas nvme_core spi_intel_pci i2c_i801 spi_intel i2c_smbus nvme_common igc xhci_hcd ahci libahci video wmi
 
I'm having similar issues: Topton N100 box, PVE 8.2.4. It seems to randomly hard lock up, no SSH response or anything. A hard reset restores it, but it's rough on some of my VMs and causes disk corruption. Any suggested diagnostics or tests I should run? I'm thinking memtest86, but I'm also hoping to configure the hardware watchdog, which I _think_ this model has.
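On the watchdog idea, a minimal sketch of what I would try, assuming the board exposes the common Intel iTCO watchdog (the module name and the setup are guesses for this box, and note that Proxmox's own watchdog-mux may already claim the device if HA is in use):

Code:
# check whether a hardware watchdog driver binds and a device node appears
modprobe iTCO_wdt
ls -l /dev/watchdog*
dmesg | grep -i watchdog

# install the userspace daemon that feeds the watchdog
# (opening /dev/watchdog without feeding it will eventually reset the host)
apt install watchdog
# in /etc/watchdog.conf set: watchdog-device = /dev/watchdog
systemctl enable --now watchdog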
 
I got another (3rd) identical N100 unit, this time specced with 8GB DDR5 and 128GB NVMe.

Reminder: out of the 2 initial N100 units, this is the spec of each unit:
Crucial P3 Plus 2000GB NVMe (ZFS)
Crucial BX500 SATA SSD (Boot drive, EXT4)
Crucial DDR5-4800 SODIMM 32GB

Of the 2 original ones, one is crashing sporadically and the other is 100% stable.
I swapped the components individually as well as all at once between the 2 units.
The issue does not migrate with the components but stays with 1 specific unit, which is why I presumed a faulty CPU or mainboard.
Uptime before a crash is 19-48h.

The first thing I did was to take the Crucial NVMe, SATA SSD and RAM from the unit that was crashing and put them in the brand new unit.
The second thing I did was to put the factory 8GB DDR5 and 128GB NVMe into the old unit that was crashing.

Result:
The new unit has the exact same issue with the Crucial NVMe, SATA SSD and RAM.
The old unit runs perfectly fine with the factory components, no crashes whatsoever.

Swapping the components between the new unit and the stable unit achieves nothing; the problem stays with the new unit.
The crashes still yield zero logs and still manifest the same way: no ping, no video, no serial, no SSH, nothing; only a manual reboot helps.

All 3 have the same BIOS version.
The 2 units that are crashing were tested with PVE 7.4 on the 5.15 kernel and they run fine, no crashes.
The moment I upgrade to PVE 8.x with a 6.x kernel, regardless of the version (8.0/8.1/8.2 and 6.2/6.5/6.8 kernels), the crashes start happening.
Also, installing a 6.x kernel on PVE 7.4 as an opt-in kernel starts crashing the unit after 19-48h.
For testing, as suggested by other threads, I also spun up Windows VMs on the ZFS drives, ran CrystalDiskMark and checked for OOM/ZFS logs; there are none. This type of test does not influence the uptime or accelerate the crash.

EDIT: as recommended by this thread I also have active cooling for the units now
 
My unit claims it has a hardware watchdog. I thought I had some success configuring it, but it just hard locked again after 2 weeks of uptime. The timing is really suspicious, coinciding with my scheduled backup job, but the HDD it backs up to (over USB) has its own PSU. So it shouldn't (?) be a current-draw thing?
My fan has arrived anyhow, so I'm going to try that. I'm switching backups to daily to try and trigger the issue.
 
Hello,
I have the same issue with the combination of the Intel N100 and a 32GB Crucial DDR5 module. I have installed lm-sensors and integrated the temperature readings into my Proxmox VE. I'm never over 50 degrees. I bought the KingNovy mini PC from Amazon, but it looks the same as yours, see attachment. The only difference is that they sell it with an external fan.

Sometimes the PC runs 3-4 days, sometimes it crashes twice a day.
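For anyone who wants the same temperature readout, the basic lm-sensors setup is just a couple of commands (the coretemp readings are what matter for the N100 package temperature):

Code:
apt install lm-sensors
sensors-detect   # answer the prompts, then load the suggested modules or reboot
sensors          # prints the coretemp/package temperatures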
 

Attachments

  • 7164eKQ2kpL._AC_SL1500_.jpg (photo of the mini PC)
Hello,
I have the CWWK X86 P5 mini PC, bought from Amazon. I added a 32GB Corsair Vengeance SODIMM DDR5 stick (1x32GB, 4800MHz, CL40, Intel XMP/iCUE compatible).
I installed Proxmox on it and noticed it was crashing every day. I tested the RAM with the Memtest86 tool and the test failed with a lot of errors. I guess the memory is to blame, because I replaced the 32GB DDR5 module with an 8GB DDR5 module and Proxmox has run perfectly for 15 days since. Perhaps the CWWK with the N100 processor is not compatible with 32GB? What do you think?
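Before physically swapping modules, it can also help to confirm what the firmware actually negotiated for the DIMM; a quick check (dmidecode field names vary slightly between versions):

Code:
apt install dmidecode
dmidecode --type memory | grep -E 'Size|Part Number|Configured Memory Speed'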
 
Hello all. I also have an N100 Proxmox system with frequent crashes. It used to crash every single day whenever any load was applied.
I have not had a crash for 4 days since changing the DDR5 RAM speed from 4800 to 4400 (fixed) in the BIOS.

Maybe other people can try this setting to see if it resolves some issues.
 
The uptime of both hosts is now 50, yes 50, days.

I have not changed anything in the config as I was still investigating.

When I booted them up 50 days ago, I was also doing some work on my switch. Long story short, today I found out that I had configured the wrong VLAN on the switch port where my Raspberry Pi sits, which is also my corosync qdevice. Therefore the cluster ran without the qdevice for 50 days without crashing.

I have now resolved the qdevice connectivity and am nervous to see if the crashes return.

Now I wonder how in the world the qdevice can make the host crash/freeze with no I/O, no IP networking, no video, nothing, and no logs.

Can anyone enlighten me?

Status up until a few minutes ago:

Code:
Oct 09 18:23:46 pve1 corosync-qdevice[1324]: Connect timeout
Oct 09 18:23:46 pve1 corosync-qdevice[1324]: Algorithm result vote is NACK
Oct 09 18:23:46 pve1 corosync-qdevice[1324]: Cast vote timer remains scheduled every 5000ms voting NACK.
Oct 09 18:23:46 pve1 corosync-qdevice[1324]: Trying connect to qnetd server 10.24.7.9:5403 (timeout = 8000ms)
Oct 09 18:23:49 pve1 corosync-qdevice[1324]: Can't connect to qnetd host. (-5986): Network address not available (in use?)
Oct 09 18:23:54 pve1 corosync-qdevice[1324]: Connect timeout
Oct 09 18:23:54 pve1 corosync-qdevice[1324]: Algorithm result vote is NACK
Oct 09 18:23:54 pve1 corosync-qdevice[1324]: Cast vote timer remains scheduled every 5000ms voting NACK.
Oct 09 18:23:54 pve1 corosync-qdevice[1324]: Trying connect to qnetd server 10.24.7.9:5403 (timeout = 8000ms)
Oct 09 18:23:57 pve1 corosync-qdevice[1324]: Can't connect to qnetd host. (-5986): Network address not available (in use?)
Oct 09 18:24:02 pve1 corosync-qdevice[1324]: Connect timeout
Oct 09 18:24:02 pve1 corosync-qdevice[1324]: Algorithm result vote is NACK
Oct 09 18:24:02 pve1 corosync-qdevice[1324]: Cast vote timer remains scheduled every 5000ms voting NACK.
Oct 09 18:24:02 pve1 corosync-qdevice[1324]: Trying connect to qnetd server 10.24.7.9:5403 (timeout = 8000ms)
Oct 09 18:24:05 pve1 corosync-qdevice[1324]: Can't connect to qnetd host. (-5986): Network address not available (in use?)
Oct 09 18:24:10 pve1 corosync-qdevice[1324]: Connect timeout
Oct 09 18:24:10 pve1 corosync-qdevice[1324]: Algorithm result vote is NACK
Oct 09 18:24:10 pve1 corosync-qdevice[1324]: Cast vote timer remains scheduled every 5000ms voting NACK.
Oct 09 18:24:10 pve1 corosync-qdevice[1324]: Trying connect to qnetd server 10.24.7.9:5403 (timeout = 8000ms)


Current uptime

Code:
root@pve1:~# uptime
 18:18:26 up 50 days, 21:44,  2 users,  load average: 0.12, 0.18, 0.16
root@pve1:~# ssh root@10.24.7.6
Linux pve2 6.8.8-2-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.8-2 (2024-06-24T09:00Z) x86_64


Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Fri Oct 11 16:19:33 2024 from 172.18.161.5
root@pve2:~# uptime
 18:18:37 up 50 days, 21:45,  1 user,  load average: 1.39, 1.34, 1.24

pvecm status as of today, counting down towards the crash that hopefully never happens again:

Code:
root@pve1:~# pvecm status
Cluster information
-------------------
Name:             HomeLab
Config Version:   3
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Fri Oct 11 18:25:25 2024
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          0x00000001
Ring ID:          1.1b4
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate Qdevice

Membership information
----------------------
    Nodeid      Votes    Qdevice Name
0x00000001          1    A,V,NMW 172.18.161.5 (local)
0x00000002          1    A,V,NMW 172.18.161.6
0x00000000          1            Qdevice
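For what it's worth, the qdevice side can also be queried directly, which may help narrow down whether it is even talking to the qnetd server when the crashes return (sketch; the tools ship with the corosync-qdevice and corosync-qnetd packages):

Code:
# on each PVE node: show qdevice state and connectivity to qnetd
corosync-qdevice-tool -s

# on the Raspberry Pi running qnetd: list connected cluster nodes
corosync-qnetd-tool -l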
 
After I purchased CWWK's N305 model and installed Win11 as a VM in PVE, it randomly froze. After reporting it and troubleshooting back and forth with the CWWK manufacturer, it was confirmed to be a problem with the power supply. The reason is that China's standard mains voltage is 220V, and the power supply they provide delivers insufficient power in countries where the standard is 110V. If you install Win11 directly, the symptom is a blue screen followed by a reboot. The solution is to replace the power supply with a 110V unit of sufficient wattage. I know it sounds incredible for a low-power CPU, but it happens.
 
