Hi, new to the forum and new to Proxmox here, but a longtime VMware user.
I purchased two Beelink U59 Celeron computers to replace my aging 12-year-old VMware server and installed Proxmox on them. I was able to migrate all my VMs over, but I keep running into issues with both machines.
I keep getting random reboots of VMs, VMs hanging, and Proxmox itself rebooting. I can't find anything in the kernel logs or syslog that would indicate why they crashed and rebooted.
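In case it's useful, this is roughly the kind of check I've been running, pieced together from other threads (the mkdir step is only needed if the journal isn't already persistent, which I haven't verified on these boxes):
Code:
# make the systemd journal persistent so it survives the crash/reboot
mkdir -p /var/log/journal
systemctl restart systemd-journald

# after the next crash, check the previous boot for warnings and errors
journalctl -b -1 -p warning -e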
The computers have:
- https://www.amazon.ca/gp/product/B09J4D6TMG
- Intel Celeron N5095 CPU
- 16 GB DDR4-2933 memory
- 512 GB M.2 SSD
- Realtek RTL8111/8168/8411 PCI Express Gigabit Ethernet controller
- AMI BIOS
- 12 V 3 A power supply (the one it came with)
I have tried tweaking some BIOS settings on one of them, such as disabling Turbo mode, and so far so good, though I'm not sure that's really the culprit.
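For what it's worth, this is how I've been double-checking from inside Proxmox that the turbo change actually took (assuming the intel_pstate driver is active on this CPU, which I haven't confirmed):
Code:
# 1 = turbo disabled; this file only exists when intel_pstate is in use
cat /sys/devices/system/cpu/intel_pstate/no_turbo
# with turbo off, max MHz should match the 2.00 GHz base clock
lscpu | grep 'CPU max MHz'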
I read some posts suggesting memtest86, but I just get a black screen when I try to boot it.
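Since the bootable memtest86 won't come up, my fallback plan is the userspace memtester package from within the running system (only a partial test, since it can't touch memory already in use; the 8G size is just a guess to leave the host some headroom):
Code:
apt install memtester
# lock and test 8 GiB of RAM for one pass
memtester 8G 1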
Does anyone have any suggestions for me to try? It is getting really annoying having my VMs and hypervisors reboot multiple times per day...
Code:
root@pve1:/var/log# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 156
Model name: Intel(R) Celeron(R) N5095 @ 2.00GHz
Stepping: 0
CPU MHz: 2000.000
CPU max MHz: 2000.0000
CPU min MHz: 800.0000
BogoMIPS: 3993.60
Virtualization: VT-x
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache: 1.5 MiB
L3 cache: 4 MiB
NUMA node0 CPU(s): 0-3
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT disabled
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Vulnerable: No microcode
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave rdrand lahf_lm 3dnowprefetch cpuid_fault epb cat_l2 cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust smep erms rdt_a rdseed smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect dtherm arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req umip waitpkg gfni rdpid movdiri movdir64b md_clear flush_l1d arch_capabilities
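One thing that jumps out at me in the lscpu output above is the "No microcode" next to the Srbds and Mmio stale data lines. If that could be related, my understanding is that the intel-microcode package would address it (it lives in Debian's non-free component, so sources.list needs that enabled, if I have this right):
Code:
# requires the non-free component in /etc/apt/sources.list
apt update
apt install intel-microcode
# reboot afterwards so the new microcode loads early in boot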
Code:
root@pve1:/var/log# pveversion -v
proxmox-ve: 7.2-1 (running kernel: 5.15.39-1-pve)
pve-manager: 7.2-7 (running version: 7.2-7/d0dd0e85)
pve-kernel-5.15: 7.2-6
pve-kernel-helper: 7.2-6
pve-kernel-5.15.39-1-pve: 5.15.39-1
pve-kernel-5.15.35-3-pve: 5.15.35-6
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph: 16.2.9-pve1
ceph-fuse: 16.2.9-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve1
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-3
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-2
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-3
libpve-storage-perl: 7.2-5
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.3-1
proxmox-backup-file-restore: 2.2.3-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-1
pve-container: 4.2-1
pve-docs: 7.2-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.4-2
pve-ha-manager: 3.3-4
pve-i18n: 2.7-2
pve-qemu-kvm: 6.2.0-11
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.4-pve1