Help identify why server keeps having odd issues

DerpyFox

New Member
Apr 3, 2023
Code:
Jan 24 18:43:05 pve smartd[1397]: Device: /dev/sdb [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 76 to 77
Jan 24 18:43:05 pve smartd[1397]: Device: /dev/sda [SAT], state written to /var/lib/smartmontools/smartd.CT4000MX500SSD1-2305E6A3F28F.ata.state
Jan 24 18:43:05 pve smartd[1397]: Device: /dev/sdb [SAT], state written to /var/lib/smartmontools/smartd.CT4000MX500SSD1-2304E6A3D347.ata.state
Jan 24 18:43:05 pve smartd[1397]: Device: /dev/sdc [SAT], state written to /var/lib/smartmontools/smartd.CT4000MX500SSD1-2305E6A3F3E9.ata.state
Jan 24 18:43:05 pve smartd[1397]: Device: /dev/sdd [SAT], state written to /var/lib/smartmontools/smartd.CT4000MX500SSD1-2304E6A29619.ata.state
Jan 24 18:43:05 pve smartd[1397]: Device: /dev/nvme0, state written to /var/lib/smartmontools/smartd.WD_BLACK_SN850_Heatsink_500GB-211002800094.nvme.state
Jan 24 18:43:05 pve systemd[1]: Started smartmontools.service - Self Monitoring and Reporting Technology (SMART) Daemon.
Jan 24 18:43:06 pve systemd[1]: Finished pvebanner.service - Proxmox VE Login Banner.
Jan 24 18:43:06 pve kernel: vmbr0: port 1(enp5s0) entered blocking state
Jan 24 18:43:06 pve kernel: vmbr0: port 1(enp5s0) entered disabled state
Jan 24 18:43:06 pve kernel: r8169 0000:05:00.0 enp5s0: entered allmulticast mode
Jan 24 18:43:06 pve kernel: r8169 0000:05:00.0 enp5s0: entered promiscuous mode
Jan 24 18:43:06 pve kernel: Generic FE-GE Realtek PHY r8169-0-500:00: attached PHY driver (mii_bus:phy_addr=r8169-0-500:00, irq=MAC)
Jan 24 18:43:06 pve kernel: r8169 0000:05:00.0 enp5s0: Link is Down
Jan 24 18:43:06 pve kernel: vmbr0: port 1(enp5s0) entered blocking state
Jan 24 18:43:06 pve kernel: vmbr0: port 1(enp5s0) entered forwarding state
Jan 24 18:43:06 pve systemd[1]: Finished networking.service - Network initialization.
Jan 24 18:43:06 pve systemd[1]: Reached target network.target - Network.
Jan 24 18:43:06 pve systemd[1]: Reached target network-online.target - Network is Online.
Jan 24 18:43:06 pve systemd[1]: Starting chrony.service - chrony, an NTP client/server...
Jan 24 18:43:06 pve systemd[1]: Started lxc-monitord.service - LXC Container Monitoring Daemon.
Jan 24 18:43:06 pve systemd[1]: Starting lxc-net.service - LXC network bridge setup...
Jan 24 18:43:06 pve systemd[1]: open-iscsi.service - Login to default iSCSI targets was skipped because no trigger condition checks were met.
Jan 24 18:43:06 pve systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 24 18:43:06 pve systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 24 18:43:06 pve systemd[1]: Reached target pve-storage.target - PVE Storage Target.
Jan 24 18:43:06 pve systemd[1]: Starting postfix@-.service - Postfix Mail Transport Agent (instance -)...
Jan 24 18:43:06 pve systemd[1]: Starting rbdmap.service - Map RBD devices...
Jan 24 18:43:06 pve systemd[1]: Starting rpc-statd-notify.service - Notify NFS peers of a restart...
Jan 24 18:43:06 pve systemd[1]: Starting ssh.service - OpenBSD Secure Shell server...
Jan 24 18:43:06 pve systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 24 18:43:06 pve sm-notify[1585]: Version 2.6.2 starting
Jan 24 18:43:06 pve systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 24 18:43:06 pve systemd[1]: Finished rbdmap.service - Map RBD devices.
Jan 24 18:43:06 pve systemd[1]: Started rpc-statd-notify.service - Notify NFS peers of a restart.
Jan 24 18:43:06 pve systemd[1]: Finished blk-availability.service - Availability of block devices.
Jan 24 18:43:06 pve systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 24 18:43:06 pve systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 24 18:43:06 pve systemd[1]: Reached target getty.target - Login Prompts.
Jan 24 18:43:06 pve systemd[1]: Finished lxc-net.service - LXC network bridge setup.
Jan 24 18:43:06 pve systemd[1]: Starting lxc.service - LXC Container Initialization and Autoboot Code...
Jan 24 18:43:06 pve sshd[1594]: Server listening on 0.0.0.0 port 22.
Jan 24 18:43:06 pve sshd[1594]: Server listening on :: port 22.
Jan 24 18:43:06 pve systemd[1]: Started ssh.service - OpenBSD Secure Shell server.
Jan 24 18:43:06 pve audit[1604]: AVC apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="/usr/bin/lxc-copy" pid=1604 comm="apparmor_parser"
Jan 24 18:43:06 pve audit[1609]: AVC apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="/usr/bin/lxc-start" pid=1609 comm="apparmor_parser"
Jan 24 18:43:06 pve chronyd[1617]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jan 24 18:43:06 pve chronyd[1617]: Frequency -23.873 +/- 0.123 ppm read from /var/lib/chrony/chrony.drift
Jan 24 18:43:06 pve chronyd[1617]: Using right/UTC timezone to obtain leap second data
Jan 24 18:43:06 pve audit[1615]: AVC apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxc-container-default" pid=1615 comm="apparmor_parser"
Jan 24 18:43:06 pve audit[1615]: AVC apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxc-container-default-cgns" pid=1615 comm="apparmor_parser"
Jan 24 18:43:06 pve audit[1615]: AVC apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxc-container-default-with-mounting" pid=1615 comm="apparmor_parser"
Jan 24 18:43:06 pve audit[1615]: AVC apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxc-container-default-with-nesting" pid=1615 comm="apparmor_parser"
Jan 24 18:43:06 pve chronyd[1617]: Loaded seccomp filter (level 1)
Jan 24 18:43:06 pve systemd[1]: Started chrony.service - chrony, an NTP client/server.
Jan 24 18:43:06 pve systemd[1]: Reached target time-sync.target - System Time Synchronized.
Jan 24 18:43:06 pve systemd[1]: Started apt-daily.timer - Daily apt download activities.
Jan 24 18:43:06 pve systemd[1]: Started apt-daily-upgrade.timer - Daily apt upgrade and clean activities.
Jan 24 18:43:06 pve systemd[1]: Started dpkg-db-backup.timer - Daily dpkg database backup timer.
Jan 24 18:43:06 pve systemd[1]: Started e2scrub_all.timer - Periodic ext4 Online Metadata Check for All Filesystems.
Jan 24 18:43:06 pve systemd[1]: Started fstrim.timer - Discard unused blocks once a week.
Jan 24 18:43:06 pve systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 24 18:43:06 pve systemd[1]: Started man-db.timer - Daily man-db regeneration.
Jan 24 18:43:06 pve systemd[1]: Started pve-daily-update.timer - Daily PVE download activities.
Jan 24 18:43:06 pve systemd[1]: Reached target timers.target - Timer Units.
Jan 24 18:43:06 pve systemd[1]: Starting rrdcached.service - LSB: start or stop rrdcached...
Jan 24 18:43:06 pve systemd[1]: Finished lxc.service - LXC Container Initialization and Autoboot Code.
Jan 24 18:43:06 pve rrdcached[1630]: rrdcached started.
Jan 24 18:43:06 pve systemd[1]: Started rrdcached.service - LSB: start or stop rrdcached.
Jan 24 18:43:06 pve systemd[1]: Starting pve-cluster.service - The Proxmox VE cluster filesystem...
Jan 24 18:43:06 pve postfix[1675]: Postfix is using backwards-compatible default settings
Jan 24 18:43:06 pve postfix[1675]: See http://www.postfix.org/COMPATIBILITY_README.html for details
Jan 24 18:43:06 pve postfix[1675]: To disable backwards compatibility use "postconf compatibility_level=3.6" and "postfix reload"
Jan 24 18:43:06 pve pmxcfs[1674]: [main] notice: resolved node name 'pve' to '10.0.0.5' for default node IP address
Jan 24 18:43:06 pve pmxcfs[1674]: [main] notice: resolved node name 'pve' to '10.0.0.5' for default node IP address
Jan 24 18:43:07 pve postfix/postfix-script[1771]: starting the Postfix mail system
Jan 24 18:43:07 pve postfix/master[1773]: daemon started -- version 3.7.11, configuration /etc/postfix
Jan 24 18:43:07 pve systemd[1]: Started postfix@-.service - Postfix Mail Transport Agent (instance -).
Jan 24 18:43:07 pve systemd[1]: Starting postfix.service - Postfix Mail Transport Agent...
Jan 24 18:43:07 pve systemd[1]: Finished postfix.service - Postfix Mail Transport Agent.
Jan 24 18:43:07 pve kernel: vmbr0: port 1(enp5s0) entered disabled state
Jan 24 18:43:07 pve systemd[1]: Started pve-cluster.service - The Proxmox VE cluster filesystem.
Jan 24 18:43:07 pve systemd[1]: corosync.service - Corosync Cluster Engine was skipped because of an unmet condition check (ConditionPathExists=/etc/corosync/corosync.conf).
Jan 24 18:43:07 pve systemd[1]: Started cron.service - Regular background program processing daemon.
Jan 24 18:43:07 pve systemd[1]: Started proxmox-firewall.service - Proxmox nftables firewall.
Jan 24 18:43:07 pve cron[1780]: (CRON) INFO (pidfile fd = 3)
Jan 24 18:43:07 pve systemd[1]: Starting pve-firewall.service - Proxmox VE firewall...
Jan 24 18:43:07 pve cron[1780]: (CRON) INFO (Running @reboot jobs)
Jan 24 18:43:07 pve systemd[1]: Starting pvedaemon.service - PVE API Daemon...
Jan 24 18:43:07 pve systemd[1]: Starting pvestatd.service - PVE Status Daemon...
Jan 24 18:43:08 pve pve-firewall[1791]: starting server
Jan 24 18:43:08 pve systemd[1]: Started pve-firewall.service - Proxmox VE firewall.
Jan 24 18:43:08 pve pvestatd[1798]: starting server
Jan 24 18:43:08 pve systemd[1]: Started pvestatd.service - PVE Status Daemon.
Jan 24 18:43:08 pve pvedaemon[1818]: starting server
Jan 24 18:43:08 pve pvedaemon[1818]: starting 3 worker(s)
Jan 24 18:43:08 pve pvedaemon[1818]: worker 1819 started
Jan 24 18:43:08 pve pvedaemon[1818]: worker 1820 started
Jan 24 18:43:08 pve pvedaemon[1818]: worker 1821 started
Jan 24 18:43:08 pve systemd[1]: Started pvedaemon.service - PVE API Daemon.
Jan 24 18:43:08 pve systemd[1]: Starting pve-ha-crm.service - PVE Cluster HA Resource Manager Daemon...
Jan 24 18:43:08 pve systemd[1]: Starting pveproxy.service - PVE API Proxy Server...
Jan 24 18:43:08 pve kernel: r8169 0000:05:00.0 enp5s0: Link is Up - 1Gbps/Full - flow control off
Jan 24 18:43:08 pve kernel: vmbr0: port 1(enp5s0) entered blocking state
Jan 24 18:43:08 pve kernel: vmbr0: port 1(enp5s0) entered forwarding state
Jan 24 18:43:08 pve pve-ha-crm[1827]: starting server
Jan 24 18:43:08 pve pve-ha-crm[1827]: status change startup => wait_for_quorum
Jan 24 18:43:09 pve systemd[1]: Started pve-ha-crm.service - PVE Cluster HA Resource Manager Daemon.
Jan 24 18:43:09 pve pveproxy[1828]: starting server
Jan 24 18:43:09 pve pveproxy[1828]: starting 3 worker(s)
Jan 24 18:43:09 pve pveproxy[1828]: worker 1829 started
Jan 24 18:43:09 pve pveproxy[1828]: worker 1830 started
Jan 24 18:43:09 pve pveproxy[1828]: worker 1831 started
Jan 24 18:43:09 pve systemd[1]: Started pveproxy.service - PVE API Proxy Server.
Jan 24 18:43:09 pve systemd[1]: Starting pve-ha-lrm.service - PVE Local HA Resource Manager Daemon...
Jan 24 18:43:09 pve systemd[1]: Starting spiceproxy.service - PVE SPICE Proxy Server...
Jan 24 18:43:09 pve spiceproxy[1834]: starting server
Jan 24 18:43:09 pve spiceproxy[1834]: starting 1 worker(s)
Jan 24 18:43:09 pve spiceproxy[1834]: worker 1835 started
Jan 24 18:43:09 pve systemd[1]: Started spiceproxy.service - PVE SPICE Proxy Server.
Jan 24 18:43:09 pve pve-ha-lrm[1839]: starting server
Jan 24 18:43:09 pve pve-ha-lrm[1839]: status change startup => wait_for_agent_lock
Jan 24 18:43:09 pve systemd[1]: Started pve-ha-lrm.service - PVE Local HA Resource Manager Daemon.
Jan 24 18:43:09 pve systemd[1]: Starting pve-guests.service - PVE guests...
Jan 24 18:43:10 pve pve-guests[1841]: <root@pam> starting task UPID:pve:00000732:000005B7:6794258E:startall::root@pam:
Jan 24 18:43:10 pve pvesh[1841]: Starting VM 100
Jan 24 18:43:10 pve pve-guests[1842]: <root@pam> starting task UPID:pve:00000733:000005B8:6794258E:qmstart:100:root@pam:
Jan 24 18:43:10 pve pve-guests[1843]: start VM 100: UPID:pve:00000733:000005B8:6794258E:qmstart:100:root@pam:
Jan 24 18:43:10 pve systemd[1]: Created slice qemu.slice - Slice /qemu.
Jan 24 18:43:10 pve systemd[1]: Started 100.scope.
Jan 24 18:43:11 pve kernel: tap100i0: entered promiscuous mode
Jan 24 18:43:11 pve kernel: vmbr0: port 2(tap100i0) entered blocking state
Jan 24 18:43:11 pve kernel: vmbr0: port 2(tap100i0) entered disabled state
Jan 24 18:43:11 pve kernel: tap100i0: entered allmulticast mode
Jan 24 18:43:11 pve kernel: vmbr0: port 2(tap100i0) entered blocking state
Jan 24 18:43:11 pve kernel: vmbr0: port 2(tap100i0) entered forwarding state
Jan 24 18:43:11 pve pve-guests[1843]: VM 100 started with PID 1853.
Jan 24 18:43:14 pve pve-guests[1841]: <root@pam> end task UPID:pve:00000732:000005B7:6794258E:startall::root@pam: OK
Jan 24 18:43:14 pve systemd[1]: Finished pve-guests.service - PVE guests.
Jan 24 18:43:14 pve systemd[1]: Starting pvescheduler.service - Proxmox VE scheduler...
Jan 24 18:43:14 pve pvescheduler[1893]: starting server
Jan 24 18:43:14 pve systemd[1]: Started pvescheduler.service - Proxmox VE scheduler.
Jan 24 18:43:14 pve systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 24 18:43:14 pve systemd[1]: Reached target graphical.target - Graphical Interface.
Jan 24 18:43:15 pve systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP...
Jan 24 18:43:15 pve systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 24 18:43:15 pve systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP.
Jan 24 18:43:15 pve systemd[1]: Startup finished in 17.317s (firmware) + 5.835s (loader) + 7.607s (kernel) + 11.566s (userspace) = 42.327s.
Jan 24 18:43:15 pve pveproxy[1831]: proxy detected vanished client connection
Jan 24 18:43:16 pve pvedaemon[1821]: <root@pam> successful auth for user 'root@pam'
Jan 24 18:43:16 pve chronyd[1617]: Selected source 173.71.68.71 (2.debian.pool.ntp.org)
Jan 24 18:43:16 pve chronyd[1617]: System clock TAI offset set to 37 seconds
Jan 24 18:43:17 pve kernel: kvm: kvm [1853]: ignored rdmsr: 0x3a data 0x0
Jan 24 18:43:17 pve kernel: kvm: kvm [1853]: ignored rdmsr: 0xd90 data 0x0
Jan 24 18:43:17 pve kernel: kvm: kvm [1853]: ignored rdmsr: 0x122 data 0x0
Jan 24 18:43:17 pve kernel: kvm: kvm [1853]: ignored rdmsr: 0x570 data 0x0
Jan 24 18:43:17 pve kernel: kvm: kvm [1853]: ignored rdmsr: 0x571 data 0x0
Jan 24 18:43:17 pve kernel: kvm: kvm [1853]: ignored rdmsr: 0x572 data 0x0
Jan 24 18:43:17 pve kernel: kvm: kvm [1853]: ignored rdmsr: 0x560 data 0x0
Jan 24 18:43:17 pve kernel: kvm: kvm [1853]: ignored rdmsr: 0x561 data 0x0
Jan 24 18:43:17 pve kernel: kvm: kvm [1853]: ignored rdmsr: 0x580 data 0x0
Jan 24 18:43:17 pve kernel: kvm: kvm [1853]: ignored rdmsr: 0x581 data 0x0


This is the system log I am getting. VMs crash at random, and sometimes the hypervisor stops the NIC entirely. It may be doing something worse as well, but I didn't have a monitor connected to see it.
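One way to see what happened before a crash without a monitor is to read the previous boot's journal once the host is back up. A minimal sketch, assuming a persistent journal (i.e. /var/log/journal exists or Storage=persistent is set in /etc/systemd/journald.conf):

Code:
# List the boots the journal knows about; the boot that crashed is usually index -1
journalctl --list-boots

# Show warning-and-worse messages from the previous boot
journalctl -b -1 -p warning

The log above only shows a clean startup, so the kernel or r8169 errors of interest would be in the journal of the boot that actually crashed.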
 
Hopefully this tells you a lot... I ran a memory test, and when it got to test 2 the machine immediately crashed. Every single time.
 
DerpyFox said: "Hopefully this tells you a lot... I ran a memory test, and when it got to test 2 the machine immediately crashed. Every single time."
Either the memory is bad (crashing the CPU), or the fault is in the motherboard or the CPU itself. Test your memory DIMMs one at a time by physically removing the others (but make sure to use the right memory slot; which one depends on the motherboard, so check the manual). If they all appear bad, the slot itself may be the problem; test each module in one of the other slots.
This is not Proxmox (or even Linux) specific, and there are guides on PC hardware troubleshooting on the internet.
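If it helps before opening the case: dmidecode can map which slots are populated, and on platforms that expose EDAC the kernel's error counters will confirm RAM errors. A rough sketch (run as root; the EDAC files only exist if the hardware and driver support them):

Code:
# Show each memory device with its slot locator, size, and part number
dmidecode -t memory | grep -E 'Locator|Size|Part Number'

# Nonzero corrected/uncorrected error counts point at bad RAM or a bad slot
grep -H . /sys/devices/system/edac/mc/mc*/ce_count /sys/devices/system/edac/mc/mc*/ue_count 2>/dev/null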
 
