segfault

  1. U

    PveDaemon SegFault / VNCProxy

    Original message here https://forum.proxmox.com/threads/proxmox-ve-8-3-released.157793/post-724494 - working on vm 201: Nov 27 15:20:15 pve pvedaemon[59586]: worker exit Nov 27 15:20:15 pve pvedaemon[4943]: worker 59586 finished Nov 27 15:20:15 pve pvedaemon[4943]: starting 1 worker(s) Nov 27...
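    A first diagnostic step for a crash like this (a generic sketch for a systemd-based PVE install, not from the original thread) is to pull the daemon's journal and the kernel's segfault lines together:

        # Full pvedaemon log for the current boot
        journalctl -u pvedaemon -b
        # Matching kernel-side segfault entries
        dmesg -T | grep -i segfault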
  2. U

    All LXC containers stopped but VMs and node still running

    I noticed this morning that all my containers were stopped (I have 8 of them running). The node itself was still running, and so was the VM I have on it. I spotted this in the logs. I can't seem to find any documentation on what the 'signal' result means, and is there a way to have...
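    On the 'signal' question: when a service or container init dies from a signal, logs often report only the raw number, which can be translated with kill -l (a generic sketch; 11 below is just an example value, not taken from the post):

        # Translate a signal number to its name; 11 -> SEGV
        kill -l 11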
  3. G

    Random segfaults within VM but memory looks good?

    I'm getting about 1-2 random segfaults per week in all kinds of processes. It happens in different QEMU VMs; it hasn't occurred on the Proxmox host itself or in LXC containers yet. I ran Memtest multiple times, for over 24 hours in total, with 0 errors. It's also kinda weird since it's a...
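    Memtest can miss errors that ECC hardware corrects silently at runtime, so it is worth checking live counters too (a sketch, assuming an EDAC driver is loaded for the memory controller):

        # Machine-check events in the kernel log
        dmesg | grep -iE 'mce|machine check'
        # Corrected-error counters per memory controller, if EDAC is active
        grep . /sys/devices/system/edac/mc/mc*/ce_count 2>/dev/null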
  4. VictorSTS

    Kernel segfault on host while using spice display in Linux VM

    Hello, I'm having a serious issue with a couple of Linux VMs (Ubuntu 20.04 Desktop, Linux Mint 20.1). They both have a spice display:
        agent: 1,fstrim_cloned_disks=1
        audio0: device=ich9-intel-hda,driver=spice
        boot: order=scsi0;ide2
        cores: 4
        cpu...
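    For reference, the quoted options map onto qm set roughly like this (a sketch using a hypothetical VMID 100; the vga line is implied by "spice display" rather than quoted in the post):

        # Reproduce the quoted spice/audio settings on a test VM
        qm set 100 --agent 1,fstrim_cloned_disks=1 \
                   --audio0 device=ich9-intel-hda,driver=spice \
                   --vga qxl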
  5. S

    Proxmox VE 7.1-11 free segfaults with excessive IO delay

    Hello, I have a Proxmox VE 7.1-11 host with LXC containers and I'm experiencing the following condition: sometimes the server seems to have excessive IO delay and the containers become inaccessible, with the following messages in the system log. Jun 2 04:35:04 proms kernel: [4423712.381603]...
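    To confirm the hangs really are IO pressure, the kernel's pressure-stall counters are a cheap first check (a sketch, assuming PSI is compiled into the kernel, as it is on PVE 7):

        # 'full' avg10 > 0 means all tasks were recently stalled on IO
        cat /proc/pressure/io
        # Per-device utilisation and wait times (sysstat package)
        iostat -x 1 5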
  6. V

    Segfault errors in log

    Hello, I just noticed that dmesg shows several segfault errors in the log on our HP DL360p G8. Is this anything we should worry about? Thanks! [1878477.675250] show_signal_msg: 6 callbacks suppressed [1878477.675252] PLUGIN[diskspac[32021]: segfault at 18 ip 00007f14c6f50116 sp...
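    Kernel segfault lines like this one end in an error code (truncated out of the snippet above) that decodes by bit; this is generic x86 page-fault behaviour, not specific to the quoted plugin:

        # Page-fault error code bits:
        #   bit 0: 0 = page not present, 1 = protection fault
        #   bit 1: 0 = read access,      1 = write access
        #   bit 2: 0 = kernel mode,      1 = user mode
        # e.g. "error 4" (0b100) = user-mode read of an unmapped page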
  7. C

    segfault and systemctl timeout

    My server has 20 days of uptime and suddenly it started to output these errors... May 13 14:38:02 pve kernel: [1821444.558571] pvesr[10643]: segfault at 45a8bdf5ad18 ip 000055a8bb86d53d sp 00007ffe11ddfce0 error 4 in perl[55a8bb7b3000+15d000] May 13 14:38:02 pve kernel: [1821444.558592] Code: f6...
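    The quoted line includes both the instruction pointer and perl's mapping base, so the crash offset inside the binary can be computed and, with debug symbols installed, resolved (a sketch using the numbers from the quoted log):

        # ip 0x55a8bb86d53d minus base 0x55a8bb7b3000 = offset 0xba53d
        printf '0x%x\n' $((0x55a8bb86d53d - 0x55a8bb7b3000))
        # Resolve the offset to a function name (needs perl debug symbols)
        addr2line -f -e /usr/bin/perl 0xba53d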
  8. ThinkAgain

    [SOLVED] PVE 6.1: ZFS Segfault upon system boot

    Hi, upon booting the system (only during boot so far), I get a segfault in ZFS: [ 17.905941] ZFS: Loaded module v0.8.3-pve1, ZFS pool version 5000, ZFS filesystem version 5 [...][ 20.804544] zfs[4387]: segfault at 0 ip 00007f565ddde694 sp 00007f5656ff5420 error 4 [ 20.804546] zfs[4379]...
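    Since this is the userspace zfs binary crashing (not the kernel module), a core dump can be captured and inspected after boot if systemd-coredump is installed (a sketch; the package is not installed by default on PVE):

        # List captured crashes of the zfs binary, then open one in gdb
        coredumpctl list zfs
        coredumpctl gdb zfs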
  9. A

    PVE 6.0-5: Corosync3 segfaults randomly on nodes

    Hey guys, we updated from PVE 5 to PVE 6 recently and noticed that nodes in our 4-node cluster leave the cluster randomly. pvecm status reports that CMAP cannot be initialized, so I had a look at corosync on the failed node, only to learn that it had obviously segfaulted. This happened on 3 of 4 cluster...
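    When pvecm status fails with the CMAP error, the corosync journal on the affected node usually shows the crash directly (generic commands, not from the original post):

        # Crash context and current state of the cluster services
        journalctl -u corosync -b
        systemctl status corosync pve-cluster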
  10. T

    segfault on all cluster nodes

    Hi forum, we run a six-node PVE/Ceph cluster with two corosync rings. Yesterday we had a segfault on all nodes at the same time, and every node rebooted. Feb 25 13:16:01 node1 systemd[1]: Started Proxmox VE replication runner. Feb 25 13:16:01 node1 pve-ha-crm[4427]: service 'vm:3064' without...
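    With two corosync rings, the per-ring link status on each node is worth checking after such an event (generic commands, not from the original post):

        # Show the status of both rings/links on this node
        corosync-cfgtool -s
        # Quorum and membership view
        pvecm status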
  11. fstrankowski

    Entire 12-node Proxmox cluster crashed / Segfault in cfs_loop

    Hello, over the weekend we experienced a massive crash of one of our Proxmox clusters. Out of nowhere, an entire cluster went down, all at the same time. Here is the excerpt from the messages log: Feb 24 07:25:59 PX20-WW-SN06 kernel: [1448261.497103] cfs_loop[12091]: segfault at 7fbb0bd266ac ip...
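    cfs_loop is a thread of pmxcfs, which the pve-cluster service provides, so after a crash like this the cluster filesystem is the first thing to verify (generic commands, a sketch):

        # Did pmxcfs come back cleanly after the reboot?
        systemctl status pve-cluster
        # Log of the previous boot, where the segfault happened
        journalctl -u pve-cluster -b -1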
  12. G

    Latest PVE KVM build causes glusterfs server crash.

    Running a glusterfs 3.12 server (Red Hat, latest stable version). I have a volume set up that a whole cluster of updated PVE 5.1 hosts uses. Any attempt to use qemu-img to create a qcow2 image immediately crashes the gluster server and brings the volume offline. Also any attempt to connect to an...
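    qemu-img reaches the volume over libgfapi in this setup, which is the path that reportedly triggers the crash; a minimal reproducer would look like this (hypothetical server and volume names):

        # Create a qcow2 image on the gluster volume via libgfapi
        qemu-img create -f qcow2 \
            gluster://gfs-server/vmstore/images/100/vm-100-disk-1.qcow2 32G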
