Recent content by robynhub

  1. Packages kept back on upgrade to 6.0 with glusterfs official repo

    Hello, thank you for your answer. I have this setup: 4 x NAS with Debian 10 and Gluster 6.4 as servers, 10 x Proxmox VE Enterprise 6.0-2 nodes, 10 Gbit networking, and about 350 VMs. Before upgrading to PVE 6.0 I had all nodes upgraded to Gluster 6.4, and the Gluster op.version of every share is...
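
    For reference, the cluster-wide op.version can be checked from any Gluster node with a one-liner (a minimal sketch, not tied to any particular volume):

      # gluster volume get all cluster.op-version
      # gluster volume get all cluster.max-op-version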
  2. Packages kept back on upgrade to 6.0 with glusterfs official repo

    Downgrading to Gluster 5.5 solved the issue, but now I can't access my repo. I really NEED to upgrade GlusterFS to 6.4 with Proxmox 6. With PVE 5.3 there aren't any issues. I just followed the official GlusterFS install guide for Debian buster. # aptitude dist-upgrade The following packages...
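
    For anyone following the same guide, the repo setup for buster boils down to roughly this (a sketch; the exact key URL and path under download.gluster.org are assumptions and should be taken from the official install guide):

      # wget -O - https://download.gluster.org/pub/gluster/glusterfs/6/rsa.pub | apt-key add -
      # echo deb https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/Debian/buster/amd64/apt buster main > /etc/apt/sources.list.d/gluster.list
      # apt-get update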
  3. Packages kept back on upgrade to 6.0 with glusterfs official repo

    Hello, after upgrading I have some packages kept back: ~# apt-get dist-upgrade Reading package lists... Done Building dependency tree Reading state information... Done Calculating upgrade... Done The following packages have been kept back: pve-qemu-kvm qemu-server spiceterm 0 upgraded, 0...
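
    A quick way to see why apt is holding those packages back is to try installing them explicitly, which prints the underlying dependency conflict (a generic troubleshooting sketch, not a fix):

      # apt-get install pve-qemu-kvm qemu-server spiceterm
      # apt-cache policy pve-qemu-kvm glusterfs-common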
  4. KVM Segfault latest community repo on libpthread

    I've managed to upgrade the cluster to version 4.0.2-2 without any downtime. I will let you know if other crashes happen. If I don't see any crash for a week, I will mark this thread as solved. Thank you for your support.
  5. KVM Segfault latest community repo on libpthread

    It's a separate cluster. In detail we have: -- Gluster Cluster -- 4 nodes, each with: Debian stretch - 32 GB RAM - Xeon E5-1620 - dual 10GbE NIC (LACP bonding) - Areca RAID controller with 24 disks (16 SAS 10K and 8 SATA) -- Proxmox Cluster -- 8 nodes, each with: Proxmox VE 5.1 (latest)...
  6. KVM Segfault latest community repo on libpthread

    I've only used the version shipped with the latest Proxmox VE ISO. Do you think I could upgrade only the clients instead of the whole Gluster cluster? The cluster is in production and there are ~200 VMs running on it. I can try to install the new Gluster client on a single node and...
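
    If it helps, the client-side Gluster version on a single PVE node can be checked before and after such a test (a minimal sketch):

      # glusterfs --version
      # dpkg -l | grep glusterfs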
  7. KVM Segfault latest community repo on libpthread

    I think so. In this cluster every VM has its disk on Gluster storage. This latest crash involves a VM that usually does a lot of IO, but I don't know if that could be related.
  8. KVM Segfault latest community repo on libpthread

    Here is the coredumpctl info: How could this help?
  9. KVM Segfault latest community repo on libpthread

    Hello fabian, thank you for the answer. debsums didn't find any errors or MD5 mismatches. I've installed the pve-qemu-kvm-dbg and systemd-coredump packages. Do you have any guide or documentation on how to use those packages? Thank you in advance.
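
    For reference, the usual workflow with systemd-coredump plus the -dbg symbols is roughly the following (a sketch; using /usr/bin/kvm as the match is an assumption, any PID from the list works too):

      # coredumpctl list
      # coredumpctl info /usr/bin/kvm
      # coredumpctl gdb /usr/bin/kvm
      (gdb) bt full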
  10. KVM Segfault latest community repo on libpthread

    Hello, lately we're experiencing many segfaults across multiple physical machines (Dell R620) that sporadically cause some VMs to crash: [433025.858682] kvm[3158]: segfault at 18 ip 00007feee18b8c70 sp 00007feece5e3e38 error 6 in libpthread-2.24.so[7feee18ab000+18000] We're using the latest...
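
    In case someone wants to check their own hosts, these segfault lines land in the kernel log and can simply be grepped for (a minimal sketch):

      # dmesg -T | grep -i segfault
      # journalctl -k | grep -i segfault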
  11. BUG: GFS2 Issue Kernel Panic when delete on a new fresh filesystem.

    Is it possible to get a more up-to-date version of gfs2-utils? Version 3.1.3 is full of bugs! I saw many errors in the locking protocol when trying to double-mount the filesystem (it locked the whole cluster!!!). Maybe I can downgrade to 3.1.0; I just need to figure out how... Thanks for the answer.
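
    The downgrade itself should just be an explicit version request to apt (a sketch; the version string is a placeholder and the available ones come from apt-cache policy):

      # apt-cache policy gfs2-utils
      # apt-get install gfs2-utils=<older-version>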
  12. BUG: GFS2 Issue Kernel Panic when delete on a new fresh filesystem.

    I've tried again with a real device (I used a 32 GB USB pen drive) and the issue doesn't show up. But the zeroed file is still there. I've delayed the printk messages and this is the panic: It seems solved with a real device. I've already mounted the production FS without issues. I'm still a bit scared...
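
    For completeness, one way to delay printk output so a panic stays readable is the printk_delay sysctl, in milliseconds per message, on kernels that expose it (a sketch; check your kernel docs before relying on it):

      # sysctl -w kernel.printk_delay=500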
  13. BUG: GFS2 Issue Kernel Panic when delete on a new fresh filesystem.

    Hey, I don't want to be offensive!!! STFU here means "stay tuned for updates"!!! :)