Search results

  1. Packages kept back on upgrade to 6.0 with glusterfs official repo

    Hello, thank you for your answer. I have this setup: 4 x NAS servers with Debian 10 and Gluster 6.4, 10 x Proxmox VE Enterprise 6.0-2 nodes, 10 Gbit networking, about 350 VMs. Before upgrading to PVE 6.0 I had all nodes upgraded to Gluster 6.4, and the Gluster op.version of every share is...
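
    For reference, a minimal sketch of how the cluster-wide op-version is usually checked and bumped on a setup like this; these are the standard GlusterFS commands and are not quoted from the thread:

        # query the op-version the cluster is currently running at
        gluster volume get all cluster.op-version
        # show the highest op-version the installed binaries support
        gluster volume get all cluster.max-op-version
        # once every server and client runs Gluster 6.x, raise it (60000 = Gluster 6)
        gluster volume set all cluster.op-version 60000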

  2. Packages kept back on upgrade to 6.0 with glusterfs official repo

    Downgrading to Gluster 5.5 solved the issue, but now I can't access my repo. I really NEED to upgrade GlusterFS to 6.4 with Proxmox 6. With PVE 5.3 there aren't any issues. I just followed the official GlusterFS install guide for Debian buster.
    # aptitude dist-upgrade
    The following packages...
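
    One way to keep the upstream GlusterFS 6.x packages preferred over the Debian/PVE ones is an apt pin; this is a hedged sketch, and the file name, package glob, and origin string are assumptions that must match the sources.list entry from the official install guide:

        Explanation: hypothetical pin file, e.g. /etc/apt/preferences.d/glusterfs;
        Explanation: prefer packages from the host named in the GlusterFS sources.list entry
        Package: glusterfs-*
        Pin: origin "download.gluster.org"
        Pin-Priority: 1001

    After adding the pin, apt policy glusterfs-client shows whether the intended 6.4 packages are now the install candidates.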

  3. Packages kept back on upgrade to 6.0 with glusterfs official repo

    Hello, after upgrading I have some packages kept back:
    ~# apt-get dist-upgrade
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Calculating upgrade... Done
    The following packages have been kept back:
      pve-qemu-kvm qemu-server spiceterm
    0 upgraded, 0...
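
    For reference, a rough sketch of the generic way kept-back packages get investigated on Debian; none of this output is from the thread:

        # asking apt to install a held package explicitly makes it print
        # the dependency it cannot satisfy instead of silently holding it back
        apt-get install pve-qemu-kvm qemu-server
        # compare candidate versions and the repositories they come from
        apt-cache policy pve-qemu-kvm qemu-server spiceterm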

  4. KVM Segfault latest community repo on libpthread

    I've managed to upgrade the cluster to version 4.0.2-2 without any downtime. I will let you know if other crashes happen. If I don't see any crash for a week, I will mark this thread as solved. Thank you for your support.

  5. KVM Segfault latest community repo on libpthread

    It's a separate cluster. In detail we have:
    -- Gluster Cluster --
    4 nodes, each with: Debian stretch, 32 GB RAM, Xeon E5-1620, dual 10 GbE NIC (LACP bonding), Areca RAID controller with 24 disks (16 SAS 10K and 8 SATA)
    -- Proxmox Cluster --
    8 nodes, each with: Proxmox VE 5.1 (latest)...

  6. KVM Segfault latest community repo on libpthread

    I've only used the version shipped with the latest Proxmox VE ISO. Do you think that I could upgrade only the clients instead of the whole Gluster cluster? The cluster is in production and there are ~200 VMs running on it. I can try to install the new Gluster client on a single node and...
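
    A minimal sketch of trying a newer client on one node first; the package names are the usual Debian ones and are an assumption, and whether a client newer than the servers is supported should be checked against the Gluster release notes:

        # on a single Proxmox node only: upgrade just the client-side packages
        apt-get update
        apt-get install --only-upgrade glusterfs-client glusterfs-common
        # confirm the client version this node will use for new mounts
        glusterfs --version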

  7. KVM Segfault latest community repo on libpthread

    I think so. In this cluster every VM has its disk on Gluster storage. This latest crash involves a VM that usually does a lot of I/O, but I don't know if that could be related.

  8. KVM Segfault latest community repo on libpthread

    Here is the coredumpctl info: How could this help?

  9. KVM Segfault latest community repo on libpthread

    Hello fabian, thank you for the answer. debsums didn't find any errors or MD5 mismatches. I've installed the pve-qemu-kvm-dbg and systemd-coredump packages. Do you have any guide or documentation on how to use those packages? Thank you in advance.
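
    For what it's worth, the usual systemd-coredump workflow once the -dbg package is installed looks roughly like this; the PID is a placeholder taken from the dmesg line quoted elsewhere in the thread:

        # list recorded kvm crashes, newest last
        coredumpctl list kvm
        # print metadata and the captured stack trace for one crash
        coredumpctl info 3158
        # open the core file in gdb and take a full backtrace
        coredumpctl gdb 3158
        (gdb) bt full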

  10. KVM Segfault latest community repo on libpthread

    Hello, lately we're experiencing many segfaults across multiple physical machines (Dell R620) that sporadically crash some VMs:
    [433025.858682] kvm[3158]: segfault at 18 ip 00007feee18b8c70 sp 00007feece5e3e38 error 6 in libpthread-2.24.so[7feee18ab000+18000]
    We're using the latest...
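
    As a hedged aside, a segfault line like that can be decoded by subtracting the mapping base from the instruction pointer and resolving the offset with addr2line; the library path below is the stock Debian stretch location and is an assumption, not output from these machines:

        # 0x7feee18b8c70 (ip) - 0x7feee18ab000 (mapping base) = 0xdc70 into libpthread
        printf '0x%x\n' $(( 0x7feee18b8c70 - 0x7feee18ab000 ))
        # resolve the offset to a function name using the matching debug symbols
        addr2line -f -e /lib/x86_64-linux-gnu/libpthread-2.24.so 0xdc70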

  11. BUG: GFS2 Issue Kernel Panic when delete on a new fresh filesystem.

    Is it possible to have a more up-to-date version of gfs2-utils? Version 3.1.3 is full of bugs! I saw many errors in the locking protocol when trying to double-mount the filesystem (it locked the whole cluster!!!). Maybe I can downgrade to 3.1.0; I just need to figure out how... Ty for the answer.
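
    A minimal sketch of how such a downgrade is usually done with apt; the 3.1.0 version string below is a placeholder and has to match something actually present in a configured repository:

        # see which gfs2-utils versions the configured repositories offer
        apt-cache policy gfs2-utils
        # install an explicit older version (placeholder version string)
        apt-get install gfs2-utils=3.1.0-<debian-revision>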

  12. BUG: GFS2 Issue Kernel Panic when delete on a new fresh filesystem.

    I've tried again with a real device (I used a 32 GB USB pen drive) and the issue doesn't show up, but with the zeroed file it is still there. I've delayed the printk messages and this is the panic: It seems solved with a real device. I've already mounted the production fs without issues. I'm still a bit scared...

  13. BUG: GFS2 Issue Kernel Panic when delete on a new fresh filesystem.

    Hey, I don't want to be offensive!!! Here STFU means "stay tuned for updates"!!! :)

  14. BUG: GFS2 Issue Kernel Panic when delete on a new fresh filesystem.

    I don't know what to say. I will try to figure out what's wrong. And I will try with a real device... STFU.

  15. BUG: GFS2 Issue Kernel Panic when delete on a new fresh filesystem.

    Of course I've rebooted. But the kernel you tested isn't the same: 3.10.0-9-pve != 3.10.0-11-pve. The device isn't a real block device; it's just a zero-filled file, as in your tests.

  16. BUG: GFS2 Issue Kernel Panic when delete on a new fresh filesystem.

    Hello again, I've tried to upgrade the kernel and, without mounting the production fs, I've tried to create a local gfs2 filesystem to test the issue:
    root@VMFO07:~# uname -a
    Linux VMFO07 3.10.0-11-pve #1 SMP Tue Jul 21 08:59:46 CEST 2015 x86_64 GNU/Linux
    root@VMFO07:~# dd if=/dev/zero...
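
    The truncated output suggests a reproduction along these lines; the file size, mount point, and lock_nolock single-node setup are assumptions, not quoted from the post:

        # zero-filled backing file exposed as a loop block device
        dd if=/dev/zero of=/root/gfs2-test.img bs=1M count=2048
        LOOPDEV=$(losetup -f --show /root/gfs2-test.img)
        # single-node test filesystem: no DLM locking, one journal
        mkfs.gfs2 -p lock_nolock -j 1 -O "$LOOPDEV"
        mkdir -p /mnt/gfs2-test
        mount -t gfs2 "$LOOPDEV" /mnt/gfs2-test
        # create a file and delete it, the operation reported to trigger the panic
        dd if=/dev/zero of=/mnt/gfs2-test/testfile bs=1M count=100
        rm /mnt/gfs2-test/testfile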

  17. BUG: GFS2 Issue Kernel Panic when delete on a new fresh filesystem.

    Is there any official procedure to upgrade? How can I install the 3.10 kernel without breaking everything (it's a running production cluster of 10 nodes)?
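
    The usual per-node approach is sketched below; the package name matches the pve-kernel-3.10.0-10-pve mentioned in another post in this thread, and since each kernel version is its own package it installs alongside the running 2.6.32 kernel, so nodes can be upgraded and rebooted one at a time:

        # on one node at a time
        apt-get update
        apt-get install pve-kernel-3.10.0-10-pve
        reboot
        # after the reboot, confirm which kernel is actually running
        uname -r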

  18. BUG: GFS2 Issue Kernel Panic when delete on a new fresh filesystem.

    Hello, we don't need an OpenVZ kernel, but this is a production installation and we would like to avoid unstable solutions. Is the 3.10 kernel considered part of the stable distribution? We have bought almost 15 community licenses to gain some stability... Thank you for your answer.

  19. BUG: GFS2 Issue Kernel Panic when delete on a new fresh filesystem.

    Bug confirmed and still present with the latest release, 2.6.32-40-pve.

  20. BUG: GFS2 Issue Kernel Panic when delete on a new fresh filesystem.

    Actually, on the repo I can only find pve-kernel-3.10.0-10-pve, but I can give it a try on my testing cluster. We don't use OpenVZ containers at all. I'll run some tests and let you know how it goes... Ty.
