Hello,
Thank you for your answer. I have this setup:
4 X NAS with Debian 10 and Gluster 6.4 as servers
10 X ProxmoxVE Nodes Enterprise 6.0-2
10 Gbit networking.
about 350 VMs
Before upgrading to PVE 6.0 I had all nodes upgraded to Gluster 6.4 and the Gluster op.version of every share is...
Downgrading to Gluster 5.5 solved the issue, but now I can't access my repo.
I really NEED to upgrade GlusterFS to 6.4 with Proxmox 6. With PVE 5.3 there aren't any issues.
I just followed the official Glusterfs install guide for Debian buster.
# aptitude dist-upgrade
The following packages...
Hello,
after upgrading I have some packages kept back:
~# apt-get dist-upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages have been kept back:
pve-qemu-kvm qemu-server spiceterm
0 upgraded, 0...
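(Side note, in case it helps someone else: packages reported as "kept back" can usually be pulled in by naming them explicitly, which allows apt to resolve whatever dependency change is holding them. A hedged sketch, using the package list from the output above:)

```shell
# The three packages apt reported as "kept back" above; installing them
# by name lets apt act on the dependency changes that dist-upgrade alone
# refused to resolve.
apt-get install pve-qemu-kvm qemu-server spiceterm
```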
I've managed to upgrade the cluster without any downtime to version 4.0.2-2. I will let you know if other crashes happen.
If I don't see any crash for a week, I will mark this thread as solved.
Thank you for your support.
It's a separate cluster. In detail we have:
-- Gluster Cluster --
4 Nodes each with: Debian stretch - 32 GB RAM - Xeon E5-1620 - dual 10 GbE NIC (LACP bonding) - Areca RAID controller with 24 disks (16 SAS 10K and 8 SATA)
-- Proxmox Cluster --
8 Nodes each with: Proxmox VE 5.1 (latest)...
I've only used the version shipped with the latest Proxmox VE ISO. Do you think that I could upgrade only the clients instead of the whole Gluster cluster? The cluster is in production and there are ~200 VMs running on it. I can try to install the new Gluster client on a single node and...
I think so. In this cluster every VM has its disk on a Gluster storage. This latest crash involves a VM that usually does a lot of IO, but I don't know if that could be related.
Hello fabian,
thank you for the answer. debsums didn't find any errors or MD5 mismatches. I've installed the pve-qemu-kvm-dbg and systemd-coredump packages. Do you have any guide or documentation on how to use those packages?
Thank you in advance.
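(Answering my own question in case it's useful to others. This is only a sketch of the usual systemd-coredump workflow, not a PVE-specific procedure; `<PID>` stands for whatever ID `coredumpctl list` shows for the crashed kvm process:)

```shell
# List core dumps captured by systemd-coredump for the kvm binary.
coredumpctl list kvm

# Show metadata for one dump (signal, timestamp, short backtrace).
coredumpctl info <PID>

# Open the dump in gdb; with pve-qemu-kvm-dbg installed, running
# `bt full` inside gdb should give a fully symbolized backtrace.
coredumpctl gdb <PID>
```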
Hello,
lately we're experiencing many segfaults across multiple physical machines (Dell R620) that sporadically crash some VMs:
[433025.858682] kvm[3158]: segfault at 18 ip 00007feee18b8c70 sp 00007feece5e3e38 error 6 in libpthread-2.24.so[7feee18ab000+18000]
We're using the latest...
Is it possible to get a more up-to-date version of gfs-utils? Version 3.1.3 is full of bugs! I saw many errors in the locking protocol when trying to double-mount the filesystem (it locks the whole cluster!!!).
Maybe I can downgrade to 3.1.0. I just need to figure out how...
Ty for the answer.
I've tried again with a real device (I used a 32 GB USB pen) and the issue doesn't show up.
But the zeroed file is still there. I've delayed the printk messages and this is the panic:
Seems solved with a real device. I've already mounted the production fs without issues. I'm still a bit scared...
Of course I've rebooted. But the kernel you tested isn't the same:
3.10.0-9-pve != 3.10.0-11-pve
The device isn't a real block device. It's just a zero-filled file, as in your tests.
Hello again,
I've tried upgrading the kernel and, without mounting the production fs, I've tried to create a local gfs2 filesystem to reproduce the issue:
root@VMFO07:~# uname -a
Linux VMFO07 3.10.0-11-pve #1 SMP Tue Jul 21 08:59:46 CEST 2015 x86_64 GNU/Linux
root@VMFO07:~# dd if=/dev/zero...
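(For anyone trying to reproduce this, the local test was along these lines. The file name, size, loop device and mount point here are illustrative, not the exact commands I ran; the mkfs.gfs2 flags are the standard ones for a single-node lock_nolock filesystem:)

```shell
# Create a zero-filled backing file (this is the "not a real block
# device" case that triggered the panic for me).
dd if=/dev/zero of=/tmp/gfs2-test.img bs=1M count=2048

# Attach it to a loop device and format it as a local gfs2 filesystem:
# lock_nolock = no cluster locking, -j 1 = a single journal.
losetup /dev/loop0 /tmp/gfs2-test.img
mkfs.gfs2 -p lock_nolock -j 1 /dev/loop0

# Mount and exercise it.
mkdir -p /mnt/gfs2-test
mount -t gfs2 /dev/loop0 /mnt/gfs2-test
```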
Is there any official procedure to upgrade? How can I install the 3.10 kernel without breaking everything? (It's a running production cluster of 10 nodes.)
Hello,
We don't need an OpenVZ kernel, but it's a production installation and we would like to avoid unstable solutions. Is the 3.10 kernel considered part of the stable distribution?
We have bought almost 15 community licenses to gain some stability...
Thank you for your answer.
Actually on the repo I can find only pve-kernel-3.10.0-10-pve, but I can give it a try on my testing cluster.
We don't use openvz containers at all.
I'll make some tests and I will let you know how it's going...
Ty.