This is happening to me too. We have 7 very different servers, though all with Intel processors, and they have all been crashing intermittently since earlier today. Sometimes a machine stays up for only a few minutes; other times it stays up for more than half an hour.
Yeah, that is exactly what we were doing. We used ZFS to replicate the docker-in-LXC services between servers. I guess we need to rebuild that in QEMU now, but docker-in-LXC on ZFS, even with AUFS, was just super easy in terms of configuration, maintenance, and replication. It even enabled a...
Hello, I have the same problem with all LXC containers using systemd.unified_cgroup_hierarchy=0.
We use docker-in-lxc and I thought it would be an easy way to avoid the cgroupv2 issue.
Example of such an LXC container:
pct config 150
arch: amd64
cores: 2
features: fuse=1,mknod=1,nesting=1
hostname: DDNS...
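For anyone else pinning the legacy cgroup hierarchy this way, the boot change is roughly the following sketch (assuming a GRUB-based Proxmox install; on a ZFS-root system booted with systemd-boot you would edit /etc/kernel/cmdline and run proxmox-boot-tool refresh instead):

```bash
# /etc/default/grub -- append the parameter to the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.unified_cgroup_hierarchy=0"
```

Then apply it with update-grub and reboot the node so the containers start under cgroup v1 again.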
Thanks again for your reply Alwin. I appear to have done something wrong, as Ceph is now not working at all.
I stopped all Ceph services with sudo /etc/init.d/ceph -v -a stop before editing ceph.conf, saved the file, and then ran sudo /etc/init.d/ceph -v -a start, after which I rebooted each node.
After the...
Thanks again Alwin for your reply!
Perhaps I have misunderstood the concept of the public network.
I will change this value to be in the 10.10.10.x range, like the cluster network, and report back.
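For reference, the kind of change I mean is roughly this fragment (the 10.10.10.0/24 subnet and the idea of putting both networks on the same range are assumptions about my setup, not a recommendation):

```ini
# /etc/ceph/ceph.conf -- sketch only, not my exact file
[global]
    public_network  = 10.10.10.0/24
    cluster_network = 10.10.10.0/24
```

After editing, the monitors and OSDs need to be restarted so they bind to addresses in the new public network.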
EDIT:
So this does seem to have helped a bit, but just as I was about to write that, this happened...
Hello Alwin, thanks for your reply. My network configuration, from one of the nodes, is as follows:
auto lo
iface lo inet loopback
iface enp2s0 inet manual
iface eno1 inet manual
iface eno1d1 inet manual
auto bond0
iface bond0 inet static
address 10.10.10.3
netmask...
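For comparison, a full LACP bond stanza in /etc/network/interfaces would look something like the sketch below (the slave interface names, bond mode, and /24 netmask are assumptions; my actual file is truncated above):

```text
auto bond0
iface bond0 inet static
    address 10.10.10.3
    netmask 255.255.255.0
    bond-slaves eno1 eno1d1
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3
```

The bond-mode must match the switch-side configuration (802.3ad requires an LACP-capable switch), otherwise links can flap and look like the intermittent node communication problems described here.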
Hello, I have been having a very strange problem with some of my Proxmox nodes over the past few days. The problem started suddenly, even though the current configuration had been running unchanged for at least three months.
Some Proxmox nodes are suddenly no longer able to communicate with one another...
Hello Twinsen, I am in somewhat the same situation as you. One thing I have found that might interest you: CephFS can be shared out over SMB/CIFS. If you do this on all your nodes and use DNS load balancing, or possibly a virtual IP (though I do not yet know how to set that up), I...
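As a rough illustration of what I mean, a Samba share on top of a kernel-mounted CephFS could look like this (the mount path and the smbusers group are hypothetical placeholders, not from my actual configuration):

```ini
# /etc/samba/smb.conf -- hypothetical export of a CephFS mount
[cephfs]
    path = /mnt/pve/cephfs
    browseable = yes
    read only = no
    valid users = @smbusers
```

With the same share defined on every node, clients can reach the data through whichever node DNS or the virtual IP hands them.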