Search results

  1. PVE crashing with worker # started message

    This is happening to me too. We have 7 very different servers, though all with Intel processors, and they have all been crashing intermittently since earlier today. Sometimes a machine stays up for only a few minutes; at other times it stays up for more than half an hour.
  2. Proxmox VE 7.0 released!

    Yeah, that is exactly what we were doing. We used ZFS to replicate the docker-in-LXC services between servers. I guess we need to rebuild that in QEMU now, but docker-in-LXC on ZFS, even with AUFS, was just super easy in terms of configuration, maintenance, and replication. It even enabled a...
  3. Migration from 6.4 to 7.0

    Hello, I have the same problem with all LXC containers using systemd.unified_cgroup_hierarchy=0. We use docker-in-LXC and I thought it would be an easy way to avoid the cgroupv2 issue (see the kernel-command-line sketch after these results). Example of such an LXC config (output of pct config 150):

        arch: amd64
        cores: 2
        features: fuse=1,mknod=1,nesting=1
        hostname: DDNS...
  4. Nodes unable to maintain communication causing Ceph to fail

    Thanks again for your reply, Alwin. I appear to have done something wrong, as Ceph is now not working at all. I stopped all Ceph services using sudo /etc/init.d/ceph -v -a stop before editing ceph.conf, saved, and ran sudo /etc/init.d/ceph -v -a start, after which I rebooted each node. After the...
  5. Nodes unable to maintain communication causing Ceph to fail

    Thanks again, Alwin, for your reply! Perhaps I have misunderstood the concept of the public network. I will change this value to be in the 10.10.10.x range, like the cluster network, and report back (see the ceph.conf sketch after these results). EDIT: This does indeed seem to have helped a bit, but just as I wanted to write that, this happened...
  6. Nodes unable to maintain communication causing Ceph to fail

    Hello Alwin, thanks for your reply. My network configuration, from one of the nodes, is as follows:

        auto lo
        iface lo inet loopback

        iface enp2s0 inet manual

        iface eno1 inet manual

        iface eno1d1 inet manual

        auto bond0
        iface bond0 inet static
            address 10.10.10.3
            netmask...
  7. Nodes unable to maintain communication causing Ceph to fail

    Hello, I have been having a very strange problem with some of my Proxmox nodes over the past few days. The problem seemingly started suddenly, after the current configuration had been running for at least three months. Some Proxmox nodes are suddenly no longer able to communicate with one another...
  8. Mounting ceph pool for total beginner

    Hello Twinsen, I am in somewhat the same situation as you. One thing I have found that might interest you is that CephFS can be shared out using SMB/CIFS (a minimal Samba sketch follows these results). If you do this on all your nodes and use DNS load balancing, or possibly a virtual IP (though I do not yet know how to do that), I...
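
For reference on the systemd.unified_cgroup_hierarchy=0 workaround mentioned in result 3: on a Proxmox VE 7 host that boots via GRUB, the switch back to the legacy cgroup hierarchy goes on the kernel command line. A minimal sketch, assuming the standard Debian GRUB setup (hosts booting via systemd-boot would edit /etc/kernel/cmdline instead):

    # /etc/default/grub -- append the flag to the existing kernel command line
    GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.unified_cgroup_hierarchy=0"

    # then regenerate the boot configuration and reboot
    update-grub
    reboot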
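
On the public versus cluster network confusion in results 5-7: in ceph.conf, public_network carries monitor and client traffic, while cluster_network carries OSD replication and heartbeat traffic. A minimal sketch of the relevant [global] keys, assuming the 10.10.10.0/24 bonded subnet from result 6; on a small cluster both keys may point at the same subnet, which is what result 5 ends up trying:

    # /etc/ceph/ceph.conf
    [global]
        # monitor and client traffic
        public_network = 10.10.10.0/24
        # OSD replication and heartbeat traffic
        cluster_network = 10.10.10.0/24

Note that monitor addresses also live in the monmap, so changing public_network in ceph.conf alone does not move existing monitors.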
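
And on sharing CephFS over SMB/CIFS, as suggested in result 8: once CephFS is mounted on a node, a plain Samba share on top of the mount point is enough. A minimal sketch, assuming a hypothetical mount point /mnt/cephfs and an existing Samba user setup:

    # /etc/samba/smb.conf -- share definition on each node that mounts CephFS
    [cephfs]
        path = /mnt/cephfs
        browseable = yes
        read only = no
        # restrict access as needed; "smbusers" is a hypothetical group
        valid users = @smbusers

Defining the same share on every node and pointing a DNS round-robin record at all of them gives the crude load balancing the poster describes.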
