Hi,
I have a small PVE cluster with 3 nodes; each node runs ~200 LXC CTs.
When I reboot one of these nodes for any reason, all nodes in the cluster get a grey ? or a red X next to their names.
In syslog I see something like:
May 16 11:39:39 captive001-72001-bl03 corosync[1802]...
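While one node is rebooting I check the cluster from another node's shell; if I understand it correctly, the grey ? just means stale status data rather than broken CTs. This is what I run (standard PVE tooling):

# check quorum and membership while the node is down
pvecm status
# the grey ? usually points at stale status reporting, so check the daemons too
systemctl status pvestatd pve-cluster corosync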
Hi,
is there a way to find a CT by its MAC address?
The PVE search box does not seem to support searching by MAC. Is there any command on the node I could try?
ip a | grep "<mac>"
Does not work.
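Would grepping the CT configs be the right approach? As far as I can tell, each container's MACs are stored as hwaddr=... on the netX: lines of its config, so something like this (the MAC below is a placeholder):

# the matching file name is the CT's VMID
grep -ril "aa:bb:cc:dd:ee:ff" /etc/pve/lxc/
# or across all nodes of the cluster:
grep -ril "aa:bb:cc:dd:ee:ff" /etc/pve/nodes/*/lxc/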
Thank you in advance
- What exactly does "it did not work great with inferior ssds" mean? What issues did you have?
- Are your VPSs running on this Ceph cluster as well, or is it used only for storing them?
Wouldn't it be possible to set up a separate Ceph cluster with 3 storage nodes, plus 10 Bladecenter nodes where the CTs run (but are not stored)? For example, each Ceph node gets 8x 1 TB SSDs (plus the OS disks). I would add this Ceph cluster to Proxmox. Is it possible to store all Bladecenter CTs...
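Assuming that layout is possible, I guess the Bladecenter nodes would simply mount the external Ceph pool as RBD storage for the CTs; a rough sketch (storage ID, pool name, and monitor IPs are made up):

# register the external pool on the compute cluster; the Ceph keyring would
# additionally go to /etc/pve/priv/ceph/ceph-ct.keyring
pvesm add rbd ceph-ct --pool ct-pool --monhost "10.0.0.1 10.0.0.2 10.0.0.3" --content rootdir --krbd 1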
Thanks for the hint. To be honest, I have never worked with Ceph before; I've just been reading up on it. If I understand it correctly, each CT node needs to run as a storage server too? If not, we still need at least 3 separate storage servers, right?
The Bladecenter servers we use, where the container...
Thanks for your fast reply, aleskysilk.
My request was more a general one regarding LXC containers on shared storage. I think there are not that many options compared with KVM VMs on shared storage. For now I am just looking for a working solution, such as an ext4 partition shared by...
Hi guys,
we've been playing around with Proxmox for a few weeks and want to move our entire infrastructure (3500 VPSs) away from SolusVM. Proxmox seems very flexible to us, but so far we have not found a satisfactory shared-storage solution for LXC.
So far we have tried to create a zpool and set up a zvol there...
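For context, the zpool/zvol part looked roughly like this (disk, pool, and volume names are placeholders):

# mirrored pool plus a 100G zvol on top of it
zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc
zfs create -V 100G tank/ct-vol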