We had an interesting situation this morning. For some reason one node in our cluster was not showing as active (the green "running" arrows on the guest icons in the tree), and none of the LXCs were responding. We managed to address the issue as quickly as possible by simply resetting the node and...
The full isolation of each LXD OS is better than with LXC, is it not? I think the toolset with LXD is more powerful as well. I may be wrong, but aren't there also more startable images available for LXD than for LXC?
This post describes a method of allowing a tunnel to be established into a container.
The essence of it is this:
Add these lines to the container config:
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net dev/net none bind,create=dir
Then change the /dev/net/tun...
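For reference, on Proxmox those lines go into the container's config under /etc/pve/lxc/; a minimal sketch, assuming a hypothetical container ID of 101 (the verification step at the end is my addition, not taken from the linked post):

# /etc/pve/lxc/101.conf -- the first line whitelists the TUN character device (major 10, minor 200), the second bind-mounts the host's /dev/net into the container
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net dev/net none bind,create=dir

# after restarting the container, the device should be visible inside it:
pct exec 101 -- ls -l /dev/net/tun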
Yes, indeed, except that in this case the predictable names get swapped counter to the explicit renaming rules I provide. It works perfectly on 3 nodes and used to work on the 4th as well. Now it doesn't. Did you see the detail in the reference?
I have 4 identical PMX nodes, on which I have renamed the NICs to the more workable eth0, 1, 2, 3. However, after a recent outage in the DC (due to a power test), one of these nodes swaps eth2 and eth3 for no reason that I can find.
Please see...
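For illustration, renames like this are often pinned with one systemd .link file per port; a minimal sketch, assuming the MAC-match approach (the filename and MAC below are placeholders, and the OP's actual renaming mechanism may differ):

# /etc/systemd/network/10-eth2.link   (hypothetical filename)
[Match]
# placeholder MAC of the port that should become eth2;
# matching on Path=pci-... instead would pin the name to the physical slot rather than the card
MACAddress=aa:bb:cc:dd:ee:02

[Link]
Name=eth2

Because udev also runs in the initramfs, an update-initramfs -u after changing the .link files is usually needed for the rename to take effect at early boot.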
This is literally a naming bug. If I simply add eth1 to the vmbr0 bridge and use eth2 for corosync, the node works correctly.
I'll wait to see who has an explanation; otherwise I'll file a bug with Debian. Or should it be filed with Proxmox?
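For context, the workaround described above amounts to something like the following in /etc/network/interfaces (the addresses below are placeholders, not taken from the thread):

# the node's LAN address and gateway are placeholders
auto vmbr0
iface vmbr0 inet static
        address 192.168.131.10/24
        gateway 192.168.131.1
        bridge-ports eth1
        bridge-stp off
        bridge-fd 0

# placeholder stanza for the corosync link, now on eth2
auto eth2
iface eth2 inet static
        address 10.10.10.4/24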
I have now actually tested swapping the config files around so that 0000:18:00.1 is named eth1 and 0000:19:00.0 is named eth2, but the result is unchanged.
ls -la /sys/class/net/eth*
lrwxrwxrwx 1 root root 0 Aug 20 15:07 /sys/class/net/eth0 -> ../../devices/pci0000:17/0000:17:00.0/0000:18:00.0/net/eth0...
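To compare the misbehaving node with the healthy ones, something like the following shows which physical port currently sits behind a given name and what the predictable-naming policy itself would call it (eth2 is just one of the names from the listing above):

# PCI path and udev naming properties behind the current name
udevadm info -q property /sys/class/net/eth2 | grep -E 'ID_PATH|ID_NET_NAME'
ethtool -i eth2 | grep bus-info

# what the predictable-naming builtin would call this port
udevadm test-builtin net_id /sys/class/net/eth2 2>/dev/null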
Unmarked this thread as 'solved', since I was never able to figure out why this happened in the first instance...
Had a DC power test failure 2 days ago, and now suddenly the problem with NodeB is back.
NodeB can communicate on the "LAN" via vmbr0 with other hosts on the 192.168.131.0/24...
I'm creating a virtualised PVE cluster on top of a Proxmox installation configured with Ceph storage. We are testing some automation with Terraform and Ansible.
Ideally I would like to configure Ceph in this nested configuration; however, that would be Ceph on top of Ceph. Will that work...
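As a rough sketch of the nested setup, assuming the nested PVE nodes are plain QEMU guests created with qm (the VMID, storage names, disk sizes and ISO name below are all made up):

# create one nested PVE node; repeat per node with a different VMID/name
qm create 9001 --name pve-nested-1 --memory 8192 --cores 4 \
    --cpu host --scsihw virtio-scsi-pci --net0 virtio,bridge=vmbr0
qm set 9001 --scsi0 local-lvm:32                      # OS disk for the nested Proxmox install
qm set 9001 --scsi1 local-lvm:50                      # blank disk to hand to the nested Ceph OSD
qm set 9001 --cdrom local:iso/proxmox-ve_8.2-1.iso    # hypothetical ISO volume

Giving each nested node its own blank virtual disk keeps the nested OSDs separate from the outer Ceph pool, though the Ceph-on-Ceph layering does multiply the replication and write overhead.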
We use Ceph as the FS on a cluster with 7 nodes. This cluster is used for testing, development and more. Today one of the nodes died. Since all the LXC and KVM guests are stored on Ceph storage, their disks are all still there, but the configuration of the guests is not available since it's stored on the node...
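For what it's worth, the guest configs live in the cluster filesystem, so as long as the remaining nodes still have quorum they should still be visible under the dead node's directory in /etc/pve and can be moved to a surviving node, roughly like this (node names and VMIDs are placeholders):

# on a surviving node that still has quorum
mv /etc/pve/nodes/deadnode/qemu-server/100.conf /etc/pve/nodes/livenode/qemu-server/
mv /etc/pve/nodes/deadnode/lxc/101.conf /etc/pve/nodes/livenode/lxc/

With HA configured, this relocation would normally be handled automatically after the dead node is fenced.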
I'm not sure what you mean. I use Ceph as the FS, and both LXC and KVM machines are migrated to other nodes easily. I have never noticed any problem with LXCs in this regard?