This post describes a method to allow a tunnel to be established into a container.
The essence of it is this:
Add these lines to the container config:
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net dev/net none bind,create=dir
Then change the /dev/net/tun...
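For anyone trying to reproduce this, here is a minimal sketch of applying those two lines on the host; the container ID 101 is only a placeholder and the commands assume a standard Proxmox host with pct:
# Append the device allowance and the /dev/net bind mount to a hypothetical container 101
cat >> /etc/pve/lxc/101.conf <<'EOF'
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net dev/net none bind,create=dir
EOF
# Restart the container and check that /dev/net/tun (char device 10:200) is visible inside
pct stop 101 && pct start 101
pct exec 101 -- ls -l /dev/net/tun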
Yes, indeed, except that in this case the predictable names get swapped counter to the explicit renaming rules I provide. It works perfectly on 3 nodes and used to work on the 4th as well. Now it doesn't. Did you see the detail in the reference?
I have 4 identical Proxmox nodes, on which I have renamed the NICs to the more workable eth0, 1, 2, 3. However, after a recent outage in the DC (due to a power test), one of these nodes swaps eth2 and 3 for no reason that I can find.
Please see...
This is literally a naming bug. If I simply add eth1 to the vmbr0 bridge and use eth2 for corosync, the node works correctly.
I'll wait to see who has an explanation, otherwise I'll file a bug with Debian. Or should it be filed with Proxmox?
I have now actually tested swapping the config files around so that 0000:18:00.1 is named eth1 and 0000:19:00.0 eth2, but the result is unchanged.
ls -la /sys/class/net/eth*
lrwxrwxrwx 1 root root 0 Aug 20 15:07 /sys/class/net/eth0 -> ../../devices/pci0000:17/0000:17:00.0/0000:18:00.0/net/eth0...
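For reference, a minimal sketch of the kind of explicit renaming rule I mean, assuming it is done with a systemd .link file matched on the PCI path (the file name, PCI address and interface name below are only examples):
# Example only: /etc/systemd/network/10-eth1.link
[Match]
Path=pci-0000:18:00.1
[Link]
Name=eth1
If the NIC driver is loaded from the initramfs, the rule usually has to be copied into it as well (update-initramfs -u), otherwise the early-boot naming can differ from what the rule says.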
Unmarked this thread as 'solved', since I was never able to figure out why this happened in the first instance...
Had a DC power test failure 2 days ago, and now suddenly the problem with NodeB is back.
NodeB can communicate on the "LAN" via vmbr0 with other hosts on the 192.168.131.0/24...
I'm creating a virtualised Proxmox cluster on top of a Proxmox installation configured with ceph storage. We are testing some automation with Terraform and Ansible.
Ideally I would like to configure ceph in this nested configuration, however that would be ceph on top of ceph. Will that work...
We use ceph as FS on a cluster with 7 nodes. This cluster is used for testing, development and more. Today one of the nodes died. Since all the LXC and KVM guests are stored on ceph storage, they are all still there, but the configuration of the guests is not available since it's stored on the node...
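In case it is useful to someone else in the same situation: the guest configs live on the clustered pmxcfs under /etc/pve/nodes/<nodename>/, so assuming the surviving nodes still have quorum, they can be moved to a live node by hand; the node names and VMIDs below are only placeholders:
# Run on a surviving node that still has quorum
mv /etc/pve/nodes/deadnode/qemu-server/100.conf /etc/pve/nodes/livenode/qemu-server/
mv /etc/pve/nodes/deadnode/lxc/101.conf /etc/pve/nodes/livenode/lxc/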
I'm not sure what you mean. I use ceph as FS and both LXC and KVM machines are migrated to other nodes easily. I have never noticed any problem with LXCs in this regard?
I actually did install it, and it all seems to be installable (using the installation instructions for Proxmox on Debian Bullseye), but corosync doesn't run despite using a separate VLAN for the LXCs... I was hoping that there is a way around whatever the problem is.
* corosync.service -...
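To dig further, the usual next step is the full unit log; nothing container-specific is assumed in these commands:
systemctl status corosync.service
journalctl -xeu corosync.service
# the ring/bind addresses the container is trying to use
cat /etc/corosync/corosync.conf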
Hi all,
Is it possible to create a virtual Proxmox cluster in LXC instances? I'm planning to create a test cluster to experiment with Terraform, so if I can do that in 3 Linux containers (creating a node in each), it would be the lowest resource usage. Of course I can use full QEMU/KVM guests...
I'm afraid the drive tech of these machines is substantially different. The S1 is a Sunfire X4150 with 2.5" SAS drives, whereas the HP is a ProLiant DL320s G1 with 5.25" SATA drives :-)
I'm going to try to adjust the weight of the OSD that's too full to see if I can bring it down that way...
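For the record, a minimal sketch of what I mean by adjusting the weight; the OSD id and the values are only examples:
# Temporarily lower the reweight factor (range 0..1) of the over-full OSD
ceph osd reweight 5 0.90
# or change its CRUSH weight permanently (value roughly corresponds to capacity in TiB)
ceph osd crush reweight osd.5 1.6
# then watch how the data distribution evolves
ceph osd df tree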
Note: This is more of an effort to understand how the system works than to get support. I know PVE 5 is not supported anymore...
I have a 7-node cluster which is complaining that:
root@s1:~# ceph -s
  cluster:
    id:     a6092407-216f-41ff-bccb-9bed78587ac3
    health: HEALTH_WARN
            1...
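To see what the warning actually refers to (the summary above is cut off), the detailed health output and the per-OSD utilisation are the usual places to look:
ceph health detail
# relevant if the warning turns out to be about a (near)full OSD, as discussed above
ceph osd df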