Search results

  1. Allow OpenVPN / Wireguard connection to an lxc

    This post describes a method to allow a tunnel to be established into a container. The essence of it is this: add these lines to the container config: lxc.cgroup2.devices.allow: c 10:200 rwm lxc.mount.entry: /dev/net dev/net none bind,create=dir Then change the /dev/net/tun...
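
    Spelled out as a config block (the search snippet runs them together on one line), the added lines would look roughly like this; the path /etc/pve/lxc/<CTID>.conf is the usual location on Proxmox, <CTID> is a placeholder, and the follow-up step on /dev/net/tun is truncated in the snippet:

        # /etc/pve/lxc/<CTID>.conf  (CTID is a placeholder)
        # allow the container to use the TUN character device (major 10, minor 200)
        lxc.cgroup2.devices.allow: c 10:200 rwm
        # bind-mount the host's /dev/net into the container so /dev/net/tun is visible
        lxc.mount.entry: /dev/net dev/net none bind,create=dir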
  2. Ethernet ports swapped with no discernible reason

    Yes, indeed, except that in this case the predictable names get swapped, counter to the explicit renaming rules I provide. It works perfectly on 3 nodes and used to work on the 4th as well. Now it doesn't. Did you see the detail in the reference?
  3. Ethernet ports swapped with no discernible reason

    I have 4 identical pmx nodes, on which I have renamed the NICs to the more workable eth0, 1, 2, 3. However, after a recent outage in the DC (due to a power test), one of these nodes swaps eth2 and 3 for no reason that I can find. Please see...
  4. [BUG] Network is only working selectively, can't see why

    This is literally a naming bug. If I simply add eth1 to the vmbr0 bridge and use eth2 for corosync, the node works correctly. I'll wait to see who has an explanation, otherwise I'll file a bug with Debian. Or should it be filed with Proxmox?
  5. [BUG] Network is only working selectively, can't see why

    I've been working through https://wiki.debian.org/NetworkInterfaceNames to try to find a solution.
  6. [BUG] Network is only working selectively, can't see why

    I have now actually tested swapping the config files around, so that 0000:18:00.1 is named eth1 and 0000:19:00.0 eth2, but the result is unchanged.
        ls -la /sys/class/net/eth*
        lrwxrwxrwx 1 root root 0 Aug 20 15:07 /sys/class/net/eth0 -> ../../devices/pci0000:17/0000:17:00.0/0000:18:00.0/net/eth0...
  7. [BUG] Network is only working selectively, can't see why

    Here's a strange discovery.
        root@FT1-NodeA:~# udevadm info /sys/class/net/eth0 | grep ID_PATH
        E: ID_PATH=pci-0000:18:00.0
        E: ID_PATH_TAG=pci-0000_18_00_0
        root@FT1-NodeA:~# udevadm info /sys/class/net/eth1 | grep ID_PATH
        E: ID_PATH=pci-0000:18:00.1
        E: ID_PATH_TAG=pci-0000_18_00_1...
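
    One way to act on those ID_PATH values, sketched here only as a guess at what the explicit renaming rules might look like, is a systemd .link file that matches the NIC by its persistent PCI path and assigns the fixed name (file name and interface name are illustrative):

        # /etc/systemd/network/10-eth0.link  (illustrative example)
        [Match]
        # match the port by the persistent path that udevadm reports as ID_PATH
        Path=pci-0000:18:00.0

        [Link]
        # always call this port eth0, regardless of enumeration order
        Name=eth0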
  8. [BUG] Network is only working selectively, can't see why

    Unmarked this thread as 'solved', since I was never able to figure out why this happened in the first place... Had a DC power test failure 2 days ago, and now suddenly the problem with NodeB is back. NodeB can communicate on the "LAN" via vmbr0 with other hosts on the 192.168.131.0/24...
  9. Nested pmx cluster with ceph?

    I went ahead and just did it, and it works quite well. Not sure what performance penalty it incurs, but that's not the point of our test.
  10. Nested pmx cluster with ceph?

    I'm creating a virtualised pmx cluster on top of a Proxmox installation configured with ceph storage. We are testing some automation with Terraform and Ansible. Ideally I would like to configure ceph in this nested configuration, however that would be ceph on top of ceph. Will that work...
  11. [SOLVED] Redundancy fails when node fails?

    Thanks! I could not find that, probably because I couldn't figure out what to search for. Problem solved!
  12. [SOLVED] Redundancy fails when node fails?

    We use ceph as FS on a cluster with 7 nodes. This cluster is used for testing, development and more. Today one of the nodes died. Since all the LXC and KVM guests are stored on ceph storage, they are all still there, but the configuration of the guests is not available since it's stored on the node...
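
    The search snippet does not show the accepted answer, but the manual recovery usually discussed for this case (assuming the guest configs live in the pmxcfs tree under /etc/pve/nodes/) is to move the config files from the dead node's directory to a surviving one; node names and VMIDs below are placeholders:

        # run on a surviving node that still has quorum; names and IDs are placeholders
        mv /etc/pve/nodes/deadnode/qemu-server/101.conf /etc/pve/nodes/livenode/qemu-server/
        mv /etc/pve/nodes/deadnode/lxc/102.conf /etc/pve/nodes/livenode/lxc/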
  13. [SOLVED] Create a virtual cluster in lxc's?

    Ah, yes, I just checked and it does happen although it's quick.
  14. [SOLVED] Create a virtual cluster in lxc's?

    I'm not sure what you mean. I use ceph as FS and both LXC and KVM machines are migrated to other nodes easily. I have never noticed any problem with LXC's in this regard?
  15. [SOLVED] Create a virtual cluster in lxc's?

    Yes, indeed, I'm doing that at the moment. I prefer to use LXC's whenever possible, which is why I gave it a shot...
  16. [SOLVED] Create a virtual cluster in lxc's?

    I actually did install it, and all of it seems to be installable (using the installation instructions for Proxmox on Debian Bullseye), but corosync doesn't run despite using a separate VLAN for the lxc's... I was hoping that there is a way around whatever the problem is.
        * corosync.service -...
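
    The unit status is cut off in the snippet; assuming nothing container-specific, the usual first steps to see why corosync refuses to start would be:

        # show the truncated unit status in full
        systemctl status corosync.service
        # full log of the failed start attempts since boot
        journalctl -u corosync -b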
  17. [SOLVED] Create a virtual cluster in lxc's?

    Hi all, Is it possible to create a virtual proxmox cluster in lxc instances? I'm planning to create a test cluster to experiment with Terraform, so if I can do that in 3 linux containers (create a node in each), it would be the lowest resource usage. Of course I can use full Qemu KVM guests...
  18. [SOLVED] Ceph pool size and OSD data distribution

    I changed the weight of the OSD to be 30% less and that rebalanced the data nicely, so a little manual tuning had the desired effect.
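
    The snippet does not show the exact command, but lowering an OSD's CRUSH weight by roughly 30% as described would typically be done like this; osd.5 and the weights are made-up values:

        # check current weights and per-OSD utilisation first
        ceph osd df tree
        # lower the CRUSH weight of the overfull OSD by about 30% (here 1.0 -> 0.7)
        ceph osd crush reweight osd.5 0.7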
  19. [SOLVED] Ceph pool size and OSD data distribution

    I'm afraid the drive tech of these machines is substantially different. The S1 is a Sunfire X4150 with 2.5" SAS drives, whereas the HP is a ProLiant DL320s G1 with 5.25" SATA drives :-) I'm going to try to adjust the weight of the OSD that's too full to see if I can bring it down that way...
  20. [SOLVED] Ceph pool size and OSD data distribution

    Note: This is more of an effort to understand how the system works than to get support. I know PVE 5 is not supported anymore... I have a 7 node cluster which is complaining that:
        root@s1:~# ceph -s
          cluster:
            id:     a6092407-216f-41ff-bccb-9bed78587ac3
            health: HEALTH_WARN
                    1...
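
    The warning text is truncated above; to see which health check is firing and how the data is spread across the pools, the usual follow-up commands (not shown in the snippet) would be:

        # expand HEALTH_WARN into the specific warning(s)
        ceph health detail
        # pool-level capacity and usage
        ceph df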