We have a simple LXC container with CSF installed on it (PVE 7.x).
We are getting the following errors inside the LXC CT:
[root@box ~]# /etc/csf/csftest.pl
Testing ip_tables/iptable_filter...OK
Testing ipt_LOG...OK
Testing ipt_multiport/xt_multiport...OK
Testing ipt_REJECT...OK
Testing...
When I go to "Datacenter" in the WebUI and then to the existing `local-zfs`, I added the source node (which has LVM-backed disks), and even though `local-zfs` did in fact come up on the source node, it's not usable since `local-zfs` is ZFS and the source node isn't on a ZFS filesystem, it's on...
We have a two-node cluster (no HA, of course)
Node 1 (older, has many guest VMs/CTs)
Storage: local, local-lvm
Node 2 (new and has no guests at all)
Storage: local, local-zfs
I understand that you cannot do a migration between nodes since they don't have matching storage names (as in...
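For what it's worth, a migration can map the guest's disks onto a differently named storage on the target; a hedged sketch (the VMID, node name, and storage name below are placeholders, and for containers the equivalent would be `pct migrate`):

```
# Offline-migrate VM 100 to node2, placing its disks on the target's
# local-zfs storage instead of looking for a storage with the same name:
qm migrate 100 node2 --targetstorage local-zfs
```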
Another little point: the first time the container is booted, on the PVE host I see the following:
284: veth422i0@if5: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue master vmbr0 state LOWERLAYERDOWN group default qlen 1000
But after I reboot the LXC container I see:
285...
Hi all
I am using the default CentOS 7, Fedora, or even CentOS LXC templates from the Proxmox downloader (nothing custom at all), and when the LXC CT is created and started for the first time, the container does not have its static IP addresses applied. However, if you reboot it AFTER it gets...
Hey Team
So I know that we can get PPP working for a single selected container by simply adding the following to its XXX.conf file:
lxc.cgroup.devices.allow: c 108:0 rwm
lxc.mount.entry: /dev/ppp dev/ppp none bind,create=file
However, what we want is for all existing and new CTs...
The way the LXC configs are loaded:
First you have /usr/share/lxc/config/common.conf, then whatever is in /usr/share/lxc/config/common.conf.d/*, then the OS-specific file, for example /usr/share/lxc/config/debian.common.conf, and after that your VMID.conf. That's the order in which it's loaded if...
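Going by that load order, one way to hit all existing and new CTs would be a drop-in under common.conf.d. A sketch, assuming a filename like 99-ppp.conf (the name is my invention, only the directory and the .conf extension matter, and a package upgrade may touch files under /usr/share/lxc):

```
# /usr/share/lxc/config/common.conf.d/99-ppp.conf
lxc.cgroup.devices.allow = c 108:0 rwm
lxc.mount.entry = /dev/ppp dev/ppp none bind,create=file
```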
Remove the unprivileged var conf and instead try adding the following to your LXC conf:
lxc.cgroup.devices.allow =
lxc.cgroup.devices.deny =
Let us know how that goes.
Yes, and it also happens with Ubuntu 16.x. I just read the docs and understand what is happening.
I am curious how the pve6to7 tests detect this, since the pct conf doesn't state which version of CentOS or Ubuntu they are running.
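I believe the check ultimately comes down to the systemd version inside the CT, since a unified cgroup v2 layout needs systemd >= 231 per the Proxmox 7 docs (CentOS 7 ships systemd 219 and Ubuntu 16.04 ships 229, which would explain why both trip the warning). A minimal sketch of that version test; the helper name is my own, and you would feed it e.g. the version reported by `pct exec 174 -- systemctl --version`:

```shell
# Hedged sketch: decide whether a given systemd version can run under a
# unified cgroup v2 layout. The 231 cutoff is from the Proxmox 7
# upgrade docs; the function is only an illustration of the check.
supports_cgroupv2() {
    # $1 = systemd version inside the CT
    if [ "$1" -ge 231 ]; then
        echo "supported"
    else
        echo "unsupported"
    fi
}

supports_cgroupv2 219   # CentOS 7 -> prints "unsupported"
supports_cgroupv2 245   # prints "supported"
```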
Before we upgrade, I just want to better understand what the following warning means and what it could result in after the upgrade:
WARN: Found at least one CT (174) which does not support running in a unified cgroup v2 layout.
Either upgrade the Container distro or set...
Yeah, it's done on purpose with the addressing.
In theory I should be able to see ARP and regular non-IP traffic between the VMs.
If they are both on the same node then it works great, hence why I tend to point my finger at the switch.
Thanks for the reply @ph0x
1) Yes, both are on vmbr0, and the bridges are VLAN aware
2) The VLAN ID is set on both VMs via the WebUI
3) The switch (Catalyst 2960S) port is set up as a trunk
Where did I go wrong? My first assumption is the switch config for the ports that both nodes are on, but it seems to be all...
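In case it helps to compare, a minimal trunk setup on a 2960-series port might look like this (the interface and VLAN list are placeholders; the things worth verifying on both node-facing ports are trunk mode, the allowed VLAN list, and the native VLAN):

```
interface GigabitEthernet0/1
 description uplink to PVE node
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30
 switchport trunk native vlan 1
```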