Hello. I have two simple improvement ideas for the PVE web UI:
1.) In the datacenter summary there is a list of nodes. Please add columns with the kernel version and Proxmox version, so I can easily see if some of the nodes need an upgrade/reboot.
2.) In the bulk actions (start/stop), please add a column showing if...
Btw I've just noticed I have the following mountpoint on my Proxmox nodes:
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime)
There are even some LXC containers listed:
$ ls /sys/fs/cgroup/unified/lxc
102 108 114 201 207 213 219 227 cgroup.freeze...
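If you want to double-check which hierarchy a given process is actually attached to, this is a quick way to do it (plain shell, nothing Proxmox specific; on the hybrid layout above, /sys/fs/cgroup itself is still a tmpfs):
$ stat -fc %T /sys/fs/cgroup    # tmpfs = hybrid/v1 layout, cgroup2fs = pure v2
$ cat /proc/self/cgroup         # named v1 controllers plus a "0::" line for the v2 hierarchy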
Aah. I see. I misunderstood this statement. My understanding was that you can't use v1 and v2 in parallel in a Proxmox infrastructure. I didn't realize this also affects other processes on the host (unrelated to Proxmox) and possibly even processes running in LXC containers...
I've just checked and...
I've just tried to run Docker 18.09.1 from Debian 10 in an LXC CT on PVE. I was able to install Portainer on Docker in LXC, everything works, I can open the Portainer web UI. But then I tried to spin up a Docker swarm with Portainer and it seems that Portainer runs, but the ports are not...
And what pieces are missing? Is the problem in the Linux kernel? Missing features in cgroup2 LXC?
Or is it just Proxmox lacking support for the latest lxc/linux features?
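For reference, the usual prerequisite for running Docker inside an LXC CT is enabling the nesting and keyctl features in the container config. I don't think that alone explains the missing swarm ports, but just so the baseline setup is clear (the CT id 105 below is only a placeholder):
# /etc/pve/lxc/105.conf -- CT id is just an example
features: keyctl=1,nesting=1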
It seems to me that now (a year and a half later), rspamd might be a viable upgrade for Proxmox, replacing the classic SpamAssassin stack.
ISPConfig (another web UI for configuring a mailserver) now has this option enabled by default.
There were some rumours that this can't be fixed until Proxmox has support for cgroup v2.
But LXC has had cgroup v2 support since version 3.0.0, so I guess this can now be fixed in Proxmox VE.
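If someone wants to experiment before official support lands, switching the host to the unified hierarchy should just be a kernel command line change. A minimal sketch, assuming GRUB on Debian/PVE, and with no guarantee that the PVE tooling copes with a pure v2 setup yet:
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.unified_cgroup_hierarchy=1"
$ update-grub
$ reboot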
Hello,
I have a PVE server with 170GB of RAM assigned to LXC CTs.
Typical CT settings look like this: 2GB RAM, 256MB swap.
But when you assign a CT 2GB of RAM and 256MB of swap, it actually gets 2GB of RAM and 2.25GB of swap.
If I set the swap to 0, it still gets 2GB of swap.
This is just...
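The numbers line up with the cgroup v1 semantics: v1 can only limit "memory" and "memory+swap", not swap on its own, so 2GB RAM + 256MB swap becomes a 2.25GB memsw limit, which is the ceiling on what can end up swapped out, and even with swap set to 0 the kernel can still push up to the full 2GB of the CT's memory out to swap. You can check the limits on the host (CT id 102 is just an example, the cgroup path may differ between versions):
$ cat /sys/fs/cgroup/memory/lxc/102/memory.limit_in_bytes        # 2147483648 = the 2GB RAM limit
$ cat /sys/fs/cgroup/memory/lxc/102/memory.memsw.limit_in_bytes  # RAM + swap, hence the 2.25GB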
But if the network has latency issues for a while, the cluster should come back again once the issue is mitigated. What happens now is that the failure state persists until I restart corosync. It does not self-heal. Why is that?
BTW I've increased totem.token in corosync.conf from 1000ms...
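In case someone wants to try the same: the knob lives in the totem section of /etc/pve/corosync.conf, and on PVE you also have to increase config_version whenever you edit the file. The values below are placeholders, not recommendations, and the rest of the section stays as it is:
totem {
  config_version: 5
  token: 3000
}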
How do I test it? It's just a gigabit network running on some refurbished (used to be expensive high-end hardware) switches with VLANs.
If a network switch is causing this, how come it gets stable after 20 minutes and then I never have problems again (unless I am adding another node)? Also I don't...
Hello,
when I add a new node to the cluster, it usually puts the whole cluster into a very messy state: nodes randomly show up as online, offline or unknown, and the whole cluster loses quorum. This completely scary mess lasts for 10 to 20 minutes, then suddenly everything converges and the whole...
Well... I have been running 40+ CTs with it since day one and so far I haven't had any problems. Which means I've tested it for longer than the time between the ZFS 0.8.x stable release and its inclusion in Proxmox :-)
Userspace WireGuard implementations will still need some kind of TUN driver to be accessible from LXC. It's readily available in the kernel, but you will probably need to grant permissions to that CT, the same way as you do when using OpenVPN...
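For the record, this is the usual recipe for passing /dev/net/tun into a CT, the same one people use for OpenVPN. A sketch to append to /etc/pve/lxc/<CTID>.conf (exact keys may differ between PVE/LXC versions):
lxc.cgroup.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file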
I understand the point, but to me it seems there's absolutely nothing wrong with using unicast. At least for clusters with fewer than 1000 nodes, and maybe even then the performance gain might not be that huge... It might be more user friendly if Proxmox just stopped pushing users towards multicast...
Leave omping running for 10 minutes... If it stops printing "multicast" lines, there's some problem.
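Something along these lines should do (node1/node2/node3 are placeholders for your real node names; -c 600 -i 1 means 600 probes at one per second, i.e. roughly 10 minutes). Run it on all nodes at the same time, otherwise the other nodes will just show up as unreachable:
$ omping -c 600 -i 1 node1 node2 node3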
Also try disabling multicast snooping on your bridges (and switches?):
echo 0 > /sys/devices/virtual/net/vmbr0/bridge/multicast_snooping
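If that turns out to help, you can make it persistent by hooking it into the bridge definition in /etc/network/interfaces (vmbr0 as in the line above, the rest of the stanza stays unchanged):
iface vmbr0 inet static
        # keep the existing address / bridge_ports / ... options here
        post-up echo 0 > /sys/devices/virtual/net/vmbr0/bridge/multicast_snooping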
I was blaming the switch all the time. Turned out...