What does your IPv6 routing table look like?
ip -6 ro
From the IP 2a01:4f8:141:3e4::1 I'd guess you're at Hetzner; they tend to use fe80::1 as the gateway... but you don't - maybe that's your problem?
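For reference, a minimal sketch of what a Hetzner-style IPv6 config with the fe80::1 link-local gateway could look like (the interface name and the /64 prefix length are assumptions, adjust to your host):

# /etc/network/interfaces (sketch; eth0 and the prefix length are assumptions)
iface eth0 inet6 static
        address 2a01:4f8:141:3e4::1/64
        gateway fe80::1

Or, to try it at runtime without touching the config:

ip -6 route add default via fe80::1 dev eth0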
We made some adjustments (mainly LimitNOFILE set to a higher value) and observed the following behaviour:
Everything works well until we run about 220 guests per node (~20-25 FDs used); then the Prometheus node_exporter in every running guest produces too much "noise" (scraped every 120...
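In case it is useful to others, a hedged sketch of how such a LimitNOFILE override can be applied to lxcfs via a systemd drop-in (the unit being overridden and the concrete value are assumptions about our setup):

# /etc/systemd/system/lxcfs.service.d/limits.conf (sketch; value is an assumption)
[Service]
LimitNOFILE=1048576

# reload and restart afterwards:
# systemctl daemon-reload && systemctl restart lxcfs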
Hi there,
we just upgraded to the 6.2 release with LXC 4.0, and after running about 250 containers on each node we now get the following error (previously: https://forum.proxmox.com/threads/lxcfs-br0ke-cgroup-limit.69015/#post-309442)
root@lxc-prox4:~# grep -A 5 -B 5 lxcfs /var/log/messages...
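One way to check whether lxcfs itself is bumping into its file-descriptor limit could be the following (assuming a single lxcfs process on the node):

# current limit of the running lxcfs process
grep 'open files' /proc/$(pidof lxcfs)/limits
# number of file descriptors it currently has open
ls /proc/$(pidof lxcfs)/fd | wc -l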
Hi there,
I was wondering what the PVESIG entries in the iptables rules are for.
Is there some sort of "tampering" detection (and mitigation?), or what is it used for?
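For anyone else wondering where these show up, the entries can be listed directly from the ruleset (this only shows them, it doesn't explain the mechanism):

# list the PVESIG comment entries that pve-firewall adds to its chains
iptables-save | grep PVESIG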
Hi there,
we're currently running a four-node cluster with about 250 LXC containers on each node (evenly distributed). Primary storage for almost all containers (except 4) is the integrated Ceph within Proxmox.
Kernel Version
Linux 5.3.13-1-pve #1 SMP PVE 5.3.13-1 (Thu, 05 Dec 2019...
We implemented it like this (the pve-guests service still takes care of all containers as well, but we don't mind ;) )
Systemd File:
# /etc/systemd/system/startupbooster.service
[Unit]
Description=PVE startup booster
ConditionPathExists=/usr/sbin/pct
RefuseManualStart=true
RefuseManualStop=true...
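Since the unit file is cut off above, here is a hedged sketch of the kind of helper script such a service could call to start the containers in parallel (the script path and the parallelism of 8 are assumptions; pct list and pct start are the standard PVE CLI commands):

#!/bin/sh
# /usr/local/bin/parallel-ct-start.sh (sketch; path and -P value are assumptions)
# start all currently stopped containers, 8 at a time
pct list | awk '$2 == "stopped" {print $1}' | xargs -r -n1 -P8 pct start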
Hi there,
having >200 LXC containers on a server, it takes some time to bring them all up after a node reboot, especially when they are started one after the other.
Is there a way to start the containers in parallel so they come up faster?
Hi,
we have "more and more" LXC containers on our systems, and with now more than 200 of them the following error suddenly appeared:
lxc-execute: 994: utils.c: lxc_setup_keyring: 1898 Disk quota exceeded - Failed to create kernel keyring
The error is easy to fix once you see that...
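Since the post is truncated above, a hedged sketch of the kind of sysctl change that addresses this keyring quota (the concrete values are assumptions and should scale with the number of containers; the per-uid default of kernel.keys.maxkeys = 200 likely explains why the error appears at around 200 containers):

# /etc/sysctl.d/99-keyring.conf (sketch; values are assumptions)
kernel.keys.maxkeys = 2000
kernel.keys.maxbytes = 2000000

# apply without a reboot:
# sysctl -p /etc/sysctl.d/99-keyring.conf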
Thanks Alwin,
it has balanced out better now - additional OSD disks will be added over the next few days as well, and then I will probably bump pg_num again.
-> solved
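For reference, bumping the placement group count on a pool can be done like this (pool name and target value are placeholders; pick a power of two that fits the OSD count):

# increase the placement groups of a replicated pool (placeholders)
ceph osd pool set <poolname> pg_num 512
ceph osd pool set <poolname> pgp_num 512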
Hi,
we run a four-node cluster with Ceph as storage (everything PVE-managed).
This morning one OSD jumped to nearfull, and apparently the pool it belongs to did as well.
What I don't quite understand: 67% used of the raw storage, but 85% of the pool? Is that perhaps due to the overhead from "size=3" ...
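To sketch the arithmetic behind that suspicion (illustrative numbers, not taken from this cluster, and assuming a replicated pool with size=3 whose available space is limited by the fullest OSD):

# raw:   100 TiB total, 67 TiB used            -> 67 % raw used
# pool:  with size=3 every object is stored 3x, so those 67 TiB hold ~22 TiB of pool data
#        the pool's %USED is measured against USED + MAX AVAIL, and MAX AVAIL is derived
#        from the fullest OSD (divided by 3), so one nearfull OSD can push the pool
#        towards ~85 % while raw usage is still only 67 %
# compare the two views with:
ceph df detail
ceph osd df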