Both problematic nodes are still using the older ifupdown.
IMHO this should be added to https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0#Actions_step-by-step
Some nodes refused to connect to the rest of the cluster. The issue was caused by the old netmask syntax in the /etc/network/interfaces config:
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.1
        gateway 192.168.100.254
        netmask 255.255.0.0
        bridge_ports...
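For reference, a minimal sketch of the same stanza rewritten with CIDR notation (address and netmask merged into a single line; the bridge_ports value below is just a placeholder, keep your existing ports):

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.1/16
        gateway 192.168.100.254
        bridge_ports eno1

With ifupdown2 the change can then be applied with `ifreload -a`; with plain ifupdown a reboot or an ifdown/ifup of the bridge does the same.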
How much data do you plan per domain? With 960GB that would yield roughly 15GB each. I'd go for two smaller SSDs with mdadm for the OS and two 480GB SSDs in a ZFS RAID1 mirror. And don't forget about ZFS compression, which will boost your usable storage capacity even further.
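A quick sketch of enabling and checking compression (the pool name rpool is just an assumption here):

# enable lz4 compression on the pool; child datasets inherit it
zfs set compression=lz4 rpool
# check the setting and the achieved ratio
zfs get compression,compressratio rpool

Keep in mind that compression only applies to data written after it is enabled, so it pays off most when set before migrating the domains onto the pool.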
Seems like journald was the culprit: it filled `/run/log/journal` with its logs, which caused the tmpfs on /run to consume the whole memory dedicated to the container.
root@dnsmasq:~# journalctl --disk-usage
Archived and active journals take up 144.0M on disk.
root@dnsmasq:~# df -h
Filesystem...
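One possible mitigation (a sketch; the 64M cap is an arbitrary value) is to vacuum the existing journal and cap its size permanently:

# free what is already sitting in /run/log/journal and /var/log/journal
journalctl --vacuum-size=64M
# make the cap permanent in /etc/systemd/journald.conf:
#   [Journal]
#   RuntimeMaxUse=64M   (limits the volatile journal under /run/log/journal)
#   SystemMaxUse=64M    (limits the persistent journal under /var/log/journal)
systemctl restart systemd-journald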
I'm running Proxmox 5.1 with ZFS as the storage backend, and I can't wrap my head around the memory usage reported by the OOM killer:
Nov 02 07:19:54 srv-01-1 kernel: Task in /lxc/111 killed as a result of limit of /lxc/111
Nov 02 07:19:54 srv-01-1 kernel: memory: usage 1048576kB, limit...
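If I read it right, the limits in that message come from the container's memory cgroup (cgroup v1 on this kernel), which can be inspected directly; a sketch assuming the /lxc/111 path from the log:

# hard memory limit of the container
cat /sys/fs/cgroup/memory/lxc/111/memory.limit_in_bytes
# combined memory+swap limit
cat /sys/fs/cgroup/memory/lxc/111/memory.memsw.limit_in_bytes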
If the "swap" parameter is actually memory+swap this example would make no sense to me:
Memory: 1024MB
Swap: 0MB
How can memory+swap be set to 0MB if the memory is set to 1024MB?
Could somebody elaborate on how to configure the LXC memory settings properly?
The UI allows me to set the memory limit of my container to e.g 1024MB and 0MB for swap. According to the documentation this would be wrong:
Is the UI missing a sanity check or am i misinterpreting the documentation?
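For what it's worth, my current understanding (happy to be corrected) is that the UI values simply land in the container config, and the memory+swap limit is memory plus swap, so swap: 0 would just mean the container gets no swap at all rather than a 0MB total. A sketch, using the container ID 111 from the log above:

# /etc/pve/lxc/111.conf -- values as shown in the UI
memory: 1024
swap: 0

# equivalent CLI call
pct set 111 --memory 1024 --swap 0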