After upgrading to PVE 8 and several CTs to Debian 12, I'm seeing a loadavg discrepancy. The load averages shown in the CTs are those of the host, divided by the number of CPU cores of the LXC container (shown as per-core loadavg in Zabbix), instead of the load of the LXC guest itself. The commands...
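As a quick check (a minimal sketch, assuming lxcfs is meant to virtualise /proc/loadavg for the container), comparing the value inside the CT with the host and verifying the lxcfs mount shows whether the CT is simply getting the host's figures:

# inside the CT: is /proc/loadavg provided by lxcfs or straight from the host kernel?
grep loadavg /proc/mounts
cat /proc/loadavg
# on the host, for comparison
cat /proc/loadavg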
Well, for whoever finds this thread among the other similar ones: the issue was solved by fixing /etc/hosts so the host names resolve to addresses on the desired subnet, and then restarting pve-cluster.service followed by corosync.service. Replication and other heavy traffic is flowing on the network it was...
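For illustration (a sketch only; the node name and address below are made-up examples, not the ones from this cluster), the /etc/hosts entry and restart order look roughly like this:

# /etc/hosts on each node: the node name must resolve to the address on the intended subnet
10.10.10.11  pve1.example.com  pve1
# then restart, in this order
systemctl restart pve-cluster.service
systemctl restart corosync.service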
I made a mistake when I created a 5-node cluster. I added all nodes by IP to the cluster and `pvecm status` now shows:
Cluster information
-------------------
Name:             cluster2
Config Version:   5
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date...
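For reference, whether the nodes were added by IP or by name can be checked in the corosync config (just a read-only look, assuming the usual ring0_addr entries):

grep ring0_addr /etc/pve/corosync.conf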
OK, it was actually a pretty lame issue. I had to clean up the garbage in GRUB_CMDLINE_LINUX in /etc/default/grub and presto, booting like it should. I leave this "solution" here for search engines.
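In case it saves someone else a search (example values only, not my actual kernel command line), the relevant lines in /etc/default/grub and the step to apply the change look roughly like this:

# /etc/default/grub – keep only the options you actually want
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX=""
# regenerate the boot config afterwards (on a standard GRUB setup)
update-grub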
I'm struggling with a seemingly simple task. I've copied the contents of an old Debian 8/Jessie VM from Azure to a PVE 6.1-7 host using rsync. I've set up boot, etc., and the VM does start booting, but I see the messages shown in the attached image and it's stuck there forever. The root partition is a...
I've upgraded a 5.2 system to the latest 6.3 and I'm experiencing something I've never seen before. It might or might not be PVE related, I have no idea. The gist of it is that I have a custom iptables firewall that loads without problems and does work. I had to allow a new IP for admin access, something like...
Apparently, this is the way it should be with cgroup v1. Cgroup v2 is supposed to show swap separately again but it's not active in PVE right now. It's unexplained why it was shown correctly (separately, as it should be in v2) in earlier 6.x versions. It's a mess, anyway. See also here.
Not now, as the system is live and people have started working on it, but as an idea, I might try later. But I was under the impression that nesting is only necessary when processes inside the CT try to create namespaces and other security measures using cgroups, so my gut feeling says it won't help.
It has nothing to do with the problem at hand. It's just the tun device config, which is working fine. There's a small but very definite difference in the CT between the newer and the older version of pve-lxc. The following mount entry is not present in the newer version:
proc on /proc/sys/net type proc...
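To see it yourself (just a read-only check from inside the affected CT), compare what is mounted over /proc/sys/net on the two versions:

# inside the CT: list the mounts covering /proc/sys/net
grep '/proc/sys/net' /proc/mounts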
Same issue here with a CT running OpenVPN. It needs to enable IP forwarding in the kernel (e.g. via /proc/sys/net/ipv4/ip_forward), and that now fails with a write permission error. Quite the issue: it kills most network/routing-oriented containers. I'll try downgrading in the meantime.
Downgrading...
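For the record, this is the kind of write that now fails inside the CT (the exact error text may vary):

# inside the CT – both of these now hit a permission error
sysctl -w net.ipv4.ip_forward=1
echo 1 > /proc/sys/net/ipv4/ip_forward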
I've just upgraded a server to PVE 6.3 (pve-manager/6.3-3/eee5f901, running kernel 5.4.78-2-pve) and see a sudden change in how the LXC swap amount is presented. For a long time, in older LXC versions (maybe starting with 4.x), the swap size was shown and calculated as RAM+swap, but thankfully...
@ThinkAgain
There are 2 more factors to take into account. First, you seem to use ZFS, where commits going through the ZIL cause double writes on the pool (not always; it's complicated). The Total_LBAs_Written attribute is total host writes, not taking write amplification into account (double writes are...
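As a rough way to put a number on it (assuming the counter is in 512-byte units, which is vendor-dependent), the raw SMART value converts like this:

# read the raw value from SMART
smartctl -A /dev/sdX | grep -i Total_LBAs_Written
# rough conversion, assuming 512-byte units (check your vendor's datasheet):
#   bytes written = Total_LBAs_Written * 512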
It's normal, because part1 and part2 are used for the EFI system partition and boot, respectively (if I remember correctly). And it's better to have a partition table anyway, to avoid "accidents" with bare disks.
SLAAC provides a dynamic address. At least it used to, in that environment. Now they seem to provide a dynamically assigned but static EUI-64 address for the servers (which honestly makes a lot more sense). I can add that to the hosts file, but see my other concerns. Plus I see the whole fiddling...
The systems are dual stack, not IPv6-only. If I add the v6 address as well, won't it create confusion in the cluster communication and other things?
On the other hand, the HNs use dynamic v6 addresses via SLAAC. This is a technical limitation in that system. So I can't just add an address...
I'd like to access my PVE servers on their IPv6 addresses in a DC with native IPv6 available. Currently that's impossible, as the pveproxy service is not listening on IPv6. The systems have a proper dual-stack setup with default (high) IPv6 preference. Could it be fixed so the pveproxy service listens...
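For anyone wanting to verify this on their own node (a read-only check; 8006 is the standard pveproxy port), the listener situation shows up like this:

# with IPv4-only binding there is a 0.0.0.0:8006 entry but no [::]:8006 one
ss -tlnp | grep 8006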