I've been discussing this with corosync developers and they've told me this:
https://github.com/corosync/corosync/issues/465
TLDR:
Multicast was only recommended for corosync 1.x, because unicast was not well tested yet.
For corosync 2.x, they recommend using unicast (Proxmox currently uses...
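For context, a minimal sketch of how unicast is selected in corosync 2.x via /etc/corosync/corosync.conf (the rest of the totem section is assumed unchanged from what Proxmox generates):

  totem {
    version: 2
    transport: udpu    # unicast UDP instead of multicast
    # ... remaining options unchanged ...
  }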
pvecm qdevice setup MY_IP_ADDR
says this:
INFO: initializing qnetd server
bash: corosync-qnetd-certutil: command not found
I've tried installing corosync-qnetd and corosync-qdevice but it's still not working
!!!UPDATE: SOLUTION: corosync-qnetd and corosync-qdevice have to be installed on all...
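For reference, a sketch of where each package goes under the corosync qdevice model (the external Debian-based quorum host is an assumption):

  # on the external quorum host:
  apt install corosync-qnetd
  # on every cluster node:
  apt install corosync-qdevice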
Currently Proxmox VE only lets us set QUOTA (space used including snapshots), but in many setups it also makes sense to be able to set REFQUOTA (space used excluding snapshots).
I have deployed znapzend for automatic ZFS snapshotting and replication, but these automatic snapshots are eating up...
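To illustrate the difference with a hypothetical dataset name (both properties are standard ZFS):

  zfs set quota=50G rpool/data/subvol-100-disk-0     # caps space used including snapshots
  zfs set refquota=50G rpool/data/subvol-100-disk-0  # caps space used excluding snapshots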
Let's say the server crashed for an unknown reason, and I went to the datacenter to investigate what happened. I want to boot the server without starting CTs/VMs on a system in an unknown state. Maybe there's something wrong with the hardware and it will be very slow, or it could even lead to data loss if I start all...
When I do service work on my server, I sometimes want to boot the system to make some changes, but I know I will need to reboot a few more times, so I don't want to start the CTs and VMs yet. Is there some flag I can specify in GRUB to boot without autostarting VMs or CTs...
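A sketch of one possible approach, assuming guest autostart is handled by the pve-guests systemd unit (present on recent PVE releases) and a systemd version that understands the systemd.mask= kernel command-line option (v233+):

  # at the GRUB menu, press 'e' and append to the linux line:
  systemd.mask=pve-guests.service
  # guests stay down for this boot only; a later normal reboot restores autostart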
I have trouble upgrading to PVE 5.4: it launches /bin/systemd-tty-ask-password-agent --watch on several occasions and hangs forever. It happens when configuring the pve-ha-manager and pve-manager packages. The freeze during pve-ha-manager is especially painful, as it leads to an unwanted reboot when HA...
Do you plan to handle separate swap limit accounting using cgroup v2? LXC has had cgroup v2 support since version 3.
Currently I can't use Proxmox to configure an LXC container with a swap limit smaller than its total RAM limit. The swap limit is now always "Memory+Swap".
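To make the difference concrete, a sketch of the two kernel interfaces; the container cgroup path lxc/100 is an assumption, the file names are the documented v1/v2 knobs:

  # cgroup v1: swap is only capped together with RAM (memsw = memory + swap)
  echo $((1024*1024*1024)) > /sys/fs/cgroup/memory/lxc/100/memory.limit_in_bytes
  echo $((1280*1024*1024)) > /sys/fs/cgroup/memory/lxc/100/memory.memsw.limit_in_bytes
  # cgroup v2: swap gets its own knob, independent of the RAM limit
  echo $((1024*1024*1024)) > /sys/fs/cgroup/lxc/100/memory.max
  echo $((256*1024*1024))  > /sys/fs/cgroup/lxc/100/memory.swap.max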
I think I've found the possible culprit of this problem. KSM only deduplicates (merges) memory pages that were flagged MADV_MERGEABLE via the madvise() syscall. Recent QEMU versions use madvise() to mark VM memory pages as mergeable.
KSM is available in the mainline kernel...
Yes, I understand this. KSM should not be about VMs or CTs at all; it should detect all cases in which there are duplicate memory pages. That is exactly why I started this thread.
Because I can't get KSM to scan for duplicate pages at all.
full_scans:0 means that KSM didn't even try to find...
I was experimenting with KSM. I wonder if it can work on PVE with lots of LXC containers running the same apps, e.g. lots of Apache instances. I've enabled ksmtuned with a threshold of 50%; it shows run=1, but pages_shared:0 means it's not sharing, and full_scans:0 probably means it didn't even try to find...
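For anyone debugging the same thing, the KSM state can be checked directly in sysfs (paths from the kernel's Documentation/admin-guide/mm/ksm.rst; ksmtuned normally writes these for you):

  cat /sys/kernel/mm/ksm/run           # 1 = scanner is running
  cat /sys/kernel/mm/ksm/full_scans    # 0 = not one complete pass over mergeable pages yet
  cat /sys/kernel/mm/ksm/pages_shared  # shared pages currently in use
  echo 1 > /sys/kernel/mm/ksm/run      # force-enable manually, bypassing ksmtuned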
Well, there's no way of installing a kernel module into a container; that's just not how it works. So, as you said, a VM is the way to go if you don't mind a slight performance penalty compared with a CT.
But still, I wonder if it poses a real security threat to have modules like fuse, openvpn or wireguard...
I tested it and it seems to work as expected! At least w, htop and the Nagios NRPE check_load report proper values. That's great!
Now I wonder if there's a proper way for the -l parameter to survive an upgrade of the lxcfs package...
update: it can be done using systemd overrides:
execute systemctl...
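For reference, a minimal sketch of such an override, assuming the stock unit starts lxcfs as /usr/bin/lxcfs /var/lib/lxcfs (check your actual ExecStart first):

  systemctl edit lxcfs
  # put this in the override file:
  [Service]
  ExecStart=
  ExecStart=/usr/bin/lxcfs -l /var/lib/lxcfs
  # then reload and restart:
  systemctl daemon-reload && systemctl restart lxcfs

The empty ExecStart= line clears the original command before setting the new one, so the flag survives package upgrades.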
Yes, now it showed up. I think it was cached somewhere... When I tried a few hours ago I did apt update and dist-upgrade and it didn't update lxcfs... Now it did. Thanks for the support. I will try it now.
I've just upgraded, and in man lxcfs I don't see the -l flag. I guess it's still in the pvetest repository and waiting to reach the stable repos? I don't really know how Proxmox releases are done...