Search results

  1. Corosync 3.x: Multicast (for now) obsolete, use of unicast (or knet) is recommended

    I've been discussing this with corosync developers and they've told me this: https://github.com/corosync/corosync/issues/465 TL;DR: Multicast was only recommended for corosync 1.x, because unicast was not tested yet. For corosync 2.x, they recommend using unicast (Proxmox currently uses...
  2. PVE5 and quorum device

    pvecm qdevice setup MY_IP_ADDR says this: INFO: initializing qnetd server bash: corosync-qnetd-certutil: command not found I've tried installing corosync-qnetd and corosync-qdevice but it's still not working. UPDATE: SOLUTION: corosync-qnetd and corosync-qdevice have to be installed on all...
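The fix described in that excerpt can be sketched as follows; the qnetd host address is a placeholder, and the exact package split between the external host and the cluster nodes is my reading of the thread, not official documentation:

```shell
# On the external quorum host (a machine OUTSIDE the cluster):
apt install corosync-qnetd

# On EVERY cluster node (not only the node running the setup command,
# which is what the "command not found" error above points at):
apt install corosync-qdevice

# Then, from one cluster node, register the qnetd host
# (192.0.2.10 is a placeholder address):
pvecm qdevice setup 192.0.2.10
```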
  3. ZFS Quota vs. Refquota

    Thanks. Done: https://bugzilla.proxmox.com/show_bug.cgi?id=2201
  4. ZFS Quota vs. Refquota

    Currently Proxmox VE only enables us to set QUOTA (space used including snapshots), but in many setups it also makes sense to set REFQUOTA (space used excluding snapshots). I have deployed znapzend for automatic ZFS snapshotting and replication, but these automatic snapshots are eating up...
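The quota/refquota distinction the poster wants can be shown directly with the zfs command; the dataset name below is a placeholder, not taken from the thread:

```shell
# quota limits everything charged to the dataset, snapshots included;
# refquota limits only the space the dataset itself references,
# so automatic snapshots cannot eat into the guest's usable space.
# "tank/subvol-100-disk-0" is a placeholder dataset name.
zfs set quota=20G tank/subvol-100-disk-0      # includes snapshot space
zfs set refquota=10G tank/subvol-100-disk-0   # excludes snapshot space
zfs get quota,refquota tank/subvol-100-disk-0
```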
  5. Inhibit VM/CT autostart from GRUB kernel cmdline

    Let's say the server crashed for an unknown reason, and I've gone into the datacenter to investigate what happened. I want to boot the server without starting CTs/VMs on a system in an unknown state. Maybe there's something wrong with the hardware and it will be very slow or even lead to data loss if I start all...
  6. Inhibit VM/CT autostart from GRUB kernel cmdline

    When I do some kind of service work on my server, sometimes I want to boot the system to make some changes, but I know that I will need to reboot a few more times, so I don't want to start the CTs and VMs yet. Is there some flag that I can specify in GRUB to boot without autostarting VMs or CTs...
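One possible approach to the question above, assuming guest autostart is driven by a systemd unit named pve-guests.service (true on recent PVE releases, but verify on your version) and relying on systemd's generic kernel-cmdline masking, not on any Proxmox-specific flag:

```shell
# At the GRUB menu, press 'e' on the boot entry and append this to
# the line starting with "linux", then boot with Ctrl-x:
#
#   systemd.mask=pve-guests.service
#
# systemd.mask= applies a runtime mask for this boot only, so the
# next normal reboot autostarts guests again. Once satisfied the
# system is healthy, start the guests manually:
systemctl start pve-guests.service
```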
  7. LVM-Thin will eat your data?

    After the last reboot of a Proxmox machine running LVM thin, I ran into the following problem: [ 15.801293] device-mapper: table: 253:11: thin: Couldn't open thin internal device [ 15.810909] device-mapper: ioctl: error adding target to table [ 15.829579] device-mapper: table: 253:11: thin...
  8. Recovering from lvm thin metadata exhaustion

    Have you managed to resolve this?
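For readers hitting the same metadata exhaustion, a common recovery sketch looks like this; the pool name pve/data is an assumption (the PVE default), and you should back up the metadata device before attempting a repair:

```shell
# The pool must be inactive before repair:
lvchange -an pve/data

# Attempt an automatic metadata repair (runs thin_repair internally,
# writing repaired metadata to a fresh metadata LV):
lvconvert --repair pve/data

# Grow the metadata LV so it does not fill up again:
lvextend --poolmetadatasize +1G pve/data

# Reactivate and check:
lvchange -ay pve/data
lvs -a
```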
  9. Proxmox VE 5.4 released!

    BTW, why is there still QEMU 2 in Proxmox? There is already QEMU 4.0.0...
  10. Freeze upgrading to PVE 5.4

    I'm having trouble upgrading to PVE 5.4: it launches /bin/systemd-tty-ask-password-agent --watch on several occasions and hangs forever. It happens when configuring the pve-ha-manager and pve-manager packages. The freeze during pve-ha-manager is especially painful, as it leads to an unwanted reboot when HA...
  11. Proxmox VE 5.4 released!

    Do you plan to handle separate swap limit accounting using cgroup v2? LXC has had cgroup v2 support since version 3. Currently I can't use Proxmox to configure an LXC container with a swap limit smaller than the total RAM limit. The swap limit is now always "Memory+Swap".
  12. LXC container using more than 100% SWAP

    LXC has had cgroup v2 support since version 3.0.0, so I guess this can now be fixed in Proxmox VE.
  13. KSM - Will it work for LXC?

    I think I've found a possible culprit for this problem. KSM only deduplicates (merges) memory pages that were flagged with the MADV_MERGEABLE flag using the madvise() syscall. Recent QEMU versions use madvise() to advise that memory pages used by VMs be merged. KSM is available in the mainline kernel...
  14. KSM - Will it work for LXC?

    Yes, I understand this. KSM should not be about VMs or CTs at all. It should detect all cases in which there are duplicate memory pages. That is exactly why I started this thread: because I can't get KSM to scan for duplicate pages at all. full_scans:0 means that KSM didn't even try to find...
  15. KSM - Will it work for LXC?

    I was experimenting with KSM. I wonder if it can work on PVE with lots of LXC containers running the same apps, e.g. lots of Apaches. I've enabled ksmtuned with a threshold of 50%; it seems to show run=1, but pages_shared:0 means it's not sharing, and full_scans:0 probably means it didn't even try to find...
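The KSM counters quoted in these excerpts live in sysfs; a quick way to reproduce the poster's diagnosis on any host:

```shell
# Show whether KSM is enabled and whether it is actually merging pages:
grep . /sys/kernel/mm/ksm/run \
       /sys/kernel/mm/ksm/full_scans \
       /sys/kernel/mm/ksm/pages_shared \
       /sys/kernel/mm/ksm/pages_sharing

# Interpretation (per the thread):
#   run=1, full_scans=0  -> scanner enabled but no pass completed yet
#   pages_shared=0 after several full scans -> nothing to merge, i.e.
#   no process flagged its pages MADV_MERGEABLE via madvise(); QEMU
#   does this for guest RAM, plain LXC container processes do not.
```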
  16. [SOLVED] RTNETLINK Operation not supported - ubuntu lxc container

    Well, there's no way of installing a kernel module into a container. That's just not how it works. So, as you said, a VM is the way to go if you don't mind a slight performance penalty compared with a CT. But still, I wonder if it poses a real security threat to have modules like fuse, openvpn or wireguard...
  17. LXC loadavg

    I tested and it seems to work as expected! At least w, htop and the Nagios NRPE check_load report proper values. That's great! Now I wonder if there's a proper way for the -l parameter to survive an upgrade of the lxcfs package... update: it can be done using systemd overrides: execute systemctl...
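The systemd-override approach the poster mentions can be sketched like this; the exact ExecStart path and mount point may differ between lxcfs versions, so copy the original line from `systemctl cat lxcfs` and only add the -l flag:

```shell
# Create a drop-in override that survives package upgrades:
systemctl edit lxcfs

# In the editor, enter (ExecStart= must be cleared before redefining):
#   [Service]
#   ExecStart=
#   ExecStart=/usr/bin/lxcfs -l /var/lib/lxcfs

# Apply the override:
systemctl daemon-reload
systemctl restart lxcfs
```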
  18. LXC loadavg

    Yes, now it showed up. I think it was cached somewhere... When I tried a few hours ago I did apt update and dist-upgrade and it didn't update lxcfs... Now it did. Thanks for the support. I will try it now.
  19. LXC loadavg

    I've just upgraded, and in man lxcfs I don't see the -l flag. I guess it's still in the pvetest repository and waiting to get to the stable repos? I don't really know how Proxmox releases are done...
  20. LXC loadavg

    Masking loadavg is not really about security... at least in my case. It's more about creating the best possible illusion of a separate server.
