Search results

  1. API: Setting features on lxc/VMID/config
     I see, thank you for responding. It would help to document that in the API documentation.
  2. API: Setting features on lxc/VMID/config
     I am trying to define features, e.g. keyctl=1, in an LXC container's config, either while creating it or after creation. I can see this working from my web browser when I'm logged in as root@pam. However, when I call this from a new user created in the pve realm with all permissions assigned... (see the first sketch after this list)
  3. Kernel panic: ip_set_hash_net
     There is a new problem on this same server. Maybe related to this, maybe not. We were running fine on 4.15.18-8-pve, and when we upgraded to 4.15.18-9-pve the network does not work at all - no network-specific errors in the console or logs, but there is absolutely no network...
  4. Kernel panic: ip_set_hash_net
     Sure, that is fine. Thanks!
  5. Kernel panic: ip_set_hash_net
     This is the script: https://github.com/DevelopersPL/pkgbuild/blob/master/ipset-nuclear/ipset-nuclear You only need ipset installed to run it. It's running in an unprivileged Arch Linux container with 4 GB RAM and 2 GB swap; RAM is initially at 1.5 GB use (the specs shouldn't matter). Host is at 21/32...
  6. Kernel panic: ip_set_hash_net
     Don't really see much, that's all I see: # cat /proc/27695/stack [<0>] 0xffffffffffffffff Arch Linux. I also noticed that on the host, the memory does in fact approach 100%. Then I have swap (half of RAM), so it doesn't crash right away. The interesting thing is that I can't tell why the...
  7. Kernel panic: ip_set_hash_net
     I've tracked this down to a specific ipset command running in an unprivileged LXC container. On the host, when it's not crashing, it "looks" like this: Message from syslogd@pve at Jan 1 19:25:29 ... kernel:[86763.415164] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [ipset:11633] Message...
  8. Kernel panic: ip_set_hash_net
     And a new one from last night: root@pve:~# journalctl -b -1 --no-pager --since '2018-12-29 07:00:00' -- Logs begin at Thu 2018-12-06 17:24:36 UTC, end at Sat 2018-12-29 15:13:01 UTC. -- Dec 29 07:00:00 pve systemd[1]: Starting Proxmox VE replication runner... Dec 29 07:00:01 pve CRON[4976]...
  9. Kernel panic: ip_set_hash_net
     Let me try that again. It looks like the journalctl "pager" is causing the "skipped". But still, the messages sometimes appear to be mangled in the journal itself. Here's the oldest one (2018-12-10): root@pve:~# journalctl -b -4 --no-pager --since '2018-12-10 17:35:00' -- Logs begin at...
  10. Kernel panic: ip_set_hash_net
      Thanks for the suggestion, but that does not seem to be working: root@pve:~# apt install r8168-dkms Reading package lists... Done Building dependency tree Reading state information... Done The following NEW packages will be installed: r8168-dkms 0 upgraded, 1 newly installed, 0 to remove...
  11. Kernel panic: ip_set_hash_net
      I'd love to upgrade the BIOS, but that may be impossible. The latest update is from 2015 anyway. It's an OVH dedicated server. CPU: Intel(R) Xeon(R) CPU E3-1245 V2 @ 3.40GHz Motherboard: ASUSTeK COMPUTER INC. Product Name: P8H77-M PRO Version: Rev X.0x
  12. Kernel panic: ip_set_hash_net
      We have a server that typically runs for 6 months between reboots to upgrade the kernel. In other words, it's very stable and has been running for 4 years like that. Recently, we had an unexpected hangup/crash-reboot. Upon that, a new kernel was loaded, but the situation recurred. Then a...
  13. Shrinking ZFS filesystem for LXC CT
      Howdy, I have given Proxmox on ZFS a test run recently (having used Proxmox for many years without ZFS). I was hoping that Proxmox on ZFS would allow LXC containers to have their disk allocation both extended and shrunk. However, the Proxmox GUI does not seem to allow that. Running the...
  14. Proxmox VE - Support Lifecycle
      I will be happy to sell you an "LTS" version so you can continue using the same Proxmox version for 20 more years (as if I could prevent you from doing so...). Just remember that I don't actually plan on doing anything for you (besides sending an invoice). But everything is working smoothly for you...
  15. Unable to create new inotify object: Too many open files at /usr/share/perl5 ...
      I am also hitting this problem with LXC containers. I had just 19 of them, and the pveproxy service died because of that:
  16. Issue with pct after exiting lxc
      Same problem occurs for me. "clear" or "reset" help.
  17. Kernel panic (PVE 4.1 / 4.2.6-1-pve)
      Today we had one of the Proxmox hosts stop with the following message: The host was still running kernel 4.2.6-1-pve (package version: 4.2.6-26) because we had not rebooted into the newer kernel that was already installed (4.2.6-28). I'm just posting this in hopes of verifying whether that is a known...
  18. OVH : more than one ip failover on the same lxc container
      pa657, I did a quick test for you on my Proxmox machine at OVH. To my Debian-based LXC container, I added another interface (net1, eth1) with a different MAC and another IPv4 (I added it while the container was stopped). After I started the container, it successfully replies to pings to... (see the second sketch after this list)
  19. [SOLVED] LXC containers can't access 169.254.169.254 bound to lo on host
      My use case involves having LXC containers connect to 169.254.169.254 (inside the guest). This IP (169.254.169.254) is added to the "lo" interface on the host (ip a add 169.254.169.254 dev lo). On Proxmox 3, this worked well. I had a server running on the host, binding to 169.254.169.254:80, and guests were... (see the third sketch after this list)
  20. OVH : more than one ip failover on the same lxc container
      If you are using both IPs on the same interface, did you configure the same virtual MAC for both failover IPs?
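
A few follow-up sketches, not taken from the threads themselves. First, for the API question in results 1-2: a minimal way to set container features from the host shell, assuming a container with VMID 101 on a node named "pve" (both placeholders). Per the snippet, the same change succeeds for root@pam, so a restricted pve-realm user may still be rejected regardless of the syntax used.

    # Hypothetical VMID/node; set a feature via the CLI on the host:
    pct set 101 --features keyctl=1
    # Or through the API wrapper (maps to PUT /nodes/pve/lxc/101/config):
    pvesh set /nodes/pve/lxc/101/config --features keyctl=1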
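Second, for result 18: a sketch of how such an extra interface might be added from the host, assuming VMID 101, bridge vmbr0, and placeholder values for the OVH virtual MAC and failover IP.

    # Hypothetical values; adds net1/eth1 with its own MAC and a /32 failover IP:
    pct set 101 --net1 name=eth1,bridge=vmbr0,hwaddr=02:00:00:AA:BB:CC,ip=203.0.113.10/32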
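Third, for result 19: the host-side binding described in the snippet, written out in full; the check command is only an assumption about how one might verify the listener.

    # Bind the link-local address to the loopback interface on the host:
    ip addr add 169.254.169.254/32 dev lo
    # Assumed check: confirm something is listening on that address (e.g. port 80):
    ss -tlnp | grep 169.254.169.254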
