Search results

  1. Old netmask format causes issues after PVE 7 upgrade

    Both problematic nodes are still using the older ifupdown. IMHO this should be added to https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0#Actions_step-by-step
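    For context, a quick sketch (not from the original thread) of checking which variant a node runs and moving it to ifupdown2:

    ```
    # Show whether the legacy ifupdown or ifupdown2 package is installed
    dpkg -l | grep ifupdown

    # Replace the legacy ifupdown with ifupdown2
    apt install ifupdown2

    # With ifupdown2, interface changes can be applied live
    ifreload -a
    ```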
  2. Old netmask format causes issues after PVE 7 upgrade

    Some nodes refused to connect with the rest of the cluster. The issue was caused by the old netmask syntax in the /etc/network/interfaces config: auto vmbr0 iface vmbr0 inet static address 192.168.1.1 gateway 192.168.100.254 netmask 255.255.0.0 bridge_ports...
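    The usual fix is to fold the netmask into CIDR notation on the address line, which ifupdown2 parses cleanly. A minimal sketch of the corrected stanza, using the addresses from the snippet above (the bridge port name eno1 is a placeholder, not taken from the original post):

    ```
    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.1/16
        gateway 192.168.100.254
        bridge_ports eno1
    ```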
  3. Container process being OOM killed

    Done: https://bugzilla.proxmox.com/show_bug.cgi?id=1597
  4. Container process being OOM killed

    No, I didn't, because I'm not quite sure who's in charge of the standard Ubuntu template. Is it maintained by Proxmox?
  5. Container process being OOM killed

    I would think so. The overall problem is that the tmpfs percentage limit isn't working properly.
  6. Proxmox VE 5.1 released!

    Does the latest pve-test kernel fix this issue?
  7. Best setup for 4xSSD RAID10

    The performance of LVM with snapshots tends not to be the best.
  8. Best setup for 4xSSD RAID10

    How much data do you plan per domain? 960GB would yield ~15GB each. I'd go for two smaller SSDs with mdadm for the OS and two 480GB SSDs in a ZFS RAID1. And don't forget the ZFS compression feature, which will boost your usable storage capacity even more.
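    As a rough sketch of that layout (pool name and device paths are placeholders, and lz4 is simply a common choice), the ZFS mirror with compression could be created like this:

    ```
    # Create a two-disk ZFS mirror (RAID1) for guest storage
    zpool create -o ashift=12 tank mirror \
        /dev/disk/by-id/ata-SSD_A /dev/disk/by-id/ata-SSD_B

    # Enable transparent compression on the whole pool
    zfs set compression=lz4 tank

    # Check how much space compression is actually saving
    zfs get compressratio tank
    ```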
  9. Best setup for 4xSSD RAID10

    That depends heavily on the intended use case.
  10. Container process being OOM killed

    Seems like journald was the culprit. It filled `/run/log/journal` with its logs, which caused the tmpfs of `/run` to consume the whole memory dedicated to the container. root@dnsmasq:~# journalctl --disk-usage Archived and active journals take up 144.0M on disk. root@dnsmasq:~# df -h Filesystem...
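    A possible mitigation, sketched here as an assumption rather than taken from the thread, is to cap journald's volatile storage inside the container so `/run/log/journal` can never grow into the memory limit:

    ```
    # /etc/systemd/journald.conf (inside the container)
    [Journal]
    RuntimeMaxUse=32M   # cap the tmpfs-backed journal under /run/log/journal
    SystemMaxUse=128M   # optional: also cap persistent logs under /var/log/journal
    ```

    After editing, restart journald and trim what is already there:

    ```
    systemctl restart systemd-journald
    journalctl --vacuum-size=32M
    ```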
  11. Container process being OOM killed

    I'm running Proxmox 5.1 with ZFS as the storage backend and I can't wrap my head around the memory usage reported by the OOM killer: Nov 02 07:19:54 srv-01-1 kernel: Task in /lxc/111 killed as a result of limit of /lxc/111 Nov 02 07:19:54 srv-01-1 kernel: memory: usage 1048576kB, limit...
  12. LXC container memory configuration

    If the "swap" parameter is actually memory+swap this example would make no sense to me: Memory: 1024MB Swap: 0MB How can memory+swap be set to 0MB if the memory is set to 1024MB?
  13. LXC container memory configuration

    Could somebody elaborate on how to configure the LXC memory settings properly? The UI allows me to set the memory limit of my container to e.g. 1024MB and 0MB for swap. According to the documentation this would be wrong. Is the UI missing a sanity check, or am I misinterpreting the documentation?
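    My reading of it, with illustrative values and an example vmid of 101: in the container config the swap value is the amount of swap granted on top of the memory limit, and Proxmox derives the combined memory+swap cgroup limit from the two values, so memory: 1024 with swap: 0 simply means 1024MB of RAM and no swap at all:

    ```
    # /etc/pve/lxc/101.conf (vmid and values are just an example)
    memory: 1024   # RAM limit in MB
    swap: 512      # additional swap in MB; the combined memory+swap limit becomes 1536MB
    ```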
