Search results

  1. LXC containers shows host's load average

    Actually I think I'll send a patch but I have no clue if it'll be accepted.
  2. LXC containers shows host's load average

    In the bugzilla issue they didn't fix anything, just closed it and provided a workaround, which in my view is not optimal. Not sure where the change should go, but there should be a /etc/default/lxcfs with a LXCFS_OPTS= var to be able to specify whether you want this enabled or not. I understand...
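
    As a sketch of what the proposed defaults file could look like (hypothetical — lxcfs does not ship an /etc/default/lxcfs upstream; `-l`/`--enable-loadavg` is the lxcfs 4.x flag that virtualizes /proc/loadavg per container):

    ```
    # /etc/default/lxcfs  (proposed file, not shipped upstream)
    # Would be sourced by the init script or a systemd drop-in so operators
    # can toggle per-container load averages without editing the unit.
    # -l / --enable-loadavg: have lxcfs virtualize /proc/loadavg (lxcfs >= 4.0)
    LXCFS_OPTS="-l"
    ```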
  3. PVE cluster on remote nodes with tinc unusable

    Can't you just use a vswitch then?
  4. 12€ Hyperconverged ProxMox 7 Cloud Cluster with 1TB storage

    Yeah, I'm doing something similar right now, I might share my setup too!
  5. 12€ Hyperconverged ProxMox 7 Cloud Cluster with 1TB storage

    What are those compute nodes for 1eur a month?
  6. unexpected restart of all cluster nodes

    Thanks @fabian, I've read the corosync GitHub issue; it looked quite involved :) Good job!
  7. LXC with fuse-overlayfs won't replicate or backup (hangs at freeze)

    Is there a solution for this? If you're using ZFS and with aufs gone, your only choice seems to be fuse-overlayfs, but then you can't back up the LXC or use ZFS replication. Can you even migrate LXCs?
  8. DiskIO in CT missing

    Same here: Version: root@pve02:~# pveversion pve-manager/6.3-6/2184247e (running kernel: 5.4.106-1-pve)
  9. [SOLVED] CTs used memory keeps growing until full

    Hi all, We eventually found what was causing the leak and it was not in proxmox. It was a bug in a library (nss-softokn) in the Centos7 CTs, which is fixed by upgrading that library to a newer version, see the relevant commit: Bug 1603801 [patch] Avoid dcache pollution from sdb_measureAccess()...
  10. [SOLVED] CTs used memory keeps growing until full

    The journal size is pretty small tbh: root@pmxc-12:~# journalctl --disk-usage Archived and active journals take up 24.0M in the file system. By the way, the OOM killer is never triggered because the CT gets almost up to its mem limit. 97% or so.
  11. [SOLVED] CTs used memory keeps growing until full

    Thanks @oguz, if you need any more information let me know. I'll keep investigating on my side but at this point I'm out of ideas.
  12. [SOLVED] CTs used memory keeps growing until full

    More information: the same issue occurs on another CT, on a host we upgraded. Detailed pveversion: proxmox-ve: 6.3-1 (running kernel: 5.4.106-1-pve) pve-manager: 6.3-6 (running version: 6.3-6/2184247e) pve-kernel-5.4: 6.3-8 pve-kernel-helper: 6.3-8 pve-kernel-5.0: 6.0-11 pve-kernel-5.4.106-1-pve...
  13. [SOLVED] CTs used memory keeps growing until full

    We are still puzzled by this issue and haven't found the cause yet. More information: there have been no recent changes to that server that would explain that kind of memory usage increase. Other things we have observed: - Running kernel 5.4.106-1-pve and reverting packages lxc-pve and...
  14. [SOLVED] Remove disk from ZFS pool

    Eventually I removed the disk from the pool and then, following the remark from @avw, I could attach it as a mirror: pool: rpool state: ONLINE scan: scrub repaired 0B in 07:22:19 with 0 errors on Sun Apr 11 07:46:20 2021 remove: Removal of vdev 1 copied 415G in 1h0m, completed on Mon Apr 19...
  15. [SOLVED] Remove disk from ZFS pool

    Yes, I'm using GRUB to boot. I've upgraded proxmox but I'm afraid to do the 'zpool remove' now. I'll try to backup everything first, then I'll try.
  16. [SOLVED] CTs used memory keeps growing until full

    zabbix-proxy and salt-master. The image used for both is centos-7. The zabbix-proxy one: arch: amd64 cores: 2 hostname: zabbix-proxy.mysite memory: 12288 nameserver: net0: name=eth0,bridge=vmbr0,gw=,hwaddr=xx:xx:xx:xx:xx,ip=,tag=11,type=veth onboot: 1...
  17. [SOLVED] CTs used memory keeps growing until full

    Hi, We have detected a strange (or not well-understood) behavior in the memory usage of, at least, two containers but we believe it's a generalized issue. After a CT restart the memory usage keeps steadily growing until, after a couple days, it reaches around 96-99% of the total assigned...
  18. [SOLVED] Remove disk from ZFS pool

    Yes, I also tried that and it didn't work: root@pve01:~# zpool detach rpool wwn-0x5000c500b00df01a-part3 cannot detach wwn-0x5000c500b00df01a-part3: only applicable to mirror and replacing vdevs Proxmox version and zfs versions: root@pve01:~# zfs version zfs-0.8.5-pve1 zfs-kmod-0.8.5-pve1...
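
    The error above is the key distinction: `zpool detach` only applies to mirror or replacing vdevs, while a single top-level disk has to be evacuated with `zpool remove` (supported for plain disk vdevs since ZFS 0.8). A sketch of the sequence this thread converges on, with placeholder device names, is:

    ```
    # Assumes pool "rpool" with two single-disk top-level vdevs (no mirror);
    # device names are placeholders.

    # detach fails here: it is only valid for mirror/replacing vdevs
    zpool detach rpool wwn-0xDISK2-part3

    # evacuate the vdev instead; its data is copied to the remaining vdev
    zpool remove rpool wwn-0xDISK2-part3
    zpool status rpool    # shows "remove:" progress, then completion

    # once removed, the freed disk can be attached as a mirror of the other
    zpool attach rpool wwn-0xDISK1-part3 wwn-0xDISK2-part3
    ```

    Note that on a GRUB-booted root pool (as in result 15), the bootloader must be reinstalled on the newly attached disk afterwards.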

