Search results

  1. zeha

    ACME missing in Datacenter view

    https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysadmin_certs_acme_plugins implies there should be an "ACME" section in the Datacenter settings. Looks like on 8.2.7 this went missing. I'm pretty sure I configured plugins in the web interface before, but it seems gone now. Maybe this is...
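
    Until the GUI section reappears, a minimal sketch of managing ACME plugins from the CLI, assuming the stock pvenode tool (plugin name and provider here are placeholders; check pvenode help acme plugin for the exact options on your version):

      pvenode acme plugin add dns example_dns --api <provider> --data /path/to/credentials
      pvenode acme plugin config example_dns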
  2. zeha

    System hanging after upgrade...NIC driver?

    Chiming in here because I have the same problem on 6.2.16-3-pve:
      Linux las-vh01 6.2.16-3-pve #1 SMP PREEMPT_DYNAMIC PVE 6.2.16-3 (2023-06-17T05:58Z) x86_64 GNU/Linux
      02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller (rev 05)
    Edit: worked just fine on...
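
    A quick way to see which driver has bound that NIC and whether it logged anything around the hang, assuming the PCI address 02:00.0 from the lspci line above (the in-tree driver for the RTL8125 is r8169; r8125 is Realtek's out-of-tree one):

      lspci -nnk -s 02:00.0           # "Kernel driver in use:" shows r8169 or r8125
      dmesg | grep -iE 'r8169|r8125'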
  3. zeha

    PVE system process memory usage

    It's slowly starting to swap stuff out, sure. But that's not really the point. A "fresh small install" of PVE 7.3 now uses ~1.6GB before any VMs are running. That's just a lot, and a lot more than the allocation guidelines suggest.
  4. zeha

    PVE system process memory usage

    Also turned off spiceproxy now, to save another 45M :-)
  5. zeha

    PVE system process memory usage

    Turned off pve-ha-crm and pve-ha-lrm now, thanks! There was no "actual question" except an implied "does someone who knows the code want to look at it?".
  6. zeha

    PVE system process memory usage

    Hi, I'm running PVE on RAM-constrained hardware - just 4GB, non-upgradable. Naturally, I want most of the RAM to be used for (qemu) VMs. Judging from top(1) output, these pve processes seem to use "lots" of RES memory, while maybe not doing that much with only 2 VMs set up?
      PID USER PR...
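
    A minimal sketch, assuming a single-node setup where HA and SPICE are not needed, of disabling the services mentioned in the replies above and then re-checking per-process memory:

      systemctl disable --now pve-ha-crm pve-ha-lrm spiceproxy
      ps -eo rss,comm --sort=-rss | head -n 15    # largest resident-memory processes first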
  7. zeha

    Exposing disk with multiple paths?

    Right. I do have such an iSCSI setup, but if I don’t have to use it, it’s better for me. (And FC can die in a lonely corner for all I care …)
  8. zeha

    Exposing disk with multiple paths?

    That works, indeed:
      scsi1: pve_disk_02:vm-113-disk-1,cache=none,serial=YYYYYNN,wwn=0x5001438032b17f50,size=8G
      scsi2: pve_disk_02:vm-113-disk-1,cache=none,serial=YYYYYNN,wwn=0x5001438032b17f50,size=8G
    Plus, for VirtIO SCSI, add the boot param scsi_mod.default_dev_flags=0x10000000 to force VPD...
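
    The duplicated scsiN lines above belong in the VM's config on the host (/etc/pve/qemu-server/113.conf for this VM). Inside the guest, a rough check that both paths show up and collapse into one multipath device might look like this (assuming a Debian-based guest with the multipath-tools and lsscsi packages installed; device names will differ):

      lsscsi            # two SCSI disks reporting the same WWN
      multipath -ll     # one multipath map with two paths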
  9. zeha

    Exposing disk with multiple paths?

    I would like to test dm-multipath setups inside VMs. Is there a way to expose a scsiN device as multiple paths/devices? Unsupported hacks would be good enough, obviously :-) Chris
  10. zeha

    Proxmox 6.2 LPFC error port type wrong

    Hi Thomas, reverting this commit makes lpfc login properly:
      commit 77d5805eafdb5c42bdfe78f058ad9c40ee1278b4
      Author: James Smart <jsmart2021@gmail.com>
      Date:   Mon Jan 27 16:23:03 2020 -0800

          scsi: lpfc: Fix broken Credit Recovery after driver load
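
    A sketch of reproducing that revert against a checked-out kernel source tree (build and packaging steps omitted; the hash is the commit quoted above):

      git revert 77d5805eafdb5c42bdfe78f058ad9c40ee1278b4   # scsi: lpfc: Fix broken Credit Recovery after driver load
      # rebuild and boot the patched kernel, then re-check the FC port type and fabric login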
  11. zeha

    High load/kernel oops/reboot impossible

    We had this problem again today; this time I have a pveversion -v to go with it:
      [303255.229324] BUG: kernel NULL pointer dereference, address: 0000000000000014
      [303255.229373] #PF: supervisor read access in kernel mode
      [303255.229388] #PF: error_code(0x0000) - not-present page
      [303255.229401] PGD 0...
  12. zeha

    High load/kernel oops/reboot impossible

    In case you haven't seen the thread in the English forum yet, there are a few more reports there: https://forum.proxmox.com/threads/kernel-oops-with-kworker-getting-tainted.63116/
  13. zeha

    High load/kernel oops/reboot impossible

    Hello @wolfgang, the hardware is HPE ProLiant DL380 Gen10 servers with Xeon(R) Gold 6142M CPUs, the kernel is 5.3.13-1-pve, PVE version 6.1-5. Chris.
  14. zeha

    High load/kernel oops/reboot impossible

    We see the same error on several systems (with current hardware) and 5.3.13-1-pve. It did not show up with 5.0.18-1-pve; unfortunately I can't say much about the versions in between.
  15. zeha

    Kernel Oops with kworker getting tainted.

    Just to add to this, we also see this. However, we also see hanging `corosync-quorum` processes, like this:
      root  48408  0.0  0.0  0  0  ?  D  Jan25  0:00  [corosync-quorum]
      root  48454  0.0  0.0  0  0  ?  D  04:25  0:00  [corosync-quorum]
      root  48559...
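
    A quick way to spot such uninterruptible (D-state) processes and what they are blocked on, assuming standard procps and sysrq tooling on the host:

      ps axo pid,stat,wchan:32,comm | awk '$2 ~ /^D/'
      echo w > /proc/sysrq-trigger    # needs root; logs stacks of blocked (D-state) tasks to dmesg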
  16. zeha

    PVE6 slab cache grows until VMs start to crash

    Indeed, this also happens on non-PVE machines. It's probably just that, combined with the generally more powerful hardware of the PVE hosts, it's only really noticeable there. On a non-PVE host, I see this:
      ● system-check_mk.slice
        Loaded: loaded
        Active: active since Tue 2019-10-08 19:50:50 CEST; 4 days...
  17. zeha

    PVE6 slab cache grows until VMs start to crash

    That also appears to solve the problem. Bit of a meh "solution" though.
  18. zeha

    PVE6 slab cache grows until VMs start to crash

    For now I can report this: On one site I've switched the misbehaving machine from check_mk.socket to xinetd (because the rest of the fleet there is set up like that), and the problem is gone. I'll try Type=forking soon.
  19. zeha

    PVE6 slab cache grows until VMs start to crash

    BTW, even better: systemctl stop system-check_mk.slice results in the memory getting freed. No useless reboots <3
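
    A rough sketch of checking whether the slab growth is tied to that slice and reclaiming it without a reboot, assuming the systemd socket-activated check_mk agent described in this thread:

      grep -E 'Slab|SReclaimable|SUnreclaim' /proc/meminfo
      slabtop -o | head -n 15                # one-shot view of the largest slab caches
      systemctl stop system-check_mk.slice   # frees the memory, per the post above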