Recent content by Daniel Keller

  1. Opt-in Linux 6.8 Kernel for Proxmox VE 8 available on test & no-subscription

    I also have BIOS version 1.4 and have likewise solved it by blacklisting the module. I'm waiting for kernel 6.8.7, which has some bugfixes for bnxt_en, before I try a firmware update.
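
    For reference, a minimal sketch of the blacklist setup, assuming the culprit is the bnxt_re RDMA module as the FW-stall traces suggest (the file name is my choice; any file under /etc/modprobe.d/ works):

        # keep the RDMA module from loading at boot
        echo "blacklist bnxt_re" > /etc/modprobe.d/blacklist-bnxt_re.conf
        # rebuild the initramfs so the blacklist applies early
        update-initramfs -u -k all

    Reboot afterwards so the module is no longer loaded.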
  2. Proxmox VE 8.2 released!

    Do you have an error from the bnxt_en kernel module in your logs? Then it could be the same error as here: https://forum.proxmox.com/threads/opt-in-linux-6-8-kernel-for-proxmox-ve-8-available-on-test-no-subscription.144557/post-652507. Putting the kernel module on the blocklist should help.
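
    To check, the kernel messages of the current boot can be filtered for the driver (standard journalctl flags):

        journalctl -k -b | grep -i bnxt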
  3. Opt-in Linux 6.8 Kernel for Proxmox VE 8 available on test & no-subscription

    Yes, I might try the BIOS update. The network configuration is just a simple bond over both interfaces:

        auto enp193s0f0np0
        iface enp193s0f0np0 inet manual

        auto enp193s0f1np1
        iface enp193s0f1np1 inet manual

        auto bond0
        iface bond0 inet static
            address --------
            gateway --------
        ...
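
    For comparison, a complete minimal bond stanza in /etc/network/interfaces could look like this (the addresses and bond options below are placeholders of my choosing, not the original poster's values):

        auto bond0
        iface bond0 inet static
            address 192.0.2.10/24
            gateway 192.0.2.1
            bond-slaves enp193s0f0np0 enp193s0f1np1
            bond-mode active-backup
            bond-miimon 100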
  4. Opt-in Linux 6.8 Kernel for Proxmox VE 8 available on test & no-subscription

    After the update to 6.8 the bnxt_en driver also does not work for me. In this case it is the onboard ports of a SuperMicro H13SSL-NT mainboard, and there seems to be no firmware update for them.

        Apr 09 19:55:10 gcd-virthost3 kernel: bnxt_en 0000:c1:00.1: QPLIB: bnxt_re_is_fw_stalled: FW STALL...
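
    The firmware the ports are currently running can be read with ethtool (the interface name is an example):

        ethtool -i enp193s0f0np0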
  5. Dual Socket terrible performance on some VMs

    Run sysbench on your Proxmox host with hwloc-bind --single; then it will run on one socket only and should perform better.

        sysbench --test=memory --memory-block-size=4G --memory-total-size=32G run
        WARNING: the --test option is deprecated. You can pass a script name or path on the command line...
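
    A possible combined invocation, assuming the hwloc package is installed (package:0 pins the run to the first socket; the newer sysbench syntax drops the deprecated --test= option):

        hwloc-bind package:0 -- sysbench memory --memory-block-size=4G --memory-total-size=32G run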
  6. Dual Socket terrible performance on some VMs

    If you only need 4 cores in the VM, why not use 1 socket with 4 cores? 2 sockets are only necessary if you need more cores or memory in a VM than one physical CPU/socket provides.
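
    On the host this can be set with qm (the VM ID 100 is an example):

        qm set 100 --sockets 1 --cores 4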
  7. Opt-in Linux 6.5 Kernel with ZFS 2.2 for Proxmox VE 8 available on test & no-subscription

    Kernel 6.5 works fine, except on the one server with an Adaptec controller, where I get the following error:

        aacraid: Host adapter abort request.
        aacraid: Outstanding commands on (0,1,17,0):
        aacraid: Host bus reset request. SCSI hang ?
        aacraid 0000:18:00.0: Controller reset type is 3

    this is...
  8. ceph storage

    As @gurubert said, build one large Proxmox cluster across all 6 nodes. The only factor that determines which nodes are storage nodes is where you install the OSD and MON services for Ceph.
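
    On Proxmox VE those services are created per node, for example (the device path is an example):

        pveceph mon create
        pveceph osd create /dev/sdb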
  9. Slow write speeds DRBD

    This is pretty much the speed of a 1 Gbit network (1 Gbit/s ÷ 8 = 125 MB/s theoretical, roughly 110 MB/s in practice), so I would guess the network connection is the bottleneck.
  10. Proxmox VE 8.0 (beta) released!

    Hello, the upgrade to Proxmox 8 with hyper-converged Ceph worked without any issues. But Ceph shows a warning:

        Module 'dashboard' has failed dependency: PyO3 modules may only be initialized once per interpreter process
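
    If the dashboard is not needed, one way to silence the warning (it does not fix the underlying PyO3 dependency) is to disable the mgr module:

        ceph mgr module disable dashboard

    It can be re-enabled later with ceph mgr module enable dashboard.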
  11. How to change osd_recovery_max_active in ceph 17.2?

    Should be fixed with Ceph v17.2.6: https://github.com/ceph/ceph/pull/49437/commits/81c0ca6cdc623278f64efd1daf65887d57ece621
  12. How to change osd_recovery_max_active in ceph 17.2?

    https://docs.ceph.com/en/quincy/rados/configuration/mclock-config-ref/#steps-to-modify-mclock-max-backfills-recovery-limits says I should set "ceph config set osd osd_mclock_override_recovery_settings true", but I cannot change the value; it returns the error: ceph config set osd...
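
    For reference, the sequence the linked docs describe is to enable the override first and then raise the limit (the value 4 is just an example):

        ceph config set osd osd_mclock_override_recovery_settings true
        ceph config set osd osd_recovery_max_active 4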
  13. Opt-in Linux 5.19 Kernel for Proxmox VE 7.x available

    You should have no problem; my servers were similarly equipped (Intel(R) Xeon(R) Silver 4114 and Intel(R) Xeon(R) CPU E5-2640 v3), and if you encounter problems booting, just select the old kernel from the boot menu.
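
    If the old kernel should stay the default, proxmox-boot-tool can pin it, assuming a version that has the pin subcommand (the kernel version below is a placeholder):

        proxmox-boot-tool kernel list
        proxmox-boot-tool kernel pin <kernel-version>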
  14. Opt-in Linux 5.19 Kernel for Proxmox VE 7.x available

    Nice, with version 5.19 I can again migrate my VMs between my servers without the VMs crashing.