I also have BIOS version 1.4 and have also solved it by blacklisting the module.
I'm waiting for kernel 6.8.7, which has some bugfixes for bnxt_en, before I try a firmware update.
Do you have an error from the bnxt_en kernel module in your logs? Then it could be the same error as here:
https://forum.proxmox.com/threads/opt-in-linux-6-8-kernel-for-proxmox-ve-8-available-on-test-no-subscription.144557/post-652507
Putting the kernel module on the blocklist should help.
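For reference, blocklisting is done with a modprobe.d snippet; the filename below is my own choice, any .conf name works:

```shell
# prevent the bnxt_en driver from loading
echo "blacklist bnxt_en" > /etc/modprobe.d/blacklist-bnxt.conf

# rebuild the initramfs so the blocklist also applies in early boot, then reboot
update-initramfs -u -k all
```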
Yes, I might try the BIOS update.
The network configuration is just a simple bond over both interfaces:
auto enp193s0f0np0
iface enp193s0f0np0 inet manual
auto enp193s0f1np1
iface enp193s0f1np1 inet manual
auto bond0
iface bond0 inet static
address --------
gateway --------...
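For comparison, a complete bond stanza usually also names the slaves and the mode; the mode and miimon values below are assumptions (802.3ad/LACP would need switch support), and the addresses stay redacted:

```shell
auto bond0
iface bond0 inet static
    address <redacted>
    gateway <redacted>
    bond-slaves enp193s0f0np0 enp193s0f1np1
    bond-mode active-backup   # assumption; use 802.3ad only with LACP on the switch
    bond-miimon 100           # link monitoring interval in ms
```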
After the update to kernel 6.8 the bnxt_en driver no longer works for me either.
In this case it is the onboard ports of a Supermicro H13SSL-NT mainboard, and there seems to be no firmware update for them.
Apr 09 19:55:10 gcd-virthost3 kernel: bnxt_en 0000:c1:00.1: QPLIB: bnxt_re_is_fw_stalled: FW STALL...
Run sysbench on your Proxmox host with hwloc-bind --single; then it will run on one socket only and should perform better.
sysbench --test=memory --memory-block-size=4G --memory-total-size=32G run
WARNING: the --test option is deprecated. You can pass a script name or path on the command line...
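Since --test is deprecated, the equivalent modern invocation passes the test name directly; combined with hwloc-bind as suggested above (binding to the first package here is my example), it would look roughly like:

```shell
# bind the benchmark to the first CPU package/socket, then run the memory test
hwloc-bind package:0 -- sysbench memory \
    --memory-block-size=4G --memory-total-size=32G run
```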
If you only need 4 cores in the VM, why not use 1 socket with 4 cores?
Two sockets are only necessary if you need more cores or memory in a VM than one hardware CPU/socket provides.
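On the CLI that would be a one-liner (VM ID 100 is a placeholder):

```shell
# give the VM a single socket with 4 cores
qm set 100 --sockets 1 --cores 4
```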
Kernel 6.5 works fine, except on the one server with an Adaptec controller, where I get the following error:
aacraid: Host adapter abort request.
aacraid: Outstanding commands on (0,1,17,0):
aacraid: Host bus reset request. SCSI hang ?
aacraid 0000:18:00.0: Controller reset type is 3
this is...
As @gurubert said, build a large Proxmox cluster across all 6 nodes.
The only factor that determines which nodes are storage nodes is where you install the OSD and MON services for Ceph.
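With Proxmox's pveceph tooling that means, for example (the OSD device name is an assumption, use a blank disk on your node):

```shell
# on each node that should act as a storage node:
pveceph mon create            # create a Ceph monitor on this node
pveceph osd create /dev/sdb   # turn a blank disk into an OSD
```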
Hello,
The upgrade to Proxmox 8 with hyper-converged Ceph worked without any issues.
But Ceph shows a warning:
Module 'dashboard' has failed dependency: PyO3 modules may only be initialized once per interpreter process
https://docs.ceph.com/en/quincy/rados/configuration/mclock-config-ref/#steps-to-modify-mclock-max-backfills-recovery-limits
says I should set "ceph config set osd osd_mclock_override_recovery_settings true"
but I cannot change the value; it returns the error
ceph config set osd...
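For context, the sequence the linked mClock docs describe is roughly the following (the backfill value is just an example):

```shell
# allow manual overrides of the mClock-managed recovery/backfill limits
ceph config set osd osd_mclock_override_recovery_settings true

# then adjust the limit, e.g.:
ceph config set osd osd_max_backfills 2
```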
You should have no problem; my servers were similarly equipped (Intel(R) Xeon(R) Silver 4114 and Intel(R) Xeon(R) CPU E5-2640 v3),
and if you encounter problems booting, just select the old kernel from the boot menu.
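If you need to fall back permanently, Proxmox can also pin the previous kernel (the version string below is an example, check your own list first):

```shell
proxmox-boot-tool kernel list              # show installed kernels
proxmox-boot-tool kernel pin 6.5.13-5-pve  # boot this version by default
```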