For the record, as it is present again in Debian 12/Proxmox 8, I just created another bug:
https://bugs.debian.org/cgi-bin/pkgreport.cgi?pkg=ntpsec
2023-12-22T10:46:28.551247+01:00 srv42 kernel: [1569581.071493] audit: type=1400 audit(1703238388.546:160): apparmor="DENIED" operation="mknod"...
Hi Forum,
after updating our PM nodes to PM8 (Debian 12, OpenSSL 3.x), an SSL certificate on the system is being rejected.
pveproxy does read the certificate, but the browser only shows:
Error code: PR_END_OF_FILE_ERROR
In /etc/default/pveproxy we have set
DISABLE_TLS_1_2=1...
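For context, a minimal sketch of the relevant /etc/default/pveproxy setting, assuming the option names from pveproxy(8); verify against your installed version. PR_END_OF_FILE_ERROR typically means the TLS handshake was aborted before any data, which fits a version/cipher mismatch after the OpenSSL 3.x upgrade:

```shell
# /etc/default/pveproxy -- sketch, not a verified fix.
# With TLS 1.2 disabled, only TLS 1.3 remains; a client or middlebox
# that cannot negotiate TLS 1.3 will see PR_END_OF_FILE_ERROR.
# To test whether this is the cause, re-enable TLS 1.2:
DISABLE_TLS_1_2=0

# then apply the change:
# systemctl restart pveproxy
```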
Yes. I contacted Intel and they also have no clue, so I just accepted the fact. See:
https://community.intel.com/t5/Ethernet-Products/Intel-XL710-40G-QSFP-AOC-DAC-cables-very-high-latency/m-p/1423968
Hi Forum,
we use the latest Proxmox 7 and set the bond-mode to broadcast in the web interface. /etc/network/interfaces contains bond-mode broadcast afterwards.
After rebooting the server, cat /proc/net/bonding/bond0 shows active-backup.
Checking /e/n/interfaces again, bond-mode broadcast is still...
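For reference, a minimal bond stanza in ifupdown2 syntax; the slave interface names are assumptions, not taken from the post. Note that the kernel cannot change the mode of an already-created bond, so a stale bond from before the config change can survive an ifreload and only disappear after it is torn down:

```shell
# /etc/network/interfaces -- sketch; eno1/eno2 are assumed names
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode broadcast
    bond-miimon 100

# verify the mode the kernel actually applied:
# cat /proc/net/bonding/bond0 | grep "Bonding Mode"
```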
Hi folks,
for years I have been using the following stanza in e/n/i:
auto vmbr0
iface vmbr0 inet manual
bridge-ports bond1
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 100
auto vmbr0.100
iface vmbr0.100 inet static
address 192.168.1.11...
Thank you for your time. Unfortunately not. We have seen those issues for more than 2 years with this specific server. Just installed the latest 5.19 and will report back.
Update 05/11/22: Still same errors with 5.19.17-1-pve.
Hi folks,
on a specific Thomas Krenn server (part of a 3-node cluster with Ceph)
Manufacturer: Supermicro
Product Name: H11DSi-NT
Version: 2.00 with dual
AMD EPYC 7301 16-Core Processor
we see strange errors: corosync detects a link failure and after that, an SSD reports...
Just for the next lost souls: it was not a hardware issue at all for us. The solution that worked for us is documented here:
https://www.cubewerk.de/2022/10/25/vma-restore-failed-short-vma-extent-compressed-data-violation/
Hi Folks,
we have a 3-node cluster with one independent Ceph ring directly connected between the 3 nodes (N1->N2, N2->N3) with QSFP+ AOC cables¹. Here we see very bad latency on ping tests.
The directly connected setup works flawlessly on other clusters. The only difference is we...
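As a baseline, a quick latency check over the direct links; the addresses are placeholders, not from the post. On a healthy directly-attached 40G link, average RTT should be far below 1 ms, so anything higher points at the NIC/cable combination rather than the network design:

```shell
# Latency sketch -- 10.10.10.x addresses are assumed, substitute your ring IPs
ping -c 100 -i 0.2 -q 10.10.10.2   # N1 -> N2 over the AOC link
ping -c 100 -i 0.2 -q 10.10.10.3   # N2 -> N3
# compare the "rtt min/avg/max/mdev" summary lines between links
```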
Hi ceph/proxmox experts,
I somehow removed an OSD while a PG was/is still active. How can I get rid of this error? :/
Reduced data availability: 1 pg inactive pg 1.0 is stuck inactive for 5d, current state unknown, last acting []
# ceph pg map 1.0
osdmap e65213 pg 1.0 (1.0) -> up [15,10,5]...
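A diagnostic sketch using standard Ceph CLI commands, with the PG id 1.0 taken from the post; whether the last step is safe depends entirely on whether pool 1 still holds data you need, so treat this as a possibility to investigate, not a recommendation:

```shell
# Inspect the stuck PG (may hang while its state is "unknown"):
ceph pg 1.0 query

# Confirm which OSDs still exist and where the PG maps:
ceph osd tree
ceph pg map 1.0

# LAST RESORT -- only if pool 1 contains no data you need:
# recreate the PG as empty; any data it held is permanently lost.
ceph osd force-create-pg 1.0 --yes-i-really-mean-it
```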