For the record, as it is present again in Debian 12/Proxmox 8, I just filed another bug report:
https://bugs.debian.org/cgi-bin/pkgreport.cgi?pkg=ntpsec
2023-12-22T10:46:28.551247+01:00 srv42 kernel: [1569581.071493] audit: type=1400 audit(1703238388.546:160): apparmor="DENIED" operation="mknod"...
Hi Forum,
after updating our PM nodes to PM8 (Debian 12, OpenSSL 3.x), an SSL certificate on the system is flagged as problematic.
Pveproxy does read the certificate, but the browser only shows:
Error code: PR_END_OF_FILE_ERROR
In /etc/default/pveproxy we have set
DISABLE_TLS_1_2=1...
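For reference, a minimal way to apply the change and then check which TLS versions pveproxy still answers (the node name is a placeholder):
systemctl restart pveproxy
openssl s_client -connect <node>:8006 -tls1_3 </dev/null
openssl s_client -connect <node>:8006 -tls1_2 </dev/null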
Yes. I contacted Intel and they have no clue either, so I just accepted the fact. See:
https://community.intel.com/t5/Ethernet-Products/Intel-XL710-40G-QSFP-AOC-DAC-cables-very-high-latency/m-p/1423968
Hi Forum,
we use the latest Proxmox 7 and set the bond-mode to broadcast in the web interface. Afterwards, /etc/network/interfaces contains bond-mode broadcast.
After rebooting the server, cat /proc/net/bonding/bond0 shows active-backup.
Checking /e/n/interfaces again, bond-mode broadcast is still...
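For reference, a minimal sketch of the kind of bond stanza we are talking about in /etc/network/interfaces (the slave interface names eno1/eno2 are placeholders):
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode broadcast
        bond-miimon 100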
Hi folks,
for years I've been using the following stanza in e/n/i:
auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 100

auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.1.11...
Thank you for your time. Unfortunately not. We have been seeing these issues with this specific server for more than 2 years. I just installed the latest 5.19 and will report back.
Update 05/11/22: Still same errors with 5.19.17-1-pve.
Hi folks,
on a specific Thomas Krenn Server (part of a 3 node cluster with ceph)
Manufacturer: Supermicro
Product Name: H11DSi-NT
Version: 2.00 with dual
AMD EPYC 7301 16-Core Processor
we see strange errors: corosync detects a link failure and after that, an SSD reports...
Just for the next lost souls: it was not a hardware issue at all for us. The solution that worked for us is documented here:
https://www.cubewerk.de/2022/10/25/vma-restore-failed-short-vma-extent-compressed-data-violation/
Hi Folks,
we have a 3-node cluster with an independent ceph ring that is directly connected between the 3 nodes (N1->N2, N2->N3) with QSFP+ AOC cables¹. Here we see very bad latency in ping tests.
The directly connected setup works flawlessly on other clusters. The only difference is we...
Hi ceph/proxmox experts,
I somehow removed an OSD while a PG was/is still active. How can I get rid of this error? :/
Reduced data availability: 1 pg inactive
pg 1.0 is stuck inactive for 5d, current state unknown, last acting []
# ceph pg map 1.0
osdmap e65213 pg 1.0 (1.0) -> up [15,10,5]...
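For context, these are the usual commands to inspect the stuck PG further (run on a monitor node; they assume nothing beyond a working ceph CLI):
ceph health detail
ceph pg dump_stuck inactive
ceph pg 1.0 query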
Thank you. I will test this. I was aware that the default CPU type lacks some native CPU flags, but I could not find a reason why it should not scale across all cores. Will test and report back.
Hi folks,
I'm running Windows Server 2019 and doing a benchmark: compressing several MP4 video files into a ZIP file with the Windows built-in "compress" tool.
Monitoring the CPU usage shows that only a few cores are used and it's pretty slow.
Is this some kind of limitation due to...
Hi folks,
is there a way to have individual pruning settings on a per-VM level on the server side?
Our Proxmox systems only have backup permissions, no pruning rights.
We want to specify different pruning intervals for each VM.
Thank you.
# ceph osd df tree
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
-1 239.63440 - 240 TiB 93 TiB 93 TiB 428 MiB 207 GiB 147 TiB 38.77 1.00 - root default
-3...
Hi Forum,
my HDD pool, with a total size of 213TB and 3 replicas (default), should end up at roughly 70TB (quick calculation below the output).
# ceph df detail
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 213 TiB 133 TiB 80 TiB 80 TiB 37.53
nvme 21 TiB 13 TiB 8.2...
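For reference, the rough calculation behind the expected ~70TB (assuming a plain replicated pool with size=3 and ignoring any overhead):
213 TiB / 3 ≈ 71 TiB usable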