Hi ceph/proxmox experts,
I somehow removed an OSD while a PG was/is still active. How can I get rid of this error? :/
Reduced data availability: 1 pg inactive
    pg 1.0 is stuck inactive for 5d, current state unknown, last acting []
# ceph pg map 1.0
osdmap e65213 pg 1.0 (1.0) -> up [15,10,5]...
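What I tried / am considering next (the force-create is only a last-resort idea, not verified - it recreates the PG empty, so any data still in pg 1.0 would be lost):
# ceph pg 1.0 query
# ceph osd force-create-pg 1.0 --yes-i-really-mean-it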
Thank you, I will test this. I was aware that the default lacks some native CPU flags, but I could not find a reason why it should not scale across all cores. Will test and report back.
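For reference, my test will simply be switching the VM's CPU type to host (VM ID 100 is just a placeholder):
# qm set 100 --cpu host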
Hi folks,
I'm running Windows Server 2019 and doing some benchmarking: compressing several MP4 video files into a ZIP file with the Windows built-in "compress" tool.
Monitoring the CPU usage shows that only a few cores are used, and it's pretty slow.
Is this some kind of limitation due to...
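For comparison I will probably also try a multi-threaded archiver such as 7-Zip (path and options below are just an example; -mmt=on enables multithreading):
"C:\Program Files\7-Zip\7z.exe" a -tzip -mmt=on videos.zip *.mp4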
Hi folks,
Is there a way to have individual pruning settings at a per-VM level on the server side?
Our Proxmox systems only have backup permissions, no pruning rights.
We want to specify different pruning intervals for each VM.
Thank you.
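What might get close (not verified, and this assumes PBS 2.2+ with namespaces; the store, namespace, and job names are made up) is one namespace per VM, each with its own prune job, along the lines of:
# proxmox-backup-manager prune-job create prune-vm100 --store backup1 --ns vm100 --schedule daily --keep-daily 7 --keep-weekly 4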
# ceph osd df tree
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
-1 239.63440 - 240 TiB 93 TiB 93 TiB 428 MiB 207 GiB 147 TiB 38.77 1.00 - root default
-3...
Hi Forum,
with a total size of 213TB and 3 replicas (default), my HDD pool should have a size of roughly 70TB.
# ceph df detail
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 213 TiB 133 TiB 80 TiB 80 TiB 37.53
nvme 21 TiB 13 TiB 8.2...
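My rough calculation, assuming size=3 on the pool, in case I am misreading the numbers:
213 TiB raw / 3 replicas ≈ 71 TiB usable
80 TiB RAW USED / 3 ≈ 27 TiB of actual data stored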
Wow, very cool. That was exactly it :) Thank you very much. So now the autoscaler is active and my manual pg_num values will automatically be overwritten or ignored?
POOL SIZE TARGET SIZE RATE RAW CAPACITY RATIO TARGET RATIO EFFECTIVE RATIO BIAS PG_NUM NEW...
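In case I ever want to keep a manual pg_num on a specific pool, I assume I can switch the mode per pool ('mypool' is just an example name):
# ceph osd pool set mypool pg_autoscale_mode warn
# ceph osd pool set mypool pg_autoscale_mode off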
Thanks. Yes, it is set to 'on' for all pools.
{
"always_on_modules": [
"balancer",
"crash",
"devicehealth",
"orchestrator",
"pg_autoscaler",
"progress",
"rbd_support",
"status",
"telemetry",
"volumes"
# ceph mgr...
Hi Forum,
7-node Ceph cluster - latest 7.x release. One HDD pool with ~40 OSDs. Gross total capacity ~250TB.
Under Ceph -> Pools, pg_autoscaling is ticked.
Nevertheless, Optimal # PG shows: need pg_autoscaler enabled.
Even a # ceph osd pool autoscale-status
returns no output.
The...
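What I would check next (the pool name is just an example):
# ceph mgr module ls | grep pg_autoscaler
# ceph osd pool get hdd_pool pg_autoscale_mode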
Here are some observations I've made. Maybe others can relate:
After rebooting host1, host3 also loses all of its links according to KNET. These are independent bonds in my case. The links themselves did not go down; I still had pings running over these links. This must be a problem of some...
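What I am using to watch the link state alongside the logs (nothing exotic, just the standard tools):
# corosync-cfgtool -s
# pvecm status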
Hi folks,
I'm running a 4-node Proxmox cluster with Ceph on the latest 7.1. No updates available.
One node fails to start pvestatd and some other services and runs into a timeout.
Dec 05 13:35:04 PX03 systemd[1]: Started PVE Status Daemon.
Dec 05 13:37:54 PX03 pvestatd[2055]: got timeout
Dec 05...
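What I would look at to narrow this down (guesswork on my side; the timeout often just means a storage is not answering):
# pvesm status
# journalctl -u pvestatd -b
# systemctl restart pvestatd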
Here is my "success" story with this bug.
Getting rid of the logging was good but not the solution. I silenced the logs with:
auto vmbr0
iface vmbr0 inet manual
#iface vmbr0 inet static
    bridge-ports bond1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes...
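For completeness: after editing /etc/network/interfaces the change can be applied with ifupdown2's reload (assuming ifupdown2 is installed, which it should be on current PVE):
# ifreload -a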