Wow, very cool. That was exactly it :) Thank you very much. So the autoscaler is now active, and my manually set pg_num values will be overridden or ignored automatically?
POOL SIZE TARGET SIZE RATE RAW CAPACITY RATIO TARGET RATIO EFFECTIVE RATIO BIAS PG_NUM NEW...
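If you want to double-check before letting it loose: the PG_NUM and NEW PG_NUM columns of that output show the current and the proposed value per pool. A quick way to look at a single pool (the pool name is just a placeholder here):
ceph osd pool autoscale-status
ceph osd pool get <your-pool> pg_autoscale_mode
ceph osd pool get <your-pool> pg_num
With pg_autoscale_mode set to 'on' the autoscaler does adjust pg_num itself; with 'warn' it only reports what it would change.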
Thanks. Yes, it is set to 'on' for all pools.
{
"always_on_modules": [
"balancer",
"crash",
"devicehealth",
"orchestrator",
"pg_autoscaler",
"progress",
"rbd_support",
"status",
"telemetry",
"volumes"
# ceph mgr...
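For anyone wanting to check the same thing, the module list above can be filtered; this assumes jq is installed and the JSON layout matches the one shown here:
ceph mgr module ls -f json-pretty | jq '.always_on_modules'
ceph mgr module ls -f json-pretty | jq '.enabled_modules'
If pg_autoscaler shows up under always_on_modules, it cannot be disabled on the mgr side, so a missing autoscale-status output points elsewhere (for example at the per-pool pg_autoscale_mode).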
Hi Forum,
7-node Ceph cluster - latest 7.x release. One HDD pool with ~40 OSDs, gross total capacity ~250TB.
Under Ceph -> Pools, pg_autoscaling is checked.
Nevertheless, Optimal # PG shows "need pg_autoscaler enabled".
A # ceph osd pool autoscale-status
also returns no output.
The...
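In case someone lands here with the same symptom: an empty autoscale-status usually means the pg_autoscaler mgr module is not active (on older releases it was not always-on). A possible sequence, with <pool> as a placeholder for the HDD pool name:
ceph mgr module enable pg_autoscaler
ceph osd pool set <pool> pg_autoscale_mode on
ceph osd pool autoscale-status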
Here are some observations I've made. Maybe others can relate:
After rebooting host1, host3 also loses all of its links according to KNET. These are independent bonds in my case. The links themselves did not go down; I still had pings running over these links. This must be a problem of some...
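To separate what KNET thinks from what the bonds are actually doing, the link state corosync reports can be checked directly on each node; both commands below only read status:
corosync-cfgtool -s
journalctl -u corosync --since "15 min ago" | grep -i knet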
Hi folks,
I'm running a 4-node Proxmox cluster with Ceph, on the latest 7.1. No updates available.
One node fails to start pvestatd and some other services and runs into a timeout.
Dec 05 13:35:04 PX03 systemd[1]: Started PVE Status Daemon.
Dec 05 13:37:54 PX03 pvestatd[2055]: got timeout
Dec 05...
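Before digging deeper, it is worth seeing what pvestatd is actually waiting for; the timeout usually comes from a storage backend that does not answer in time. A few read-only checks, nothing specific to this setup:
systemctl status pvestatd
journalctl -u pvestatd -b --no-pager | tail -n 50
pvesm status
ceph -s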
Here is my "success" story with this bug.
Getting rid of the logging was good but not the solution. I silenced the logs with:
auto vmbr0
iface vmbr0 inet manual
#iface vmbr0 inet static
bridge-ports bond1
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes...
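For reference, with ifupdown2 on Proxmox 7 such a change to /etc/network/interfaces can be applied without a reboot; the second command just shows the VLAN-aware bridge configuration afterwards:
ifreload -a
bridge vlan show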
I could solve the problem on my own with:
ceph-volume lvm zap /dev/nvme3n1 --destroy
fdisk /dev/nvme3n1 (just press w to write and quit)
Then re-add it via the Proxmox GUI,
and finally assign the device class again with
ceph osd crush rm-device-class osd.29
ceph osd crush set-device-class nvme osd.29
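To verify the OSD really came back with the intended class afterwards (osd.29 as in the commands above):
ceph osd crush class ls
ceph osd tree | grep -w 'osd.29'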
Dear Proxmox/Ceph users,
I have the strange problem that two disks seem to use the same OSD ID. This is a 3-node Proxmox 6 cluster.
root@adm-proxmox02:~# ceph-volume lvm list
====== osd.19 ======
[block]...
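A comparison between what ceph-volume sees locally and what the cluster has stored for that ID might narrow it down; 19 is the OSD ID from the listing above:
ceph osd metadata 19
ceph device ls-by-daemon osd.19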
I have the same 'problem'. Maybe it's just a display issue, as the Proxmox VE Administration Guide notes:
"Even if all links are working, only the one with the highest priority will see corosync traffic."
Even though I'm well aware that consumer SSD/NVMe drives should not be used, it is reasonable to try to get the most out of "cheap" disks when the budget is limited. I made the following observation and would like to discuss the pros and cons of this tunable.
Model Number: SAMSUNG...