Search results

  1. [SOLVED] pg_autoscaling does not adapt pg_num (need pg_autoscaler enabled)

    Wow, very cool. That was exactly the cause :) Thank you very much. Does that mean the autoscaler is now active and my manually set pg_num values will automatically be overwritten or ignored? POOL SIZE TARGET SIZE RATE RAW CAPACITY RATIO TARGET RATIO EFFECTIVE RATIO BIAS PG_NUM NEW...
  2. [SOLVED] pg_autoscaling does not adapt pg_num (need pg_autoscaler enabled)

    I restarted all managers, unfortunately without success :/ The ceph logs contain no information about "auto or autoscaler".
  3. [SOLVED] pg_autoscaling does not adapt pg_num (need pg_autoscaler enabled)

    # pveversion -v proxmox-ve: 7.1-1 (running kernel: 5.13.19-3-pve) pve-manager: 7.1-10 (running version: 7.1-10/6ddebafe) pve-kernel-helper: 7.1-8 pve-kernel-5.13: 7.1-6 pve-kernel-5.4: 6.4-7 pve-kernel-5.13.19-3-pve: 5.13.19-6 pve-kernel-5.13.19-2-pve: 5.13.19-4 pve-kernel-5.13.19-1-pve...
  4. [SOLVED] pg_autoscaling does not adapt pg_num (need pg_autoscaler enabled)

    services: mon: 6 daemons, quorum MCHPLPX01,MCHPLPX02,MCHPLPX03,MCHPLPX04,MCHPLPX05,MCHPLPX07 (age 7h) mgr: MCHPLPX03(active, since 6d), standbys: MCHPLPX04, MCHPLPX01, MCHPLPX02, MCHPLPX07, MCHPLPX05
  5. [SOLVED] pg_autoscaling does not adapt pg_num (need pg_autoscaler enabled)

    Thanks. Yes, it is set to 'on' for all pools. { "always_on_modules": [ "balancer", "crash", "devicehealth", "orchestrator", "pg_autoscaler", "progress", "rbd_support", "status", "telemetry", "volumes" # ceph mgr...
  6. [SOLVED] pg_autoscaling does not adapt pg_num (need pg_autoscaler enabled)

    Hi forum, 7-node Ceph cluster - latest 7.x release. One HDD pool with ~40 OSDs. Gross total capacity ~250 TB. Under Ceph -> Pools, pg_autoscaling is ticked. Nevertheless, Optimal # PG shows "need pg_autoscaler enabled". Even a # ceph osd pool autoscale-status returns no output. The... (a command sketch for checking and enabling the autoscaler follows after this results list)
  7. Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF

    Here are some observations I've made. Maybe others can relate: after rebooting host1, host3 also loses all of its links according to KNET. These are independent bonds in my case. The links themselves did not go down; I still had running pings over these links. This must be a problem of some...
  8. Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF

    Do you only see the error in the logs, or does your system completely freeze for 20 minutes?
  9. Lost Partition Tables on VMs

    +1, hit me today. The guest was a Debian 10 system.
  10. Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF

    Looks like this did not solve my problem after all. Does anyone else have an idea how to fix this permanently?
  11. pvestatd: got timeout - malformed JSON string at perl5/PVE/Ceph/Services.pm

    Hi folks, I'm running a 4-node Proxmox cluster with Ceph on the latest 7.1. No updates available. One node fails to start pvestatd and some other services and runs into a timeout. Dec 05 13:35:04 PX03 systemd[1]: Started PVE Status Daemon. Dec 05 13:37:54 PX03 pvestatd[2055]: got timeout Dec 05...
  12. Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF

    Here is my "success" story with this bug. Getting rid of the logging was good but not the solution. I silenced the logs with: auto vmbr0 iface vmbr0 inet manual #iface vmbr0 inet static bridge-ports bond1 bridge-stp off bridge-fd 0 bridge-vlan-aware yes...
  13. Howto set ssl ciphers for PBS web interface? /etc/default/pveproxy is ignored

    Hi, how can one set the SSL ciphers for the web interface in Proxmox Backup Server? I tried setting /etc/default/pveproxy...
  14. SOLVED: duplicate ceph osd IDs - how to resolve?

    I could solve the problem on my own with: ceph-volume lvm zap /dev/nvme3n1 --destroy, then fdisk /dev/nvme3n1 (just hit W), re-adding the disk via the Proxmox GUI, and finally assigning the device class again with ceph osd crush rm-device-class osd.29 and ceph osd crush set-device-class nvme osd.29 (see the step-by-step sketch after this results list).
  15. SOLVED: duplicate ceph osd IDs - how to resolve?

    Dear Proxmox/Ceph users, I have the strange problem that two disks seem to use the same OSD ID. This is a 3-node Proxmox 6 cluster. root@adm-proxmox02:~# ceph-volume lvm list ====== osd.19 ====== [block]...
  16. Corosync strange behaviour

    root@MCHPLPX01:~# corosync-cfgtool -s Local node ID 4, transport knet LINK ID 0 addr = 172.16.1.1 status: nodeid: 1: localhost nodeid: 2: connected nodeid: 3: connected nodeid: 4: connected logging { debug: off to_syslog: yes }...
  17. Corosync strange behaviour

    I have the same 'problem'. Maybe it's just a display issue, as the Proxmox VE Administration Guide notes: "Even if all links are working, only the one with the highest priority will see corosync traffic."
  18. speed up consumer nvme with write through mode?

    Even though I'm well aware that consumer SSDs/NVMe drives should not be used, it is reasonable to try to get the most out of "cheap" disks when the budget is limited. I made the following observation and would like to discuss the pros/cons of this tunable (see the sysfs sketch after this results list). Model Number: SAMSUNG...
  19. Proxmox VE Ceph Benchmark 2020/09 - hyper-converged with NVMe

    Thanks for being that clear. How can I make sure that an SSD/NVMe drive has a supercapacitor on it?
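
Notes on selected results

For the pg_autoscaler thread (results 1-6), the resolution suggested by the title is that the autoscaler had to be enabled at the manager level, not just ticked per pool. Below is a minimal command sketch, assuming shell access on any cluster node and a hypothetical pool name "hdd_pool"; the exact command that resolved the thread is not visible in these snippets.

    # Check whether the pg_autoscaler manager module is listed as enabled/always-on
    ceph mgr module ls | grep pg_autoscaler

    # Enable it if it is missing (on recent releases it is an always-on module)
    ceph mgr module enable pg_autoscaler

    # Switch autoscaling on for the pool, optionally make it the default, and verify
    ceph osd pool set hdd_pool pg_autoscale_mode on
    ceph config set global osd_pool_default_pg_autoscale_mode on
    ceph osd pool autoscale-status

Once autoscale-status prints the POOL / SIZE / TARGET SIZE table quoted in result 1, the autoscaler is active and manually chosen pg_num values may be adjusted by it.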

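For result 14 (duplicate OSD IDs), here is the poster's own fix laid out step by step; /dev/nvme3n1 and osd.29 are the device and OSD ID from that post, so substitute your own.

    # Wipe LVM metadata and signatures from the disk carrying the stale OSD
    ceph-volume lvm zap /dev/nvme3n1 --destroy

    # Write a fresh partition table (in fdisk, just press "w" to write and exit)
    fdisk /dev/nvme3n1

    # Re-create the OSD through the Proxmox GUI, then restore its device class
    ceph osd crush rm-device-class osd.29
    ceph osd crush set-device-class nvme osd.29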
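
For result 18 (write-through mode on consumer NVMe), the snippet does not show which tunable is meant; a plausible candidate is the block layer's write_cache attribute in sysfs, sketched here under that assumption with a hypothetical device name nvme0n1. Writing this file only changes how the kernel treats the device (flush/FUA handling), it does not reconfigure the drive, and the setting is lost on reboot.

    # Show the current cache mode ("write back" or "write through")
    cat /sys/block/nvme0n1/queue/write_cache

    # Tell the kernel to treat the device as write-through
    echo "write through" > /sys/block/nvme0n1/queue/write_cache

Skipping flush handling this way trades crash and power-loss safety for speed, which is the same trade-off behind the supercapacitor question in result 19.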