Search results

  1. ceph - 1 pg inactive - current state unknown, last acting []

    Hi Ceph/Proxmox experts, I somehow removed an OSD while a PG was/is still active. How can I get rid of this error? :/ (see the diagnostic sketch after this list) Reduced data availability: 1 pg inactive pg 1.0 is stuck inactive for 5d, current state unknown, last acting [] # ceph pg map 1.0 osdmap e65213 pg 1.0 (1.0) -> up [15,10,5]...
  2. proxmox - server 2019 - not all cpu cores are used during compression

    Thank you, I will test this. I was aware that the default lacks some native CPU flags, but I could not find a reason why it should not scale across all cores. I will test and report back.
  3. proxmox - server 2019 - not all cpu cores are used during compression

    Hi folks, I'm running Windows Server 2019 and doing some benchmarks, compressing several MP4 video files into a zip file with Windows' built-in "compress" tool. Monitoring the CPU usage shows that only a few cores are used and it's pretty slow. Is this some kind of limitation due to...
  4. server side pruning settings per VM possible?

    Hi folks, is there a way to have individual pruning settings at a per-VM level on the server side? Our Proxmox systems only have backup permissions, no pruning rights. We want to specify different pruning intervals for each VM. Thank you.
  5. ceph rbd pool shows wrong total size - missing storage

    # ceph osd df tree ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME -1 239.63440 - 240 TiB 93 TiB 93 TiB 428 MiB 207 GiB 147 TiB 38.77 1.00 - root default -3...
  6. ceph rbd pool shows wrong total size - missing storage

    Hi forum, with a total size of 213TB and 3 replicas (default), my HDD pool should have a size of roughly 70TB. # ceph df detail --- RAW STORAGE --- CLASS SIZE AVAIL USED RAW USED %RAW USED hdd 213 TiB 133 TiB 80 TiB 80 TiB 37.53 nvme 21 TiB 13 TiB 8.2...
  7. [SOLVED] pg_autoscaling does not adapt pg_num (need pg_autoscaler enabled)

    Wow, very cool. That was exactly it :) Thank you very much. Does that mean the autoscaler is now active and my manual pg_num values get automatically overwritten or ignored? POOL SIZE TARGET SIZE RATE RAW CAPACITY RATIO TARGET RATIO EFFECTIVE RATIO BIAS PG_NUM NEW...
  8. [SOLVED] pg_autoscaling does not adapt pg_num (need pg_autoscaler enabled)

    I restarted all managers, unfortunately without success :/ The ceph logs contain no information about "auto or autoscaler".
  9. [SOLVED] pg_autoscaling does not adapt pg_num (need pg_autoscaler enabled)

    # pveversion -v proxmox-ve: 7.1-1 (running kernel: 5.13.19-3-pve) pve-manager: 7.1-10 (running version: 7.1-10/6ddebafe) pve-kernel-helper: 7.1-8 pve-kernel-5.13: 7.1-6 pve-kernel-5.4: 6.4-7 pve-kernel-5.13.19-3-pve: 5.13.19-6 pve-kernel-5.13.19-2-pve: 5.13.19-4 pve-kernel-5.13.19-1-pve...
  10. [SOLVED] pg_autoscaling does not adapt pg_num (need pg_autoscaler enabled)

    services: mon: 6 daemons, quorum MCHPLPX01,MCHPLPX02,MCHPLPX03,MCHPLPX04,MCHPLPX05,MCHPLPX07 (age 7h) mgr: MCHPLPX03(active, since 6d), standbys: MCHPLPX04, MCHPLPX01, MCHPLPX02, MCHPLPX07, MCHPLPX05
  11. [SOLVED] pg_autoscaling does not adapt pg_num (need pg_autoscaler enabled)

    Thanks. Yes, it is set to 'on' for all pools. { "always_on_modules": [ "balancer", "crash", "devicehealth", "orchestrator", "pg_autoscaler", "progress", "rbd_support", "status", "telemetry", "volumes" # ceph mgr...
  12. [SOLVED] pg_autoscaling does not adapt pg_num (need pg_autoscaler enabled)

    Hi forum, 7-node Ceph cluster - latest 7.x release. One HDD pool with ~40 OSDs, gross total capacity ~250TB. Under Ceph -> Pools, pg_autoscaling is checked. Nevertheless, Optimal # PG shows "need pg_autoscaler enabled". A # ceph osd pool autoscale-status also produces no output (see the command sketch after this list). That...
  13. Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF

    Here are some observations I've made; maybe others can relate: after rebooting host1, host3 also loses all of its links according to KNET. These are independent bonds in my case. The links themselves did not go down; I still had pings running over these links. This must be a problem of some...
  14. Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF

    Do you only see the error in logs or does your system completely freeze for 20 minutes?
  15. Lost Partition Tables on VMs

    +1, this hit me today. The guest was a Debian 10 system.
  16. Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF

    Looks like this did not solve my problem after all. Does anyone else have an idea how to fix this permanently?
  17. pvestatd: got timeout - malformed JSON string at perl5/PVE/Ceph/Services.pm

    Hi folks, I'm running a 4-node Proxmox cluster with Ceph on the latest 7.1; no updates available. One node fails to start pvestatd and some other services and runs into a timeout. Dec 05 13:35:04 PX03 systemd[1]: Started PVE Status Daemon. Dec 05 13:37:54 PX03 pvestatd[2055]: got timeout Dec 05...
  18. Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF

    Here is my "success" story with this bug. Getting rid of the logging was good but not the solution. I silenced the logs with: auto vmbr0 iface vmbr0 inet manual #iface vmbr0 inet static bridge-ports bond1 bridge-stp off bridge-fd 0 bridge-vlan-aware yes...
  19. How to set SSL ciphers for the PBS web interface? /etc/default/pveproxy is ignored

    Hi, how can one set the SSL ciphers for the web interface in Proxmox Backup Server? I tried setting /etc/default/pveproxy...
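
For result 1 (the PG stuck inactive with an empty acting set), a minimal diagnostic sketch, assuming the pg id 1.0 from the snippet and standard Ceph CLI tooling; force-recreating the PG is a last resort that discards whatever data the PG held:

    # Inspect the stuck PG (may hang if no OSD currently holds it)
    ceph pg 1.0 query
    # List all PGs stuck in the inactive state
    ceph pg dump_stuck inactive
    # Last resort, only if the PG's data is accepted as lost:
    # recreate the PG as an empty PG
    ceph osd force-create-pg 1.0 --yes-i-really-mean-it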
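For results 7-12 ("need pg_autoscaler enabled" and an empty autoscale-status), a sketch of the usual checks with the standard Ceph CLI; the pool name hdd-pool is a placeholder, and the module-enable step is a no-op on releases where pg_autoscaler is always on (as result 11 suggests):

    # Check whether the pg_autoscaler manager module is enabled or always-on
    ceph mgr module ls | grep -i autoscaler
    # Enable the module if it is not active
    ceph mgr module enable pg_autoscaler
    # Turn autoscaling on per pool (pool name is a placeholder)
    ceph osd pool set hdd-pool pg_autoscale_mode on
    # Verify: this should now print one row per pool
    ceph osd pool autoscale-status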
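The bridge configuration quoted in result 18 is flattened by the snippet; reassembled as an /etc/network/interfaces stanza it presumably looks like the following, keeping only what the snippet shows (whatever follows the final "..." is unknown):

    auto vmbr0
    iface vmbr0 inet manual
    #iface vmbr0 inet static
        bridge-ports bond1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        ...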