Search results

  1.

    Better OSD balancing

    Hi, you can enable the balancer for that. https://docs.ceph.com/docs/mimic/mgr/balancer/ Regards
  2.

    Proxtalks 2019 in Frankfurt

    Source: https://blog.proxtalks.de/
  3.

    Proxtalks 2019 in Frankfurt

    Did you talk to the people from Stacktrace?
  4.

    Proxtalks 2019 Videos?

    Here is 2018 as an example: https://proxtalks.de/vortraege2018/ It always depends on whether the speaker also wants their slides published.
  5.

    [SOLVED] PVE 5.4-11 + Corosync 3.x: major issues

    Yeah, we did. For one week there have been no more failures.
  6.

    Proxtalks 2019 in Frankfurt

    I'll be there :)
  7.

    [SOLVED] PVE 5.4-11 + Corosync 3.x: major issues

    https://github.com/kronosnet/kronosnet/commit/1338058fa634b08eee7099c0614e8076267501ff And here is the new package http://download.proxmox.com/debian/dists/buster/pvetest/binary-amd64/libknet-dev_1.12-pve2_amd64.deb
  8.

    [SOLVED] PVE 5.4-11 + Corosync 3.x: major issues

    dpkg -l | grep knet: ii libknet1:amd64 1.11-pve2 amd64 kronosnet core switching implementation. A simple restart of corosync and the cluster was quorate.
  9.

    [SOLVED] PVE 5.4-11 + Corosync 3.x: major issues

    Great, first morning without a broken cluster. Votequorum information: Expected votes: 7; Highest expected: 7; Total votes: 7; Quorum: 4; Flags: Quorate
  10.

    Ceph SSD for WAL

    Thanks, but those are very old devices. For the RocksDB you get 10% of the OSD's space by default; it can get very big pretty fast if you work with 4 TB hard disks. :-)
  11.

    Thin provision ceph

    Hi Jasper, Ceph is thin provisioned. You can check this with the following command:
  12.

    Ceph SSD for WAL

    I'm looking for a good SSD to offload the WAL for my OSDs. Which SSDs can you recommend? I have already looked at the following models: SM883, Intel D3 S4610 series. Best regards
  13.

    [SOLVED] PVE 5.4-11 + Corosync 3.x: major issues

    I got a coredump of a signal 11. I'm not allowed to upload lz4 files, so I put the output here: PID: 27355 (corosync) UID: 0 (root) GID: 0 (root) Signal: 11 (SEGV) Timestamp: Sun 2019-09-01 04:12:24 CEST (11h ago) Command Line: /usr/sbin/corosync...
  14.

    [SOLVED] PVE 5.4-11 + Corosync 3.x: major issues

    Hey spirit, there were no other entries in /var/log/kernel.log or dmesg. I have installed systemd-coredump.
  15.

    [SOLVED] PVE 5.4-11 + Corosync 3.x: major issues

    I got 11/SEGV on 2 nodes. Aug 28 09:55:48 node-29 corosync[1684]: [TOTEM ] Retransmit List: f2512 f2513 f2514 Aug 28 10:06:54 node-29 corosync[1684]: [TOTEM ] Retransmit List: f390c Aug 28 18:19:24 node-29 systemd[1]: corosync.service: Main process exited, code=killed, status=11/SEGV Aug 28...
  16.

    [SOLVED] Cluster Fails after one Day - PVE 6.0.4

    Hello, we still have problems with corosync: 192.168.131.20 ii libknet1:amd64 1.10-pve2~test1 amd64 kronosnet core switching implementation 192.168.131.21 ii libknet1:amd64 1.10-pve2~test1 amd64...
  17.

    [SOLVED] Cluster Fails after one Day - PVE 6.0.4

    Here's exactly the same thing. A cluster of 7 nodes has been upgraded from 5.4 to 6.0.2 and every morning after backup the pvecm status is no longer correct. pveversion -v proxmox-ve: 6.0-2 (running kernel: 5.0.15-1-pve) pve-manager: 6.0-4 (running version: 6.0-4/2a719255) pve-kernel-5.0: 6.0-5...
  18.

    Proxmox and Cockpit

    What exactly is the problem?
  19.

    Proxmox and Cockpit

    Perhaps that would be the much more important step right now, instead of uploading zip files here.
  20.

    Proxmox and Cockpit

    Are there any further developments?
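Result 1 points at the Ceph balancer documentation without showing any commands. A minimal sketch of turning the balancer on, assuming a Mimic-era cluster where the mgr balancer module is available (commands as described on the linked documentation page; run against a live cluster):

```shell
# Enable the mgr balancer module, pick a mode, and switch it on.
ceph mgr module enable balancer
ceph balancer mode upmap    # upmap needs luminous+ clients; crush-compat otherwise
ceph balancer on
ceph balancer status        # verify the balancer is active and which mode it uses
```

These commands only make sense on a running cluster with the mgr daemon up; the mode choice depends on the minimum client release.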
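Results 9 and 17 quote votequorum output for a 7-node cluster (Expected votes: 7, Quorum: 4). The quorum figure is simply a strict majority of the expected votes; a minimal sketch of that arithmetic, with the node count taken from the posts above:

```shell
# Quorum is a strict majority of the expected votes: floor(N/2) + 1.
nodes=7
quorum=$(( nodes / 2 + 1 ))
echo "Expected votes: ${nodes}, Quorum: ${quorum}"   # Expected votes: 7, Quorum: 4
```

This is why the cluster in result 9 stays quorate as long as at least 4 of the 7 nodes can see each other.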
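Result 10 quotes a 10% default for the DB share of an OSD. Taking that figure at face value (the actual default depends on the Ceph release and configuration), the per-OSD DB footprint for the 4 TB disks mentioned in the post works out as:

```shell
# Hypothetical sizing: 10% of a 4 TB (~4000 GB) OSD reserved for the DB,
# per the 10% figure quoted in the post above.
osd_gb=4000
db_gb=$(( osd_gb * 10 / 100 ))
echo "DB space per 4 TB OSD: ${db_gb} GB"   # 400 GB
```

At 400 GB per OSD, a handful of OSDs quickly outgrows a small SSD, which is the point the poster is making.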
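Result 11 is cut off before the command it refers to, so the original command is not shown. As a guess at one common way to observe thin provisioning on a cluster with an RBD pool (pool name "rbd" is an assumption here):

```shell
# Per-image PROVISIONED vs USED columns show the thin-provisioning gap:
rbd du --pool rbd
# Pool-level view of stored vs. available space:
ceph df
```

Both commands require a running cluster; the gap between provisioned and used space is what "thin provisioned" means in practice.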
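Results 13 and 14 mention that systemd-coredump is installed and that a corosync SEGV dump was captured. A sketch of retrieving such a dump for analysis with coredumpctl (the PID 27355 is taken from the post; gdb must be installed for the last step):

```shell
# List recorded corosync crashes:
coredumpctl list corosync
# Show metadata for the dump from the post (PID 27355):
coredumpctl info 27355
# Open the core in gdb to get a backtrace of the SEGV:
coredumpctl gdb 27355
```

This avoids uploading the raw lz4 core file at all: a "bt full" from gdb is usually what the developers need.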