Search results

  1.

    kernel: libceph: socket closed (con state OPEN)

    Hi lucavornheder, no latency, 0/0 or 1/1. I disabled KRBD and restarted all VMs. I am not receiving any more of these logs; I will wait this week and report back.
  2.

    kernel: libceph: socket closed (con state OPEN)

    Hi, after changing the HDDs to SSDs I get these messages:
    Apr 24 17:30:53 sr1 kernel: libceph: osd5 (1)192.168.10.201:6812 socket closed (con state OPEN)
    Apr 24 17:30:53 sr1 kernel: libceph: osd5 (1)192.168.10.201:6812 socket closed (con state OPEN)
    Apr 24 17:30:53 sr1 kernel: libceph: osd5...
  3.

    libceph socket closed

    Hi, I have the same problem:
    Apr 24 17:26:10 sr1 kernel: libceph: osd5 (1)192.168.10.201:6812 socket closed (con state OPEN)
    Apr 24 17:26:10 sr1 kernel: libceph: osd2 (1)192.168.10.203:6815 socket closed (con state OPEN)
    Apr 24 17:26:10 sr1 kernel: libceph: osd2 (1)192.168.10.203:6815 socket closed...
  4.

    Ceph 19.2.0 does not distribute PG equally across OSDs

    After changing the target ratio to 1 and setting target_max_misplaced_ratio to 0.01, the PG count went to 512, but the balancer still does not balance.
  5.

    Ceph 19.2.0 does not distribute PG equally across OSDs

    I resolved the balancing by running the following commands; I just don't understand why Ceph messed up the PGs right at installation:
    ceph osd getmap -o om
    osdmaptool om --upmap out.txt --upmap-pool ceph --upmap-max 4 --upmap-deviation 1 --upmap-active
    source out.txt
    Just be careful and run the...
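    The command sequence quoted in this snippet can be laid out as a shell sketch. The pool name `ceph` and the limit values are taken verbatim from the snippet; review the generated file and test on a non-production cluster before applying:

    ```shell
    # Export the cluster's current OSD map to a local file.
    ceph osd getmap -o om

    # Compute upmap entries that even out PG placement for pool "ceph":
    #   --upmap-max 4        propose at most 4 PG remappings per run
    #   --upmap-deviation 1  aim for at most 1 PG of per-OSD deviation
    #   --upmap-active       iterate the way the active balancer module would
    osdmaptool om --upmap out.txt --upmap-pool ceph \
        --upmap-max 4 --upmap-deviation 1 --upmap-active

    # out.txt contains "ceph osd pg-upmap-items ..." commands;
    # inspect it, then apply them by sourcing the file.
    source out.txt
    ```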
  6.

    Ceph 19.2.0 does not distribute PG equally across OSDs

    Hello everyone. This is a new installation, with all nodes on identical hardware and only a single simple Debian VM, yet even so the PG distribution is not even. Is this correct?
  7.

    Connection refused from 2 of 4 nodes on a cluster

    Hello. You can adjust your cluster so that it becomes manageable again: pvecm expected 2. Yes, once the nodes can communicate they will resynchronize; just bring the networks back up for that to happen. Bring them up one at a time.
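    The fix described in this snippet, as a shell sketch. The value 2 reflects the two reachable nodes in this thread; adjust it to the number of nodes you can actually reach:

    ```shell
    # Lower the expected vote count so the reachable nodes regain quorum
    # and the cluster becomes manageable again.
    pvecm expected 2

    # Verify quorum; once the other nodes' links are brought back up
    # (one at a time), they rejoin and resynchronize automatically.
    pvecm status
    ```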
  8.

    [SOLVED] VM filesystems all broken after cluster node crashed

    Hello, I have noticed that when a single node runs on its own, Proxmox restarts itself, but when the node is part of a cluster this does not happen. This is very strange; it happens to me too, and I don't know how to solve it because it occurs very sporadically.
  9.

    After updating ceph 18.2.2 each osds never start

    Is it safe to execute the commands ceph osd rm-pg-upmap-primary and ceph osd rm-pg-upmap-items in a production environment? I mean, won't they remove the virtual machines' data from the cluster?
  10.

    Ceph Slow Ops if one node is rebooting (Proxmox 7.0-14 Ceph 16.2.6)

    Solved!!! Thank you so much, that was the problem. With a routed mesh network it works like before! I don't know what changed between Proxmox 6.4-x and 7.0-x regarding Ceph and broadcast networking, but now it works fine :)
  11.

    Ceph Slow Ops if one node is rebooting (Proxmox 7.0-14 Ceph 16.2.6)

    Hello, I have the same problem. Can you share the solution? What does your network configuration file look like?
  12.

    Broadcast mode does not work with 5.11 kernel

    I have a cluster with 3 hosts. All are directly connected via 6 NICs, 2 on each host. With kernel 5.4 everything is OK, but with kernel 5.11 I get the following problem: if all hosts are up everything works fine, but if I disconnect a cable from any of them, I simply lose quorum...