Recent content by bizzarrone

  1. Ceph down if one node is down

    I have definitively solved the problem. Thank you so much, Jarek!
  2. Ceph down if one node is down

    Thank you, Jarek, for your time and your reply. I really need to read a manual on Ceph. I changed the setting: in about 2 hours I think the recovery will be finished, and then I will try a new test. I was desperate; I even thought about rolling back to version 4. Thanks again. Luca
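    For context, a hedged sketch of the kind of change usually discussed in threads like this (the exact setting is not quoted in the post; the pool name `rbd` and the values below are assumptions, not the author's confirmed fix): on a replicated pool, `size`/`min_size` determine how many replicas must stay available for I/O to continue, so with 4 nodes a common choice is `size 3` / `min_size 2`:

    ```
    # Assumption: the pool is named "rbd"; adjust to the real pool name.
    # size 3 / min_size 2 lets the pool keep serving I/O while one
    # node (one replica) is down.
    ceph osd pool set rbd size 3
    ceph osd pool set rbd min_size 2

    # Watch recovery/rebalance progress until the cluster reports HEALTH_OK
    ceph -w
    ```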
  3. Ceph down if one node is down

    Good morning, I added a new powerful node full of disks and removed the old one, which had only a few OSDs. Nothing changed: as soon as I stop 1 OSD, the Ceph pool freezes. The log: 2019-02-20 09:00:00.000189 mon.bluehub-prox01 mon.0 10.9.9.1:6789/0 82642 : cluster [INF] overall HEALTH_OK 2019-02-20 09:12:05.253331...
  4. Ceph down if one node is down

    Thank you, Alex. How could I fix the situation?
  5. Ceph down if one node is down

    Good morning Jarek, thank you for your advice. Here it is: # begin crush map tunable choose_local_tries 0 tunable choose_local_fallback_tries 0 tunable choose_total_tries 50 tunable chooseleaf_descend_once 1 tunable chooseleaf_vary_r 1 tunable chooseleaf_stable 1 tunable straw_calc_version 1...
  6. Ceph down if one node is down

    I masked systemd-timesyncd and installed ntpd. No clock skew detected; anyway, same issue: no I/O, everything is blocked, and no information in the OSD logs. I think I will roll back to Proxmox version 4 or switch from Ceph to another shared storage.
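    The time-sync switch described above can be sketched as follows on a Debian-based Proxmox node (a minimal sketch; the `ntp` package name assumes Debian 9, as shipped with Proxmox VE 5):

    ```
    # Stop and mask systemd-timesyncd so it cannot be restarted
    systemctl stop systemd-timesyncd
    systemctl mask systemd-timesyncd

    # Install and enable the classic NTP daemon
    apt install ntp
    systemctl enable --now ntp

    # Verify peers and offsets (skew should be in the millisecond range)
    ntpq -p
    ```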
  7. Ceph down if one node is down

    Good morning, today I performed a new test. 2019-02-06 06:26:18.797253 mon.bluehub-prox02 mon.0 10.9.9.2:6789/0 33445 : cluster [WRN] Health check update: 35/1833259 objects misplaced (0.002%) (OBJECT_MISPLACED) 2019-02-06 06:27:20.872436 mon.bluehub-prox02 mon.0 10.9.9.2:6789/0 33449 ...
  8. Ceph down if one node is down

    I just read some other threads. I will try a new reboot and check the OSD logs.
  9. Ceph down if one node is down

    Thank you, Alwin. I am using timesyncd instead of ntpd, together with an internal NTP server for the datacentre. Could switching to ntpd be a solution?
  10. Ceph down if one node is down

    Good evening, after some tests I discovered that if 1 of the 4 nodes goes down, disk I/O stalls. The VMs and CTs are still up, but none of their disks are available for I/O. I have 3 Ceph monitors. When I reboot the node, the Ceph logs show: 2019-01-24 10:28:08.240463 mon.bluehub-prox02 mon.0 10.9.9.2:6789/0...
  11. [SOLVED] Proxmox VE cluster node shows unknown

    Here are the instructions on how to use unicast. Then reboot all 3 nodes. It works now. https://pve.proxmox.com/wiki/Multicast_notes#Use_unicast_.28UDPU.29_instead_of_multicast.2C_if_all_else_fails
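    Following the linked wiki page, the switch to unicast amounts to adding `transport: udpu` to the `totem` section of `/etc/corosync/corosync.conf` and incrementing `config_version` (a sketch assuming corosync 2.x, as shipped with Proxmox VE 5; cluster name and network address below are hypothetical):

    ```
    totem {
      cluster_name: mycluster       # hypothetical cluster name
      config_version: 4             # must be incremented on every change
      interface {
        bindnetaddr: 10.10.10.0     # hypothetical cluster network
        ringnumber: 0
      }
      transport: udpu               # unicast UDP instead of multicast
      version: 2
    }
    ```

    After editing, each node needs corosync restarted (or a reboot, as in the post above) for the new transport to take effect.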
  12. [SOLVED] Proxmox VE cluster node shows unknown

    Good morning again. I discovered that multicast traffic is blocked after 2 minutes... prox03 : multicast, seq=180, size=69 bytes, dist=0, time=0.385ms prox02 : multicast, seq=180, size=69 bytes, dist=0, time=0.421ms prox03 : unicast, seq=181, size=69 bytes, dist=0, time=0.222ms prox02 ...
  13. [SOLVED] Proxmox VE cluster node shows unknown

    Good morning. I just installed Proxmox 5.3, updated only with Debian updates; no pve-no-subscription repository. I created a 3-node cluster. I am using a separate 10 Gb network for the cluster, and the hosts file is correct. Immediately after creating the cluster: Quorum information ------------------ Date...
