I have definitely solved the problem. Thank you very much, Jarek!
Thank you, Jarek, for your time and your reply. I really need to read the Ceph manual.
I changed the setting: in 2 hours I think the redundancy...
I added a new powerful node full of disks and removed the old one, which had only a few OSDs.
As soon as I stop 1 OSD, the Ceph pool...
Thank you Alex,
How could I fix the situation?
Good morning Jarek,
thank you for your advice.
Here it is:
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0...
I masked systemd-timesyncd and installed ntpd.
No skew detected; anyway, same issue..
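For reference, the switch from systemd-timesyncd to ntpd described above can be sketched as the following commands (a sketch, assuming a Debian-based Proxmox node; package and service names are the Debian defaults and may differ on other setups):

```shell
# Sketch, assuming a Debian-based Proxmox node.
# Mask the systemd NTP client so nothing can start it again:
systemctl mask --now systemd-timesyncd

# Install and enable the classic ntpd daemon instead:
apt-get install -y ntp
systemctl enable --now ntp

# Check that peers are reachable and the offset/jitter are small:
ntpq -p
```

With an internal NTP server, its address would go into /etc/ntp.conf as a `server` line before restarting the service.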
NO I/O, everything is blocked.
No info in the OSD logs.. I...
Today I performed a new test.
2019-02-06 06:26:18.797253 mon.bluehub-prox02 mon.0 10.9.9.2:6789/0 33445 : cluster [WRN] Health...
I just read in some other threads.
I will try another reboot and check the OSD logs.
Thank you Alwin,
I am using timesyncd instead of ntpd, with an internal NTP server for the datacentre.
Could it be a solution to switch to ntpd...
After some tests, I discovered that if 1 of the 4 nodes goes down, disk I/O gets stuck.
VMs and CTs are still up, but none of their disks are...
The same happens to me, with the same configuration.
Here are the instructions on how to use unicast.
Then reboot all 3 nodes. It works now....
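As a reference for anyone landing here: in corosync 2.x (the version shipped with Proxmox VE 5.x), unicast is enabled with the `transport: udpu` option in the totem section of /etc/corosync/corosync.conf. A minimal sketch (cluster name and address are placeholders; bump `config_version` when editing):

```
totem {
  cluster_name: mycluster
  config_version: 3
  version: 2
  transport: udpu
  interface {
    ringnumber: 0
    bindnetaddr: 10.9.9.0
  }
}
```

After changing the file on all nodes, corosync has to be restarted on each of them, which matches the "reboot all 3 nodes" step above.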
Good morning again.
I discovered that multicast traffic is blocked after 2 minutes...
prox03 : multicast, seq=180, size=69 bytes, dist=0,...
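The output above looks like `omping`, which is the tool the Proxmox documentation suggests for testing multicast between cluster nodes. A sketch of a long-running test that would expose a ~2-minute multicast cutoff (hostnames are placeholders for the three nodes; the same command is run on every node at roughly the same time):

```shell
# Sketch: run this simultaneously on all three cluster nodes.
# ~10 minutes of multicast probes; on a healthy network the
# multicast loss stays near 0% for the whole run, not just
# the first 2 minutes (a jump after ~2 min usually points at
# IGMP snooping without an active querier on the switch):
omping -c 600 -i 1 -q prox01 prox02 prox03
```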
Just installed Proxmox 5.3, updated only with the Debian updates; no pve-nosubs repository. I created a 3-node cluster.
I am using a...
Same for me.
I have the same error, on version 5.2.
Thank you, Tom, for the reply. Today I did some tests.
It completely stops it, transfers the disk, and turns it on. But why was live migration of a CT possible...
What does "implement new restart migration" mean?
That live migration for LXC works now?
Solved. It was a link aggregation configuration on the server side.
I have a routing problem. None of my VMs and CTs can reach the outside after the latest upgrade.
root@bluehub-prox02:~# pveversion -v