Neither that nor /var/lib/ceph/mgr exists.
It seems that, for some magical reason, the ceph-mgr package doesn't work unless it's actually installed. :-]
The moral of the story, though, is that pveceph should verify that the directory exists (and thus that the package is installed)...
root@bowie:~# pveceph createmgr
creating manager directory '/var/lib/ceph/mgr/ceph-bowie'
creating keys for 'mgr.bowie'
unable to open file '/var/lib/ceph/mgr/ceph-bowie/keyring.tmp.24284' - No such file or directory
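For reference, a rough sketch of the pre-flight check pveceph could do (and that you can do by hand) before creating the manager; the package name and paths below assume the stock Debian/Proxmox ceph-mgr package:
# verify that the ceph-mgr package and its state directory are there
dpkg -s ceph-mgr >/dev/null 2>&1 || echo "ceph-mgr package is not installed"
[ -d /var/lib/ceph/mgr ] || echo "/var/lib/ceph/mgr is missing"
# if either check fails, install the package first and retry
apt-get install ceph-mgr && pveceph createmgr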
Ah, damn, the foreground run puts the output on the console, but the background run doesn't capture it in the logfile. Stooopid!
Thanks!
Seems like the container is missing CAP_MKNOD, and the systemd (be it damned in the fires of hell forever) autodev feature is not used. That's quite a serious bug: no newly...
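As a sketch only (I have not re-verified this on a current install): the raw LXC knob for this is lxc.autodev, and Proxmox lets you append raw lxc keys to the container config, so something along these lines should turn the autodev feature back on for the affected container (the CTID is of course hypothetical):
# /etc/pve/lxc/108.conf - raw LXC option appended at the end
lxc.autodev: 1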
[proxmox5]
A newly created unprivileged LXC container fails to start. The failure is rather ugly, since there is basically no info about it:
Aug 16 00:25:25 elton lxc-start[39248]: lxc-start: tools/lxc_start.c: main: 366 The container failed to start.
Aug 16 00:25:25 elton lxc-start[39248]...
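To get anything useful out of it, the usual trick is a foreground start with debug logging turned up (the CTID here is hypothetical):
# run the container in the foreground with full debug output, also written to a file
lxc-start -n 108 -F -l DEBUG -o /tmp/lxc-108.log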
Apart from that there have been no real problems so far upgrading 4.3 + Ceph Jewel to 5.0 + Luminous (12.1.0). I even let all the stuff running on the server be upgraded in place; no problems observed, and everything was moved away before the reboot.
Why, proudly Freddy and Elton! ;-)
[Someone joked about that along the lines of "they would fire me if I named the servers …", and I was, like, fuck yeah, I can do that. ;-)]
Okay, I said that your advice is obviously bullshit: why would the display(!!) setting matter when piping over the...
This doesn't seem to be the already-mentioned ssh problem (ssh is at 1:7.4p1-10 here anyway):
Jul 11 18:34:38 copying disk images
Jul 11 18:34:38 starting VM 103 on remote node 'bowie'
Jul 11 18:34:40 start remote tunnel
Jul 11 18:34:40 starting online/live migration on...
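(For completeness, the ssh version above comes from a plain package query, something like:)
# show the installed OpenSSH packages and their versions
dpkg -l openssh-client openssh-server | grep ^ii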
You are possibly using machine translation, and it doesn't work well. But try opening a new forum thread, tell me where it is, and I'll try to read it.
As we have already talked about reboots, here's a fresh one, from 4.4-5 to 4.4-13. The reboot is at the end.
Apr 12 14:55:47 srv-01-szd systemd[1]: Stopped Corosync Cluster Engine.
Apr 12 14:55:47 srv-01-szd systemd[1]: Starting Corosync Cluster Engine...
Apr 12 14:55:47 srv-01-szd corosync[26358]...
Last time I tried to follow where it hangs; here's an strace fragment:
10:23:06.155779 socket(PF_LOCAL, SOCK_STREAM, 0) = 3
10:23:06.155798 fcntl(3, F_GETFD) = 0
10:23:06.155811 fcntl(3, F_SETFD, FD_CLOEXEC) = 0
10:23:06.155824 fcntl(3, F_SETFL, O_RDONLY|O_NONBLOCK) = 0...
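For the record, a fragment like that can be captured by attaching strace to whatever is hanging, with wall-clock microsecond timestamps; the process name below is just a placeholder:
# follow forks, print timestamps with microseconds, write the trace to a file
strace -f -tt -p $(pidof pmxcfs) -o /tmp/hang.strace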
1) If I dist-upgrade it and the disk is not magically fast, there may be more than 60 seconds between the start of the upgrade (stopping the daemon) and the end of setup (starting the daemon), which causes a reboot. When I upgrade all 3 nodes at once, corosync may lose quorum for many minutes.
Why? I have asked the same...
But the easiest example for you: upgrading the hosts. If you upgrade them all at once, the pve-cluster (and related packages) upgrade almost always triggers a reboot. Today's wasn't that case, but we have had full cluster reboots due to upgrades way too many times in the past.
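The obvious workaround, sketched very roughly (node names are just the ones from this thread, and the quorum check is a naive grep), is to upgrade strictly one node at a time and refuse to continue when the cluster is not quorate:
# upgrade one node at a time; stop if quorum is lost
for node in freddy elton bowie; do
    ssh root@$node 'apt-get update && apt-get -y dist-upgrade'
    ssh root@$node 'pvecm status' | grep -q 'Quorate:.*Yes' || { echo "no quorum, stopping"; exit 1; }
done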
Oh please, don't. I can detail for you how it can f*ck itself over big time, apart from the bug you fixed around 4.4, but really, the question wasn't why I need it, but how it should be done when I do need it.
[Guess what, today another node rebooted since that rotten systemd didn't...
Also note that CephFS is considered 'experimental' by Ceph, and there are no guarantees that it won't crash or eat your data for breakfast. (At least that was the case last time I checked.)
*update* The Jewel announcement says "This is the first release in which CephFS is declared stable! Several features are disabled by...
What is the preferred and safe method to prevent a node from rebooting due to (observed) loss of quorum? Or in other words: how to temporarily disable fencing on a node?
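For what it's worth, the approach usually suggested (please double-check it against the current Proxmox HA documentation before trusting a node's uptime to it) is to stop the HA services on that node so the watchdog gets released, then start them again when done:
# stop the HA stack on this node so it no longer arms the watchdog
systemctl stop pve-ha-lrm
systemctl stop pve-ha-crm
# ... do the risky maintenance ...
systemctl start pve-ha-crm
systemctl start pve-ha-lrm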