Dear community, I hope you can help me. I have been stuck on the following problem for several days:
I have two servers running Proxmox (pve-manager/5.2-9/4b30e8f9 (running kernel: 4.15.18-5-pve)). I created a cluster on node1 (hermes) and then tried to add node2 (nike) to this cluster. Unfortunately, this always fails.
The two servers are connected via a VLAN, and according to omping, multicast works between them.
hermes (10.8.0.1)
nike (10.8.0.6)
The nodes are also entered in the hosts files with these addresses.
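For reference, the relevant /etc/hosts entries on both nodes look roughly like this (a sketch based on the addresses above; the exact hostname spelling and any FQDN aliases on the real systems are assumptions):

```
# /etc/hosts (sketch, on both hermes and nike)
10.8.0.1    hermes
10.8.0.6    nike
```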
corosync.conf
Code:
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: hermes
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.8.0.1
  }
  node {
    name: nike
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.8.0.6
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: brainClust
  config_version: 7
  interface {
    bindnetaddr: 10.8.0.1
    ringnumber: 0
    member {
      memberaddr: 10.8.0.1
    }
    member {
      memberaddr: 10.8.0.6
    }
  }
  ip_version: ipv4
  secauth: on
  version: 2
}
systemctl status corosync.service
Code:
● corosync.service - Corosync Cluster Engine
Loaded: loaded (/lib/systemd/system/corosync.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Tue 2018-10-02 04:57:18 UTC; 4min 51s ago
Docs: man:corosync
man:corosync.conf
man:corosync_overview
Process: 3544 ExecStart=/usr/sbin/corosync -f $COROSYNC_OPTIONS (code=exited, status=20)
Main PID: 3544 (code=exited, status=20)
CPU: 45ms
Oct 02 04:57:18 nike corosync[3544]: info [WD ] no resources configured.
Oct 02 04:57:18 nike corosync[3544]: notice [SERV ] Service engine loaded: corosync watchdog service [7]
Oct 02 04:57:18 nike corosync[3544]: notice [QUORUM] Using quorum provider corosync_votequorum
Oct 02 04:57:18 nike corosync[3544]: crit [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
Oct 02 04:57:18 nike corosync[3544]: error [SERV ] Service engine 'corosync_quorum' failed to load for reason 'con
Oct 02 04:57:18 nike corosync[3544]: error [MAIN ] Corosync Cluster Engine exiting with status 20 at service.c:356
Oct 02 04:57:18 nike systemd[1]: corosync.service: Main process exited, code=exited, status=20/n/a
Oct 02 04:57:18 nike systemd[1]: Failed to start Corosync Cluster Engine.
Oct 02 04:57:18 nike systemd[1]: corosync.service: Unit entered failed state.
Oct 02 04:57:18 nike systemd[1]: corosync.service: Failed with result 'exit-code'.
pvecm add
Code:
Are you sure you want to continue connecting (yes/no)? yes
Login succeeded.
Request addition of this node
Join request OK, finishing setup locally
stopping pve-cluster service
backup old database to '/var/lib/pve-cluster/backup/config-1538456867.sql.gz'
delete old backup '/var/lib/pve-cluster/backup/config-1538415458.sql.gz'
Job for corosync.service failed because the control process exited with error code.
starting pve-cluster failed: See "systemctl status corosync.service" and "journalctl -xe" for details.
root@nike:/etc/pve#
omping
Code:
root@nike:/etc/pve# omping nike hermes
hermes : waiting for response msg
hermes : waiting for response msg
hermes : waiting for response msg
hermes : waiting for response msg
hermes : joined (S,G) = (*, 232.43.211.234), pinging
hermes : unicast, seq=1, size=69 bytes, dist=0, time=1.068ms
hermes : multicast, seq=1, size=69 bytes, dist=0, time=1.087ms
journalctl -xe
Code:
Oct 02 05:11:36 nike pveproxy[1781]: worker 5763 started
Oct 02 05:11:36 nike pveproxy[1781]: worker 5764 started
Oct 02 05:11:36 nike pveproxy[5763]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_file or key) a