Hello,
I'm trying to set up a Proxmox VE 4.4 cluster using separate interfaces (one for management, one for service, one for HA and one for storage).
When I try to create the cluster with pvecm, I get the following error:
root@proxmox2:~# pvecm create satecprod -bindnet0_addr 10.13.3.102
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/urandom.
Writing corosync key to /etc/corosync/authkey.
Job for corosync.service failed. See 'systemctl status corosync.service' and 'journalctl -xn' for details.
command 'systemctl restart corosync' failed: exit code 1
The command "systemctl status corosync" doesn't show any details about the issue, and "journalctl" shows the following logs:
Feb 14 18:13:03 proxmox2 corosync[5379]: [TOTEM ] The network interface [10.13.3.102] is now up.
Feb 14 18:13:03 proxmox2 corosync[5379]: [SERV ] Service engine loaded: corosync configuration map access [0]
Feb 14 18:13:03 proxmox2 corosync[5379]: [QB ] server name: cmap
Feb 14 18:13:03 proxmox2 corosync[5379]: [SERV ] Service engine loaded: corosync configuration service [1]
Feb 14 18:13:03 proxmox2 corosync[5379]: [QB ] server name: cfg
Feb 14 18:13:03 proxmox2 corosync[5379]: [SERV ] Service engine loaded: corosync cluster closed process group service v1.
Feb 14 18:13:03 proxmox2 corosync[5379]: [QB ] server name: cpg
Feb 14 18:13:03 proxmox2 corosync[5379]: [SERV ] Service engine loaded: corosync profile loading service [4]
Feb 14 18:13:03 proxmox2 corosync[5379]: [QUORUM] Using quorum provider corosync_votequorum
Feb 14 18:13:03 proxmox2 corosync[5379]: [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
Feb 14 18:13:03 proxmox2 corosync[5379]: [SERV ] Service engine 'corosync_quorum' failed to load for reason 'configuratio
Feb 14 18:13:03 proxmox2 corosync[5379]: [MAIN ] Corosync Cluster Engine exiting with status 20 at service.c:356.
Feb 14 18:13:05 proxmox2 pve-ha-crm[2941]: ipcc_send_rec failed: Transport endpoint is not connected
Feb 14 18:13:08 proxmox2 pmxcfs[5358]: [quorum] crit: quorum_initialize failed: 2
Feb 14 18:13:08 proxmox2 pmxcfs[5358]: [confdb] crit: cmap_initialize failed: 2
Feb 14 18:13:08 proxmox2 pmxcfs[5358]: [dcdb] crit: cpg_initialize failed: 2
Feb 14 18:13:08 proxmox2 pmxcfs[5358]: [status] crit: cpg_initialize failed: 2
Feb 14 18:13:09 proxmox2 pvestatd[2922]: ipcc_send_rec failed: Transport endpoint is not connected
Feb 14 18:13:14 proxmox2 pmxcfs[5358]: [quorum] crit: quorum_initialize failed: 2
Feb 14 18:13:14 proxmox2 pmxcfs[5358]: [confdb] crit: cmap_initialize failed: 2
Feb 14 18:13:14 proxmox2 pmxcfs[5358]: [dcdb] crit: cpg_initialize failed: 2
Feb 14 18:13:14 proxmox2 pmxcfs[5358]: [status] crit: cpg_initialize failed: 2
If I remove the cluster configuration and re-create the cluster bound to the management IP, everything works fine:
root@proxmox2:~# pvecm create satecprod
root@proxmox2:~# pvecm status
Quorum information
------------------
Date: Tue Feb 14 17:19:32 2017
Quorum provider: corosync_votequorum
Nodes: 1
Node ID: 0x00000001
Ring ID: 1/4
Quorate: Yes
Votequorum information
----------------------
Expected votes: 1
Highest expected: 1
Total votes: 1
Quorum: 1
Flags: Quorate
Membership information
----------------------
Nodeid Votes Name
0x00000001 1 10.13.1.102 (local)
Can anyone help? I want the cluster communication to go over the HA interface, not the management interface.
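For reference, this is roughly what I'd expect the totem section of /etc/corosync/corosync.conf to end up looking like once the cluster is bound to the HA network. The bindnetaddr value is my guess: I'm not sure whether pvecm's -bindnet0_addr expects the network address of the subnet (10.13.3.0) or the host address (10.13.3.102), and whether that mismatch is what makes votequorum fail to initialize:

```
totem {
  cluster_name: satecprod
  config_version: 1
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    # Assumption: network address of the HA subnet, not the host IP
    bindnetaddr: 10.13.3.0
    ringnumber: 0
  }
}
```

If someone can confirm the correct bindnetaddr form for a dedicated cluster network, that would already help a lot.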