Connection failure. Network error or Proxmox VE services not running?

r4a5a88

Renowned Member
Jun 15, 2016
Hi,
I cannot log in to the Proxmox web interface.
I had this problem once before and tried to solve it like last time by restarting corosync. It did not work.
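The restart attempt was along these lines (the post only names corosync; the exact command is a reconstruction):
Code:
pro-07-dmed:~# systemctl restart corosync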

pvedaemon
Code:
pro-07-dmed:~# systemctl status pvedaemon.service
● pvedaemon.service - PVE API Daemon
   Loaded: loaded (/lib/systemd/system/pvedaemon.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2020-09-21 15:13:09 CEST; 1 day 21h ago
  Process: 19876 ExecStart=/usr/bin/pvedaemon start (code=exited, status=0/SUCCESS)
  Process: 4309 ExecReload=/usr/bin/pvedaemon restart (code=exited, status=0/SUCCESS)
 Main PID: 19947 (pvedaemon)
    Tasks: 4 (limit: 6143)
   Memory: 145.0M
   CGroup: /system.slice/pvedaemon.service
           ├─ 4319 pvedaemon worker
           ├─ 4320 pvedaemon worker
           ├─ 4321 pvedaemon worker
           └─19947 pvedaemon

Sep 22 14:24:41 pro-07-dmed pvedaemon[19949]: worker exit
Sep 22 14:24:41 pro-07-dmed pvedaemon[19953]: worker exit
Sep 22 14:24:41 pro-07-dmed pvedaemon[19952]: worker exit
Sep 22 14:24:41 pro-07-dmed pvedaemon[19947]: worker 19952 finished
Sep 22 14:24:41 pro-07-dmed pvedaemon[19947]: worker 19953 finished
Sep 22 14:24:41 pro-07-dmed pvedaemon[19947]: worker 19949 finished
Sep 23 01:24:55 pro-07-dmed pvedaemon[4321]: authentication failure; rhost=91.5.93.196 user=root@pam msg=error during cfs-locked 'authkey' operation: got lock request timeout
Sep 23 01:24:55 pro-07-dmed pvedaemon[4320]: authentication failure; rhost=129.206.88.223 user=root@pam msg=error during cfs-locked 'authkey' operation: got lock request timeout
Sep 23 10:37:26 pro-07-dmed pvedaemon[4321]: authentication failure; rhost=93.202.118.174 user=root@pam msg=error during cfs-locked 'authkey' operation: got lock request timeout
Sep 23 10:37:26 pro-07-dmed pvedaemon[4319]: authentication failure; rhost=91.5.93.224 user=root@pam msg=error during cfs-locked 'authkey' operation: got lock request timeout
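The repeated cfs-locked 'authkey' timeouts mean pvedaemon cannot take the cluster-wide lock on the authentication key in /etc/pve, so logins fail even though the daemon itself is running. A quick way to see whether the cluster filesystem accepts locked writes at all (a generic check, not something from the original logs):
Code:
pro-07-dmed:~# touch /etc/pve/write-test && rm /etc/pve/write-test && echo pmxcfs writable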
corosync
Code:
pro-07-dmed:~# systemctl status corosync.service
● corosync.service - Corosync Cluster Engine
   Loaded: loaded (/lib/systemd/system/corosync.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2020-09-22 16:34:47 CEST; 20h ago
     Docs: man:corosync
           man:corosync.conf
           man:corosync_overview
 Main PID: 10000 (corosync)
    Tasks: 9 (limit: 6143)
   Memory: 362.3M
   CGroup: /system.slice/corosync.service
           └─10000 /usr/sbin/corosync -f

Sep 23 12:44:38 pro-07-dmed corosync[10000]:   [TOTEM ] A new membership (1.8a077) was formed. Members
Sep 23 12:44:38 pro-07-dmed corosync[10000]:   [TOTEM ] A new membership (1.8a07b) was formed. Members joined: 2 4 7
Sep 23 12:44:38 pro-07-dmed corosync[10000]:   [TOTEM ] Retransmit List: 1
Sep 23 12:44:38 pro-07-dmed corosync[10000]:   [TOTEM ] Retransmit List: 1b
Sep 23 12:44:38 pro-07-dmed corosync[10000]:   [TOTEM ] Retransmit List: 21
Sep 23 12:44:38 pro-07-dmed corosync[10000]:   [QUORUM] Members[7]: 1 2 3 4 5 6 7
Sep 23 12:44:38 pro-07-dmed corosync[10000]:   [MAIN  ] Completed service synchronization, ready to provide service.
Sep 23 12:44:38 pro-07-dmed corosync[10000]:   [TOTEM ] Retransmit List: 29
Sep 23 12:44:38 pro-07-dmed corosync[10000]:   [TOTEM ] Retransmit List: c8 c9 ca cb cd ce d0 d1
Sep 23 12:44:38 pro-07-dmed corosync[10000]:   [TOTEM ] Retransmit List: d6 d
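The growing Retransmit List entries usually point at packet loss or latency on the corosync link rather than at corosync itself. Standard tooling to check the link state (my suggestion, assuming the default knet ring 0):
Code:
pro-07-dmed:~# corosync-cfgtool -s                 # local link status per ring
pro-07-dmed:~# ping -c 20 -s 1400 pro-08-dmed      # rough loss/MTU check on the cluster network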
/etc/hosts
Code:
#127.0.0.1 localhost.localdomain localhost
129.206.229.168 pro-07-dmed.uni-heidelberg.de pro-07-dmed pvelocalhost
129.206.229.187 pro-01-dmed.uni-heidelberg.de pro-01-dmed
129.206.229.185 pro-03-dmed.uni-heidelberg.de pro-03-dmed
129.206.229.164 pro-04-dmed.uni-heidelberg.de pro-04-dmed
129.206.229.178 pro-05-dmed.uni-heidelberg.de pro-05-dmed
129.206.229.173 pro-06-dmed.uni-heidelberg.de pro-06-dmed
129.206.229.186 pro-08-dmed.uni-heidelberg.de pro-08-dmed

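Note that the localhost entry in the first line is commented out. A stock Debian /etc/hosts keeps it active; without it, anything resolving localhost locally can misbehave. The usual first line would be (standard Debian layout, not taken from this host):
Code:
127.0.0.1 localhost.localdomain localhost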
pve-cluster
Code:
pro-07-dmed:~# systemctl status pve-cluster.service
● pve-cluster.service - The Proxmox VE cluster filesystem
   Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2020-09-22 16:29:04 CEST; 20h ago
  Process: 5031 ExecStart=/usr/bin/pmxcfs (code=exited, status=0/SUCCESS)
 Main PID: 5033 (pmxcfs)
    Tasks: 8 (limit: 6143)
   Memory: 56.5M
   CGroup: /system.slice/pve-cluster.service
           └─5033 /usr/bin/pmxcfs

Sep 23 12:44:38 pro-07-dmed pmxcfs[5033]: [dcdb] notice: received sync request (epoch 1/1504/00001BFA)
Sep 23 12:44:38 pro-07-dmed pmxcfs[5033]: [status] notice: received sync request (epoch 1/1504/0000173C)
Sep 23 12:44:38 pro-07-dmed pmxcfs[5033]: [dcdb] crit: ignore sync request from wrong member 3/36492
Sep 23 12:44:38 pro-07-dmed pmxcfs[5033]: [dcdb] notice: received sync request (epoch 3/36492/0000081D)
Sep 23 12:44:38 pro-07-dmed pmxcfs[5033]: [status] crit: ignore sync request from wrong member 3/36492
Sep 23 12:44:38 pro-07-dmed pmxcfs[5033]: [status] notice: received sync request (epoch 3/36492/000007B3)
Sep 23 12:44:38 pro-07-dmed pmxcfs[5033]: [dcdb] crit: ignore sync request from wrong member 2/20462
Sep 23 12:44:38 pro-07-dmed pmxcfs[5033]: [dcdb] notice: received sync request (epoch 2/20462/0000003E)
Sep 23 12:44:38 pro-07-dmed pmxcfs[5033]: [status] crit: ignore sync request from wrong member 2/20462
Sep 23 12:44:38 pro-07-dmed pmxcfs[5033]: [status] notice: received sync request (epoch 2/20462/00000039)
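The crit: ignore sync request from wrong member messages mean this pmxcfs instance still holds a stale view of the membership (the member IDs are nodeid/PID pairs, so they change whenever pmxcfs restarts on a node). The usual remedy, matching the restart shown further below, is to restart the cluster filesystem on the affected node and then the API daemons:
Code:
pro-07-dmed:~# systemctl restart pve-cluster
pro-07-dmed:~# systemctl restart pvedaemon pveproxy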

Code:
pro-07-dmed:~# pvecm status
Cluster information
-------------------
Name:             vm-cluster-02
Config Version:   49
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Wed Sep 23 12:57:53 2020
Quorum provider:  corosync_votequorum
Nodes:            7
Node ID:          0x00000006
Ring ID:          1.8a07b
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   7
Highest expected: 7
Total votes:      7
Quorum:           4
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 129.206.229.185
0x00000002          1 129.206.229.164
0x00000003          1 129.206.229.173
0x00000004          1 129.206.229.187
0x00000005          1 129.206.229.178
0x00000006          1 129.206.229.168 (local)
0x00000007          1 129.206.229.186

It seems to be synced incorrectly.
What can be done?
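Quorum itself looks healthy in the pvecm status output above (7 of 7 votes, quorate), so the stale sync state in pmxcfs is a more likely culprit than corosync membership. To confirm that all nodes agree, one could compare the ring ID everywhere (a diagnostic sketch assuming root SSH between the nodes):
Code:
for h in pro-01-dmed pro-03-dmed pro-04-dmed pro-05-dmed pro-06-dmed pro-07-dmed pro-08-dmed; do
    echo "== $h =="; ssh "$h" 'pvecm status | grep -E "Ring ID|Quorate"'
done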
 
Code:
● pve-cluster.service - The Proxmox VE cluster filesystem
   Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2020-09-25 11:23:22 CEST; 18s ago
  Process: 21654 ExecStart=/usr/bin/pmxcfs (code=exited, status=0/SUCCESS)
 Main PID: 21666 (pmxcfs)
    Tasks: 7 (limit: 6143)
   Memory: 17.4M
   CGroup: /system.slice/pve-cluster.service
           └─21666 /usr/bin/pmxcfs

Sep 25 11:23:21 pro-07-dmed pmxcfs[21654]: [dcdb] notice: wrote new corosync config '/etc/corosync/corosync.conf' (version = 49)
Sep 25 11:23:21 pro-07-dmed pmxcfs[21666]: [status] notice: update cluster info (cluster name  vm-cluster-02, version = 49)
Sep 25 11:23:21 pro-07-dmed pmxcfs[21666]: [status] notice: node has quorum
Sep 25 11:23:21 pro-07-dmed pmxcfs[21666]: [dcdb] notice: members: 1/1504, 2/9848, 3/36492, 4/1484, 5/32030, 6/21666, 7/17087
Sep 25 11:23:21 pro-07-dmed pmxcfs[21666]: [dcdb] notice: starting data syncronisation
Sep 25 11:23:21 pro-07-dmed pmxcfs[21666]: [status] notice: members: 1/1504, 2/9848, 3/36492, 4/1484, 5/32030, 6/21666, 7/17087
Sep 25 11:23:21 pro-07-dmed pmxcfs[21666]: [status] notice: starting data syncronisation
Sep 25 11:23:21 pro-07-dmed pmxcfs[21666]: [dcdb] notice: received sync request (epoch 1/1504/00001F59)
Sep 25 11:23:21 pro-07-dmed pmxcfs[21666]: [status] notice: received sync request (epoch 1/1504/00001AA4)
Sep 25 11:23:22 pro-07-dmed systemd[1]: Started The Proxmox VE cluster filesystem.
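After this restart the sync requests are answered normally and no wrong member messages appear. To watch for regressions (plain journalctl usage, my addition):
Code:
pro-07-dmed:~# journalctl -u pve-cluster -n 20 --no-pager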


Code:
nodelist {
  node {
    name: pro-01-dmed
    nodeid: 4
    quorum_votes: 1
    ring0_addr: pro-01-dmed
  }
  node {
    name: pro-03-dmed
    nodeid: 1
    quorum_votes: 1
    ring0_addr: pro-03-dmed
  }
  node {
    name: pro-04-dmed
    nodeid: 2
    quorum_votes: 1
    ring0_addr: pro-04-dmed
  }
  node {
    name: pro-05-dmed
    nodeid: 5
    quorum_votes: 1
    ring0_addr: pro-05-dmed
  }
  node {
    name: pro-06-dmed
    nodeid: 3
    quorum_votes: 1
    ring0_addr: pro-06-dmed
  }
  node {
    name: pro-07-dmed
    nodeid: 6
    quorum_votes: 1
    ring0_addr: pro-07-dmed
  }
  node {
    name: pro-08-dmed
    nodeid: 7
    quorum_votes: 1
    ring0_addr: 129.206.229.186
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: vm-cluster-02
  config_version: 49
  interface {
    bindnetaddr: 129.206.229.185
    ringnumber: 0
  }
  ip_version: ipv4
  secauth: on
  version: 2
}
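One inconsistency stands out in this config: every node uses its hostname as ring0_addr except pro-08-dmed, which uses a raw IP. Hostname-based ring addresses make corosync depend on each node having a complete and identical /etc/hosts. A resolution check (hedged sketch, to be run locally on each node):
Code:
for h in pro-01-dmed pro-03-dmed pro-04-dmed pro-05-dmed pro-06-dmed pro-07-dmed pro-08-dmed; do
    getent hosts "$h"
done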

At times my cluster tells me that it has no quorum.
I tried to fix it with pvecm expected 1, but I get an error:
Unable to set expected votes: CS_ERR_INVALID_PARAM
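CS_ERR_INVALID_PARAM is what votequorum typically returns when the requested value is lower than the number of votes currently seen; with all 7 nodes up and voting, expected votes cannot be forced down to 1. Lowering expected votes only makes sense when nodes are actually missing, e.g. (hypothetical scenario):
Code:
# if 4 of 7 nodes were down and the remaining 3 should regain quorum:
pro-07-dmed:~# pvecm expected 3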

What could it be?
Can you help me?
 
